[minor] Fix legacy OSS operator test (#23540)

A legacy K8s test fails due to incorrect usage of @ray.method, which only started raising errors after the Ray 1.12.0 branch cut.
This PR removes the use of @ray.method in the test.
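
For reference, a minimal runnable sketch of the corrected pattern shown in the test diff below; the `ray.init()` and `assert` lines are illustrative additions, not part of the actual test. A plain method on a `@ray.remote` actor needs no decorator, and `@ray.method` is only needed when setting per-method options such as `num_returns`.

```python
import ray

@ray.remote
class Test:
    # The bare @ray.method() decorator is simply dropped: it set no options,
    # and decorating a method with no arguments is what started raising errors.
    def method(self):
        return "success"

# Illustrative usage, not part of the test itself.
ray.init()
assert ray.get(Test.remote().method.remote()) == "success"
```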

Some context in #23271 and #23471

In addition, I noticed some of the tests were flaky due to out-of-memory issues. For that reason, I've doubled the memory requests and limits in the legacy operator's example files.

I've also added CPU limits to an example file that was missing them; for consistency with Ray's resource model, it makes the most sense to set CPU limits in K8s configs.
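
As a hedged illustration of that point (not code from this PR): Ray schedules work against the logical CPU count it is configured with, so capping the container at the same CPU count via a limit keeps the Kubernetes view and Ray's view of available CPU consistent.

```python
import ray

# A pod whose container both requests and limits 1 CPU advertises exactly one
# logical CPU to Ray; tasks then reserve against that same integer count.
ray.init(num_cpus=1)

@ray.remote(num_cpus=1)
def work():
    return "scheduled against the single advertised CPU"

print(ray.get(work.remote()))
```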

Finally, I added an extra note to the instructions for running the tests.
Author: Dmitri Gekhtman, 2022-04-18 17:47:42 -07:00 (committed by GitHub)
Commit: fc4ac71deb (parent: ea66192a38)
4 changed files with 12 additions and 10 deletions

File 1 of 4: legacy operator example config (pod memory)

@@ -19,7 +19,7 @@ podTypes:
CPU: 1
# memory is the memory used by this Pod type.
# (Used for both requests and limits.)
-memory: 512Mi
+memory: 1Gi
# GPU is the number of NVIDIA GPUs used by this pod type.
# (Optional, requires GPU nodes with appropriate setup. See https://docs.ray.io/en/master/cluster/kubernetes-gpu.html)
GPU: 0
@@ -49,7 +49,7 @@ podTypes:
maxWorkers: 3
# memory is the memory used by this Pod type.
# (Used for both requests and limits.)
-memory: 512Mi
+memory: 1Gi
# CPU is the number of CPUs used by this pod type.
# (Used for both requests and limits. Must be an integer, as Ray does not support fractional CPUs.)
CPU: 1

File 2 of 4: legacy operator example cluster config (pod resources)

@@ -68,9 +68,10 @@ spec:
resources:
requests:
cpu: 1000m
-memory: 512Mi
+memory: 1Gi
ephemeral-storage: 1Gi
limits:
+cpu: 1000m
# The maximum memory that this pod is allowed to use. The
# limit will be detected by ray and split to use 10% for
# redis, 30% for the shared memory object store, and the
@@ -78,7 +79,7 @@ spec:
# the object store size is not set manually, ray will
# allocate a very large object store in each pod that may
# cause problems for other pods.
-memory: 512Mi
+memory: 1Gi
- name: worker-node
# Minimum number of Ray workers of this Pod type.
minWorkers: 2
@@ -114,9 +115,10 @@ spec:
resources:
requests:
cpu: 1000m
-memory: 512Mi
+memory: 1Gi
ephemeral-storage: 1Gi
limits:
+cpu: 1000m
# The maximum memory that this pod is allowed to use. The
# limit will be detected by ray and split to use 10% for
# redis, 30% for the shared memory object store, and the
@@ -124,7 +126,7 @@ spec:
# the object store size is not set manually, ray will
# allocate a very large object store in each pod that may
# cause problems for other pods.
-memory: 512Mi
+memory: 1Gi
# Commands to start Ray on the head node. You don't need to change this.
# Note dashboard-host is set to 0.0.0.0 so that Kubernetes can port forward.
headStartRayCommands:

File 3 of 4: legacy operator test

@@ -299,7 +299,6 @@ class KubernetesOperatorTest(unittest.TestCase):
@ray.remote
class Test:
-@ray.method()
def method(self):
return "success"

File 4 of 4: instructions for running the K8s release tests

@@ -7,9 +7,10 @@ If you have issues running them, bug the code owner(s) for OSS Kubernetes support
1. Configure kubectl and Helm 3 to access a K8s cluster.
2. `git checkout releases/<release version>`
3. You might have to locally pip install the Ray wheel for the relevant commit (or pip install -e) in a conda env, see Ray client note below.
-4. cd to this directory
-5. `IMAGE=rayproject/ray:<release version> bash k8s_release_tests.sh`
-6. Test outcomes will be reported at the end of the output.
+4. You might have to temporarily delete the file `ray/python/ray/tests/conftest.py`.
+5. cd to this directory
+6. `IMAGE=rayproject/ray:<release version> bash k8s_release_tests.sh`
+7. Test outcomes will be reported at the end of the output.
This runs three tests and does the necessary resource creation/teardown. The tests typically take about 15 minutes to finish.