This PR removes the thread-local core worker instance, a further step in the architectural cleanup toward removing support for multiple workers in one process.
It also removes the unnecessary parameter `workerId` from JNI.
The main issue with this test is that the worker tries to connect to the raylet after the raylet has already exited, and in that case it hangs. This happens before the periodic check starts, so the worker never exits either.
This fix moves the hanging part to after the periodic check has started.
Another issue is the pubsub timeout. The default is 60s, and we need to lower it so that the test can complete within its 60s limit.
The Datasets UX assessment showed that users had difficulty writing UDFs: what the input/output types are, how to write the function, etc.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
Looks like test_logging fails when the syncer is enabled. However, I found the test was badly written, and the failure might be a side effect of the syncer (I am not sure why; maybe the syncer slows down ray.init()?).
See `test_log_monitor_backpressure` in `ray/python/ray/tests/test_logging.py` (line 228 at f75ede1).
Anyway, it seems like the test will fail if there's a delay after the log monitor is started.
Testing this is not trivial. Instead, I made log_monitor unit testable and added full unit tests.
This also adds a better exception message to another flaky test, test_log_rotation. I need more data before actually fixing that issue.
The current implementation of amp does not work if the model being wrapped defines a custom __getstate__ method. It fails at an assertion, as seen here: https://discuss.ray.io/t/ray-train-hangs-for-long-time/6333/7.
This PR fixes amp for this case, and adds tests for it.
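For context, a minimal illustration of the kind of model that triggers the problem (the class below is made up for the example and is not from the Ray Train code or tests):

```python
import torch.nn as nn

class StatefulModel(nn.Module):
    """A model with a custom __getstate__, the case that used to break amp wrapping."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def __getstate__(self):
        # Custom serialization logic; wrapping a model like this with amp
        # enabled previously failed at an assertion inside Ray Train.
        return self.__dict__.copy()
```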
Duplicate of #25247.
Adds a fix for Dask-on-Ray. Previously, for tasks with multiple return values, we implicitly allowed returning a dict with the return index as the key. This was used by Dask-on-Ray, but this is not documented behavior, and we now require task returns to be iterable instead.
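A hedged sketch of the new requirement (the task below is illustrative, not the Dask-on-Ray code path):

```python
import ray

ray.init()

@ray.remote(num_returns=2)
def split(items):
    # Previously a dict keyed by return index, e.g. {0: ..., 1: ...}, was
    # implicitly accepted here; the return value must now be iterable.
    return items[:1], items[1:]

first, rest = split.remote([1, 2, 3])
assert ray.get(first) == [1]
assert ray.get(rest) == [2, 3]
```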
We're currently installing matching wheels on the fly in the python script for Ray client tests. However, we can't reload modules with changed protobuf configurations, and thus can't reload ray completely. Since the `anyscale` package depends on Ray, this effectively prevents us from installing matching wheels within the python script.
There are a few possible solutions to this. First, we could separate the local environment preparation from the test running - this would duplicate some logic and is thus a bit more involved, but should be considered in the future. For now, we adjust the `run_release_tests.sh` shell script to install any passed `--ray-wheels` wheels locally. We only do this on CI instances by default, as these wheels will not be compatible with e.g. MacOS.
Link to successful build: https://buildkite.com/ray-project/release-tests-branch/builds/619#_
Bazel operates by simply running the Python scripts given to it in `py_test`. If the script doesn't invoke pytest on itself in the `if __name__ == "__main__"` snippet, no tests will be run and the script will pass. This has led to several tests (indeed, some are fixed in this PR) that, despite having been written, have never run in CI. This PR adds a lint check that inspects all `py_test` sources for the presence of the `if __name__ == "__main__"` snippet and fails CI if any are detected without it. This check is only enabled for libraries right now (tune, train, air, rllib), but it could be trivially extended to other modules if approved.
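For reference, a sketch of the kind of snippet the check looks for (the exact pytest arguments vary across Ray's test files):

```python
import sys
import pytest

if __name__ == "__main__":
    # Without this block, Bazel runs the file as a plain script: no tests
    # are collected, and the target "passes" without executing anything.
    sys.exit(pytest.main(["-v", __file__]))
```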
In this PR we simulate the case where Serve continues to function even when the GCS is down, and reconfiguration continues to work once the GCS is back.
To make it close to the real-world case, Docker is used for isolation. The test:
- starts a head node (0 CPUs) and a worker node
- exercises the basic functionality and makes sure it works
- kills the GCS and makes sure everything keeps working
- restarts the GCS and makes sure reconfiguration continues to work
These are the basic cases for Serve HA. We'll add more once we have better integrations.
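A rough sketch of the kind of "basic function" the test exercises (the deployment itself is illustrative, not the actual test code):

```python
from ray import serve

@serve.deployment
def hello(request):
    # The test checks that requests like this keep being served while the GCS
    # is down, and that updating the deployment works again once it is back.
    return "hello"

serve.run(hello.bind())
```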
This PR
- enables piping of autoscaler events to the driver's stdout with KubeRay
- cleans up the autoscaler's startup sequence
- removes some redis references
Turns out https://github.com/ray-project/ray/pull/25342 wasn't the root cause of the ray shutdown flakiness. I realized there's another PR that could affect this test suite. Let's try reverting it and see if things get better.
This PR allows the user to override the global default of max_retries for non-actor tasks. It adds an OS environment variable called RAY_task_max_retries, which can be passed to the driver or set with runtime envs. Any future tasks submitted by that worker will default to this value instead of 3, the hard-coded default.
It would be nicer if we could have a standard way of setting these defaults, but I think this is fine as a one-off for now (not a clear need for overriding defaults of other @ray.remote options yet).
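A minimal sketch of the intended usage (assuming the env var is picked up as described above; the task name is illustrative):

```python
import ray

# Tasks submitted by workers started with this runtime env default to 5
# retries instead of the hard-coded 3; the variable can also be exported in
# the driver's environment before it starts.
ray.init(runtime_env={"env_vars": {"RAY_task_max_retries": "5"}})

@ray.remote  # no explicit max_retries, so the overridden default applies
def flaky_task():
    ...
```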
Related issue number
Closes #24854.
Adds support for Python generators, instead of just normal function returns, when a task has multiple return values. This will allow developers to cut down on total memory usage for tasks, as they can free previous return values before allocating the next one on the heap.
The semantics for num_returns are about the same as usual tasks - the function will throw an error if the number of values returned by the generator does not match the number of return values specified by the user. The one difference is that if num_returns=1, the task will throw the usual Python exception that the generator cannot be pickled.
As an example, this feature will allow us to reduce memory usage in Datasets shuffle operations (see #25200 for a prototype).
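A minimal sketch of the intended usage, based on the semantics above:

```python
import ray

ray.init()

@ray.remote(num_returns=3)
def chunked_results():
    for i in range(3):
        # Each yielded value becomes its own object ref, so the previous
        # value can be freed before the next one is created.
        yield i * 2

ref_a, ref_b, ref_c = chunked_results.remote()
assert ray.get([ref_a, ref_b, ref_c]) == [0, 2, 4]
```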
This PR attempts to improve `BatchPredictor` performance with directory checkpoints by avoiding unnecessary filesystem operations.
In order to achieve that, the `Checkpoint` class is changed to always use a canonical path for the temporary directory if the Checkpoint has been created from an object ref. The directory is file-locked to prevent concurrent writes.
Tests have been added.
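A rough sketch of the approach (the function and names below are illustrative, not the actual `Checkpoint` internals):

```python
import os
import tempfile
from filelock import FileLock

def to_canonical_directory(checkpoint_id: str, write_checkpoint_fn) -> str:
    """Materialize a checkpoint into a deterministic temp directory only once."""
    path = os.path.join(tempfile.gettempdir(), f"checkpoint_{checkpoint_id}")
    # The lock guards against concurrent writers materializing the same
    # checkpoint; subsequent callers simply reuse the existing directory.
    with FileLock(path + ".lock"):
        if not os.path.isdir(path):
            os.makedirs(path)
            write_checkpoint_fn(path)
    return path
```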
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Why are these changes needed?
Today, the Ray scheduler always picks a random node if the resource requirement is empty, regardless of the scheduling policy/strategy.
However, for the node affinity scheduling policy, we should not pick a random node but should stick to the node affinity constraints.
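A hedged example of the affected case: a task with an empty resource request and a node affinity constraint (a sketch only; it assumes the public `NodeAffinitySchedulingStrategy` API and uses the driver's own node as the target):

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()

# Pin a zero-resource task to the driver's node. Before this change the empty
# resource request could cause the scheduler to pick a random node anyway;
# now the affinity constraint is respected.
target = ray.get_runtime_context().node_id.hex()

@ray.remote(num_cpus=0)
def where_am_i():
    return ray.get_runtime_context().node_id.hex()

strategy = NodeAffinitySchedulingStrategy(node_id=target, soft=False)
assert ray.get(where_am_i.options(scheduling_strategy=strategy).remote()) == target
```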