In this PR we simulate the case where Serve can continue to function even when GCS is down, and reconfiguration continues to work once GCS is back.
To make it close to the real-world case, Docker is used for isolation:
It starts a head node (0 CPUs) and a worker node.
It exercises the basic functionality and makes sure it's working.
It kills GCS and makes sure everything keeps working.
It restarts GCS and makes sure reconfiguration continues to work.
These are the basic cases for Serve HA. We'll add more once we have better integrations.
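A minimal sketch of that test flow, for illustration only (the helpers are hypothetical stand-ins for the real Docker-based test harness, not actual Ray/Serve test utilities):

```python
import requests

def assert_serving(url: str = "http://localhost:8000/app") -> None:
    # Basic smoke check against a deployed Serve application.
    assert requests.get(url).status_code == 200

def run_ha_scenario(kill_gcs, restart_gcs, reconfigure_deployment) -> None:
    assert_serving()            # 1. basic function works on head + worker node
    kill_gcs()                  # 2. bring GCS down
    assert_serving()            # 3. existing replicas keep serving without GCS
    restart_gcs()               # 4. bring GCS back
    reconfigure_deployment()    # 5. reconfiguration works again once GCS is up
    assert_serving()
```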
This PR
- enables piping of autoscaler events to the driver's stdout with KubeRay
- cleans up the autoscaler's startup sequence
- removes some Redis references
Turns out https://github.com/ray-project/ray/pull/25342 wasn't the root cause of the Ray shutdown flakiness. I realized there's another PR that could affect this test suite. Let's try reverting it and see if things get better.
This PR allows the user to override the global default for `max_retries` for non-actor tasks. It adds an OS environment variable, RAY_task_max_retries, which can be passed to the driver or set via runtime envs. Any future task submitted by that worker will default to this value instead of the hard-coded default of 3.
It would be nicer if we had a standard way of setting these defaults, but I think this is fine as a one-off for now (there's no clear need for overriding defaults of other @ray.remote options yet).
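For illustration, a minimal sketch of how the new default could be set (the env var name comes from the description above; the runtime_env usage and task name are assumptions):

```python
import ray

# Sketch: set the new default for this driver and the workers it starts.
# Tasks that don't pass max_retries explicitly now default to 10 instead of 3.
ray.init(runtime_env={"env_vars": {"RAY_task_max_retries": "10"}})

@ray.remote
def flaky_task():
    return "ok"

flaky_task.remote()                          # retried up to 10 times on failure
flaky_task.options(max_retries=0).remote()   # explicit options still take precedence
```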
Related issue number
Closes #24854.
Adds support for Python generators (instead of just regular function returns) when a task has multiple return values. This will allow developers to cut down on total memory usage for tasks, since they can free previous return values before allocating the next one on the heap.
The semantics for num_returns are about the same as for usual tasks: the function will throw an error if the number of values returned by the generator does not match the number of return values specified by the user. The one difference is that if num_returns=1, the task will throw the usual Python exception indicating that the generator cannot be pickled.
As an example, this feature will allow us to reduce memory usage in Datasets shuffle operations (see #25200 for a prototype).
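A minimal usage sketch of the described semantics (the function name is illustrative):

```python
import ray

ray.init()

@ray.remote(num_returns=3)
def chunked_results():
    for i in range(3):
        # Each yielded value becomes one of the task's return objects; yielding
        # a different number of values than num_returns raises an error.
        yield i * 10

ref_a, ref_b, ref_c = chunked_results.remote()
print(ray.get([ref_a, ref_b, ref_c]))  # [0, 10, 20]
```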
This PR attempts to improve `BatchPredictor` performance with directory checkpoints by avoiding unnecessary filesystem operations.
In order to achieve that, the `Checkpoint` class is changed to always use a canonical path for the temporary directory if the Checkpoint has been created from an object ref. The directory is file-locked to prevent concurrent writes.
Tests have been added.
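A simplified sketch of the canonical-path-plus-filelock pattern (helper names are hypothetical; the real change lives in the `Checkpoint` class):

```python
import hashlib
import os
import tempfile

from filelock import FileLock

def canonical_checkpoint_dir(object_ref_hex: str) -> str:
    # Derive a deterministic directory name from the object ref so repeated
    # conversions of the same checkpoint reuse one path instead of fresh temp dirs.
    digest = hashlib.sha1(object_ref_hex.encode()).hexdigest()[:16]
    return os.path.join(tempfile.gettempdir(), f"checkpoint_{digest}")

def materialize_checkpoint(object_ref_hex: str, write_contents) -> str:
    path = canonical_checkpoint_dir(object_ref_hex)
    # The file lock prevents concurrent writers from racing on the directory.
    with FileLock(path + ".lock"):
        if not os.path.exists(path):
            os.makedirs(path)
            write_contents(path)
    return path
```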
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Why are these changes needed?
Today, the Ray scheduler always picks a random node if the resource requirement is empty, regardless of the scheduling policy/strategy.
However, for the node affinity scheduling policy, we should not pick a random node but instead stick to the node affinity constraints.
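As an illustration of the intended behavior, a zero-resource task with a node affinity constraint (sketch only):

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()
target_node = ray.get_runtime_context().node_id.hex()

# Even with an empty resource requirement (num_cpus=0), the task should land
# on the requested node rather than a randomly chosen one.
@ray.remote(
    num_cpus=0,
    scheduling_strategy=NodeAffinitySchedulingStrategy(node_id=target_node, soft=False),
)
def where_am_i():
    return ray.get_runtime_context().node_id.hex()

assert ray.get(where_am_i.remote()) == target_node
```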
Newly pushed actors will never be used for existing pending submits, so the new worker will not be used to speed up existing tasks. If `_return_actor` is called at the end of `push` instead, the actor is added to `_idle_actors` and immediately used if there are pending submits.
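A simplified illustration of the change (a toy pool, not the actual `ray.util.ActorPool` code):

```python
class MiniActorPool:
    """Toy pool illustrating why push should route through _return_actor."""

    def __init__(self):
        self._idle_actors = []
        self._pending_submits = []

    def submit(self, fn, value):
        if self._idle_actors:
            fn(self._idle_actors.pop(), value)
        else:
            self._pending_submits.append((fn, value))

    def _return_actor(self, actor):
        self._idle_actors.append(actor)
        # An actor returned to the pool immediately picks up pending work.
        if self._pending_submits:
            fn, value = self._pending_submits.pop(0)
            self.submit(fn, value)

    def push(self, actor):
        # Fix: return the actor instead of appending to _idle_actors directly,
        # so it can serve already-queued submits right away.
        self._return_actor(actor)
```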
Move initialization of the `callback.results_preprocessor` property to the `callback.start_training()` method, which is called only once when training starts; currently, initialization is triggered per message.
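A small sketch of the before/after behavior (class and method names are illustrative, not the actual Ray Train callback API):

```python
class ResultsPreprocessor:
    def __init__(self, excluded_keys):
        self.excluded_keys = set(excluded_keys)

    def preprocess(self, results):
        return [{k: v for k, v in r.items() if k not in self.excluded_keys}
                for r in results]

class LoggingCallback:
    def __init__(self, excluded_keys=()):
        self._excluded_keys = excluded_keys
        self.results_preprocessor = None

    def start_training(self, **info):
        # After this change: built exactly once, when training starts.
        self.results_preprocessor = ResultsPreprocessor(self._excluded_keys)

    def handle_result(self, results, **info):
        # Before this change: a new preprocessor was constructed here,
        # on every result message.
        return self.results_preprocessor.preprocess(results)
```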
This fixes `AttributeError: 'list' object has no attribute 'schema'` when read fusion is disabled via flag and pipelines are windowed by bytes.
Broken out from https://github.com/ray-project/ray/pull/25167/files
NOTE: This is not the official API improvement, but it will help with dogfooding the feature before finalizing the output.
This PR improves the output state/metadata of existing state APIs.
Ray sometimes stores errors as the object value in shared memory. These objects have no data, since the error is stored in the metadata field. #25085 describes a bug where these objects fail to spill because the IO worker assumes that the data field must be non-empty. This causes head-of-line blocking for any other objects waiting to spill and can cause the whole job to hang. This PR fixes the issue by spilling these objects anyway.
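An illustrative sketch of the fixed eligibility check (field names are hypothetical, not the actual IO worker code):

```python
def should_spill(data: bytes, metadata: bytes) -> bool:
    # Error objects carry their payload in the metadata field and have an empty
    # data field. Previously an empty data field blocked spilling (and everything
    # queued behind it); now the object spills whenever anything can be persisted.
    return bool(data) or bool(metadata)
```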
Related issue number
Closes #25085.
If you pass a multidimensional input to `TorchPredictor.predict`, AIR raises an error. For more information about the error, see #25194.
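A hypothetical usage sketch of the case being fixed (the import path and acceptance of a raw NumPy batch are assumptions):

```python
import numpy as np
import torch
from ray.train.torch import TorchPredictor

# Toy model whose per-sample input is 2x2, i.e. multidimensional.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4, 1))
predictor = TorchPredictor(model=model)

batch = np.random.rand(8, 2, 2).astype(np.float32)
# Before this fix, a multidimensional batch like this raised an error.
predictions = predictor.predict(batch)
```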
Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
The tests in `test_torch_predictor.py` weren't running in CI. Also, `test_torch_predictor.py::test_init` was failing.
Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
This PR adds timeout and asyncio support for the internal KV. For now this only applies to `gcs_utils` and not Ray clients, since it is purely for Ray-internal usage.
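Purely for illustration, the shape of a timeout-wrapped async KV call (the client object and its method are placeholders, not the actual `gcs_utils` API):

```python
import asyncio

async def kv_get_with_timeout(kv_client, key: bytes, timeout_s: float = 5.0):
    # Bound the internal KV lookup so callers don't hang if GCS is slow or down.
    return await asyncio.wait_for(kv_client.internal_kv_get(key), timeout=timeout_s)
```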
Redo for PR #24698:
This fixes two bugs in data locality:
When a dependent task is already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere.
The locality policy did not take into account spilled locations.
Added C++ unit tests and Python tests for the above.
Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
When loading data from GCS, we treat detached actors the same as normal actors.
But a detached actor lives beyond its job's scope and should be loaded even when the job is finished.
This PR fixes that.
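For context, a standard detached-actor example showing why its metadata must outlive the creating job (names and namespace are illustrative):

```python
import ray

@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n

# Created by job A; survives after job A exits because lifetime="detached".
Counter.options(name="global_counter", lifetime="detached", namespace="demo").remote()

# Later, job B (after job A has finished) must still be able to find it via GCS.
counter = ray.get_actor("global_counter", namespace="demo")
print(ray.get(counter.incr.remote()))  # 1
```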
This fixes two bugs in Datasets push-based shuffle:
The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same `ray.remote` options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a `NodeAffinitySchedulingStrategy` instead (see the sketch after this list).
We were only reporting partial stats for the merge stage.
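An illustrative sketch of the first bug (names and values are placeholders):

```python
caller_remote_args = {"num_cpus": 1, "scheduling_strategy": "SPREAD"}

# Buggy pattern: both stages referenced the same dict, so dropping the caller's
# strategy for the reduce stage would also drop it for the map stage.
map_args = reduce_args = caller_remote_args
# del reduce_args["scheduling_strategy"]   # would also mutate map_args

# Fixed pattern: give each stage its own copy before overriding the reduce strategy.
map_args = dict(caller_remote_args)
reduce_args = dict(caller_remote_args)
reduce_args["scheduling_strategy"] = "DEFAULT"  # stands in for a NodeAffinitySchedulingStrategy
```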
Related issue number
Fixing issue 1 is necessary for performance at large scale (#24480).