When the GCS restarts, it recovers the placement groups and makes sure no resources are leaked. The protocol is now:
- GCS sends the committed PGs to the raylets
- Each raylet checks whether any worker is using resources from a PG that is not in this committed set
- If there is such a worker, the raylet kills it
Right now there is a bug that kills workers using bundle index -1 (i.e., workers not pinned to a specific bundle).
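For illustration, a minimal Python sketch of the raylet-side check described above; the helper names (`committed_pg_ids`, `worker.pg_id`, `worker.bundle_index`) are illustrative, not Ray internals:

```python
def reconcile_workers(workers, committed_pg_ids):
    """Kill workers whose placement group was not recovered by the GCS."""
    for worker in workers:
        if worker.pg_id is None:
            continue  # worker doesn't use placement group resources
        if worker.pg_id not in committed_pg_ids:
            worker.kill()  # PG was lost across the GCS restart; reclaim resources
        # Bug being fixed: workers with bundle_index == -1 (no specific
        # bundle) were killed even though their PG is in the committed set.
```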
Un-reverting https://github.com/ray-project/ray/pull/24934, which caused `test_cluster` to become flaky. This was due to an oversight: we need to update the `HTTPState` logic to account for the controller not necessarily running on the head node.
This will require using the new `SchedulingPolicy` API, but I'm not quite sure of the best way to do it. Context here: https://github.com/ray-project/ray/issues/25090.
Followup PR to https://github.com/ray-project/ray/pull/20273.
- Hides cache logic behind a class.
- Adds "name" field to runtime env plugin class and makes existing conda, pip, working_dir, and py_modules inherit from the plugin class.
Future work will unify the codepath for these "base plugins" with the codepath for third-party plugins; currently these are different, and URI support is missing for third-party plugins.
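For illustration, a rough sketch of the plugin-class shape this implies; the base-class name and module layout are assumptions:

```python
class RuntimeEnvPlugin:
    """Assumed base class; the real fields and methods may differ."""
    name: str = ""  # the new "name" field identifying the plugin

class CondaPlugin(RuntimeEnvPlugin):
    name = "conda"

class PipPlugin(RuntimeEnvPlugin):
    name = "pip"

class WorkingDirPlugin(RuntimeEnvPlugin):
    name = "working_dir"

class PyModulesPlugin(RuntimeEnvPlugin):
    name = "py_modules"
```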
This is a follow-up to the previous PR. (GitHub did some funky things when I did a rebase, so I had to create a new one.)
On Windows systems, the `exec_worker` method may fail when arguments that are file paths contain spaces. This PR addresses that issue.
Unfortunately, `ray.data.read_parquet()` doesn't work with multiple directories, since it uses Arrow's Dataset abstraction under the hood, which doesn't accept multiple directories as a source: https://arrow.apache.org/docs/python/generated/pyarrow.dataset.dataset.html
This PR makes this clear in the docs and, as a drive-by, adds `ray.data.read_parquet_bulk()` to the API docs.
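To make the limitation concrete, a hedged sketch of what does and doesn't work (paths are placeholders):

```python
import ray

# Works: a single directory, or an explicit list of file paths.
ds = ray.data.read_parquet("data/dir1")
ds = ray.data.read_parquet(["data/dir1/a.parquet", "data/dir2/b.parquet"])

# Doesn't work: multiple directories as sources, because Arrow's
# pyarrow.dataset.dataset() doesn't accept a list of directories.
# ds = ray.data.read_parquet(["data/dir1", "data/dir2"])  # raises
```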
Push-based shuffle has some extra metadata for merge and reduce tasks. Previously we were serializing O(n) metadata (where n is the number of reduce tasks) and sending it to every task, which caused a lot of unnecessary Plasma usage on the head node. This PR splits the metadata into a part that can be kept on the driver and a relatively cheap part that is sent to all tasks.
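A minimal sketch of the split, with assumed field names (the actual shuffle metadata is Ray-internal):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DriverShuffleMetadata:
    # O(n) bookkeeping that now stays on the driver
    reduce_task_locations: List[str]

@dataclass
class TaskShuffleMetadata:
    # the relatively cheap part actually serialized and sent to all tasks
    num_merge_tasks: int
    num_reduce_tasks: int
```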
## Related issue number
One of the issues needed for #24480.
Adds a `_transform_arrow` method to Preprocessors that allows them to implement logic for Arrow-based Datasets.
- If only `_transform_arrow` is implemented, the data will be converted to Arrow.
- If only `_transform_pandas` is implemented, the data will be converted to pandas.
- If both are implemented, the method corresponding to the data's format will be picked for best performance.
Here, "implemented" means the method is overridden in a subclass.
This PR only changes the base `Preprocessor` class; implementations for subclasses will come in the future.
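A hedged sketch of the dispatch logic this describes; the method and format names follow the description above, but the exact base-class code is an assumption:

```python
class Preprocessor:
    def _transform_pandas(self, df):
        raise NotImplementedError

    def _transform_arrow(self, table):
        raise NotImplementedError

    def _determine_format(self, data_format: str) -> str:
        has_arrow = type(self)._transform_arrow is not Preprocessor._transform_arrow
        has_pandas = type(self)._transform_pandas is not Preprocessor._transform_pandas
        if has_arrow and has_pandas:
            return data_format  # both implemented: keep the data's native format
        if has_arrow:
            return "arrow"      # only Arrow: convert the data to Arrow
        if has_pandas:
            return "pandas"     # only pandas: convert the data to pandas
        raise NotImplementedError(
            "Subclasses must override _transform_arrow or _transform_pandas")
```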
This is a temporary fix for #25556. When the dtype from the pandas DataFrame is `object`, we set the dtype to `None` and make use of automatic type inference during the conversion.
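A minimal illustration of the fallback (not the exact Ray code):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col": ["a", 1, None]})  # mixed values -> object dtype
dtype = df["col"].dtype
if dtype.type is np.object_:
    dtype = None  # fall back to auto-inference of the type in the conversion
```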
This PR includes / depends on #25709
The two concepts of `Syncer` and `SyncClient` are confusing, as is the current API for passing custom sync functions.
This PR refactors Tune's syncing behavior. The sync client concept is hard-deprecated. Instead, we offer a well-defined `Syncer` API that can be extended to provide custom syncing functionality. However, the default will be to use Ray AIR's file transfer utilities.
New API:
- Users can pass `syncer=CustomSyncer`, which implements the `Syncer` API (see the sketch after this list)
- Otherwise our off-the-shelf syncing is used
- As before, syncing to cloud disables syncing to driver
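A rough sketch of what a custom syncer might look like under this API; the method names and the `SyncConfig` wiring are assumptions for illustration:

```python
from typing import List, Optional

from ray import tune
from ray.tune.syncer import Syncer  # module path is an assumption

class CustomSyncer(Syncer):
    def sync_up(self, local_dir: str, remote_dir: str,
                exclude: Optional[List[str]] = None) -> bool:
        ...  # upload local_dir to remote storage here
        return True

    def sync_down(self, remote_dir: str, local_dir: str,
                  exclude: Optional[List[str]] = None) -> bool:
        ...  # download remote_dir into local_dir here
        return True

    def delete(self, remote_dir: str) -> bool:
        ...  # remove remote_dir from remote storage here
        return True

# Assumed wiring through the existing SyncConfig entry point:
sync_config = tune.SyncConfig(syncer=CustomSyncer())
```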
Changes:
- Sync client is removed
- Syncer interface introduced
- _DefaultSyncer is a wrapper around the URI upload/download API from Ray AIR
- SyncerCallback only uses remote tasks to synchronize data
- Rsync-based syncing is fully deprecated and removed
- Docker- and Kubernetes-specific syncing is fully deprecated and removed
- Testing is improved to use `file://` URIs instead of mock sync clients
## Why are these changes needed?
This refactors the state CLI's interaction with the API server from a hard-coded request workflow to a `SubmissionClient`-based one.
See #24956 for more details.
## Summary
- Created a `StateApiClient` that inherits from `SubmissionClient` and refactored the various listing commands into class methods.
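A hedged sketch of the refactor's shape; the module path, endpoint, and helper method are assumptions, not the exact Ray code:

```python
from ray.dashboard.modules.dashboard_sdk import SubmissionClient  # assumed path

class StateApiClient(SubmissionClient):
    """Talks to the API server instead of issuing hard-coded requests."""

    def list_actors(self):
        # one former hard-coded request workflow, now a client method
        resp = self._do_request("GET", "/api/v0/actors")  # assumed helper/endpoint
        resp.raise_for_status()
        return resp.json()
```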
## Related issue number
Closes #24956, closes #25578
## Why are these changes needed?
When scheduling actors on a placement group, instead of iterating over all nodes in the cluster resources, this optimization directly queries the corresponding nodes by looking at the PG location index.
This reduces the complexity of the algorithm from O(N) to O(1), where N is the number of nodes. The more nodes there are in a large-scale cluster, the greater the benefit.
**This PR only optimizes scheduling by the GCS; I will submit a PR for raylet scheduling later.**
At Ant Group, we have implemented this optimization in the GCS scheduling mode and obtained the following performance test results:
1. The average time to select nodes is reduced from 330us to 30us, a performance improvement of about 11x.
2. The total time to create and execute 12,000 actors dropped from 271s to 225s on average, a 17% reduction.
More detailed information about the solution is in the issue.
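A hedged Python sketch of the location-index idea (the real implementation lives in the C++ GCS scheduler; names are illustrative):

```python
from collections import namedtuple

Bundle = namedtuple("Bundle", ["index", "node_id"])

# pg_id -> {bundle_index: node_id}, maintained when a PG is committed
pg_location_index = {}

def on_pg_committed(pg_id, bundles):
    pg_location_index[pg_id] = {b.index: b.node_id for b in bundles}

def candidate_nodes(pg_id, bundle_index=None):
    locations = pg_location_index[pg_id]
    if bundle_index is not None and bundle_index >= 0:
        return [locations[bundle_index]]  # O(1) direct lookup
    return list(set(locations.values()))  # all nodes hosting this PG
```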
## Related issue number
[Core/PG/Schedule] Optimize the scheduling performance of actors/tasks with PG specified #23881
This is carved out from https://github.com/ray-project/ray/pull/25558.
tl;dr: checkpoint.py currently doesn't support the following:
```
a. from fs to dict checkpoint;
b. drop some marker to dict checkpoint;
c. convert back to fs checkpoint;
d. convert back to dict checkpoint.
Assert that the marker should still be there
```
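Sketched with Ray AIR's `Checkpoint` utilities, the round trip this enables might look like the following (API names are based on the AIR checkpoint module and should be treated as assumptions):

```python
import os
import tempfile

from ray.air.checkpoint import Checkpoint  # module path is an assumption

src = tempfile.mkdtemp()
open(os.path.join(src, "model.bin"), "w").close()

data = Checkpoint.from_directory(src).to_dict()       # a. fs -> dict checkpoint
data["_marker"] = "x"                                 # b. drop a marker into it
fs_path = Checkpoint.from_dict(data).to_directory()   # c. back to fs checkpoint
data2 = Checkpoint.from_directory(fs_path).to_dict()  # d. back to dict checkpoint
assert data2["_marker"] == "x"                        # the marker is still there
```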
It will be easier to develop if we could use a tool to organize / sort imports and not have to move them around by hand.
This PR shows how we could do this with isort (black doesn't quite do this, per https://github.com/psf/black/issues/333).
After this PR lands, everyone will need to update their formatter to include isort if they don't have it already, i.e. `pip install -r ./python/requirements_linters.txt`.
All future file changes will go through isort; the first PR touching a file may be slightly larger, as isort will clean up its imports.
The plan is to land this PR and clean up the rest of the code in parallel by using this PR to format the codebase (so people won't be surprised by the formatter if a file hasn't been touched yet).
Co-authored-by: Clarence Ng <clarence@anyscale.com>
Follows another approach mentioned in #25350.
The scaling config is now converted to the dataclass, letting us use a single function to validate both user-supplied dicts and dataclasses. This PR also fixes the fact that the scaling config wasn't validated in the GBDT trainer, and validates that the allowed keys set in trainers are present in the dataclass.
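A minimal sketch of the dataclass-based validation, assuming hypothetical field and helper names:

```python
from dataclasses import dataclass, fields

@dataclass
class ScalingConfigDataClass:
    num_workers: int = 1
    use_gpu: bool = False

def ensure_scaling_config(config):
    """Accept either a user-supplied dict or the dataclass itself."""
    if isinstance(config, dict):
        return ScalingConfigDataClass(**config)  # raises on unknown keys
    return config

def validate_allowed_keys(allowed_keys):
    """Check that a trainer's allowed keys exist as dataclass fields."""
    valid = {f.name for f in fields(ScalingConfigDataClass)}
    unknown = set(allowed_keys) - valid
    if unknown:
        raise ValueError(f"Keys not present in the dataclass: {unknown}")
```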
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
The multi-node testing utility currently does not support controlling cluster state from within Ray tasks or actors; it requires Ray client. This makes it impossible to properly test e.g. fault tolerance, as the driver has to be executed on the client machine in order to control cluster state. However, this client machine is not part of the Ray cluster and can't schedule tasks on the local node, which is required by some utilities, e.g. checkpoint-to-driver syncing.
This PR introduces a remote control API for the multi node cluster utility that utilizes a Ray queue to communicate with an execution thread. That way we can instruct cluster commands from within the Ray cluster.
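A hedged sketch of the queue-based control flow; the cluster-utility method names are illustrative:

```python
import ray
from ray.util.queue import Queue

command_queue = Queue()  # shared between the cluster and the client machine

@ray.remote
def fault_tolerance_driver(queue):
    # driver logic running *inside* the cluster can request state changes
    queue.put(("kill_node", {"node_index": 1}))

def control_loop(cluster, queue):
    # execution thread on the client machine that owns the cluster utility
    while True:
        cmd, kwargs = queue.get()
        if cmd == "kill_node":
            cluster.kill_node(kwargs["node_index"])  # hypothetical helper
        elif cmd == "stop":
            break
```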
Closes #25588
NVIDIA recently pushed updates to the CUDA image removing support for end-of-life drivers. Therefore, the default AMIs that we previously had for the OSS cluster launcher are not able to run the Ray GPU Docker images.
This PR updates the default AMIs to the latest Deep Learning versions. In general, we should periodically update these AMIs, especially when we add support for new CUDA versions.
I manually confirmed that the nightly Ray docker images work with the new AMI in us-west-2.
This PR implements the basic log APIs. Better APIs (e.g., higher-level commands like `ray logs actors`) will be implemented after the internal API review is done.
# If there's only one match, print the file's content. Otherwise, print all file names that match the glob.
ray logs [glob_filter] --node-id=[head node by default]
Args:
--tail: Tail the last X lines
--follow: Follow new log lines
--actor-id: The actor id
--pid --node-ip: For worker logs
--node-id: The node id of the log
--interval: When --follow is specified, print logs at this interval. (Should we remove it?)
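For illustration, a few invocations this implies (values in angle brackets are placeholders):
ray logs gcs_server.out
ray logs raylet* --node-id=<node-id> --tail=100
ray logs --actor-id=<actor-id> --follow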
Including the Bazel BUILD files in the wheel leads to problems if the Ray wheels are brought in as a dependency from another Bazel workspace, since that workspace will not recurse into the directories of the wheel that contain BUILD files; this can lead to dropped files.
This only happens for macOS wheels; on Linux wheels, the BUILD files were already excluded.
A timeout is only introduced in `GcsClient` because the Ray client doesn't define timeouts well for its API, and it would be a lot of effort to make them work end to end. Built-in components should use `GcsClient` directly.
This PR replaces the old client with `GcsClient` to integrate GCS HA with Ray Serve.