Followup PR to https://github.com/ray-project/ray/pull/20273.
- Hides cache logic behind a class.
- Adds "name" field to runtime env plugin class and makes existing conda, pip, working_dir, and py_modules inherit from the plugin class.
Future work will unify the codepath for these "base plugins" with the codepath for third-party plugins; currently these are different, and URI support is missing for third-party plugins.
This is a follow-up to the previous PR (GitHub did some funky things when I did a rebase, so I had to create a new one).
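A minimal sketch of the base-plugin shape described above; the class and field names here are illustrative, not Ray's exact runtime env plugin interface.

```python
# Illustrative sketch -- names are hypothetical, not Ray's real interface.
from abc import ABC


class RuntimeEnvPlugin(ABC):
    """Base class shared by built-in and (eventually) third-party plugins."""

    # Each plugin identifies itself by the runtime_env field it handles.
    name: str = ""


class PipPlugin(RuntimeEnvPlugin):
    # Handles the "pip" field, e.g. ray.init(runtime_env={"pip": ["requests"]}).
    name = "pip"


class WorkingDirPlugin(RuntimeEnvPlugin):
    # Handles the "working_dir" field, whose contents are URI-cached.
    name = "working_dir"
```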
On Windows systems, the `exec_worker` method may fail when file-path arguments contain spaces. This PR addresses that issue.
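To illustrate the failure mode (the real fix lives in Ray's worker-launch code): building a command line by naive string joining breaks on Windows when a path argument contains spaces, while per-argument quoting keeps the path intact.

```python
# Illustrative only -- the actual fix is in Ray's worker-launch code.
import subprocess

worker_args = [
    "python",
    r"C:\Program Files\ray\workers\default_worker.py",  # path with a space
    "--node-ip-address=127.0.0.1",
]

# Naive joining: Windows would split the path argument at the space.
print(" ".join(worker_args))

# list2cmdline quotes arguments containing spaces, keeping the path intact.
print(subprocess.list2cmdline(worker_args))
```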
Unfortunately, ray.data.read_parquet() doesn't work with multiple directories, since it uses Arrow's Dataset abstraction under the hood, which doesn't accept multiple directories as a source: https://arrow.apache.org/docs/python/generated/pyarrow.dataset.dataset.html
This PR makes this clear in the docs and, as a drive-by, adds ray.data.read_parquet_bulk() to the API docs.
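To illustrate the limitation and one workaround (the paths below are placeholders): a single read_parquet() call cannot take multiple directories, but you can read each directory separately and union the resulting Datasets.

```python
import ray

# NOT supported -- Arrow's Dataset source must be a single directory:
# ds = ray.data.read_parquet(["/data/dir1", "/data/dir2"])

# Workaround: read each directory separately and union the Datasets.
ds1 = ray.data.read_parquet("/data/dir1")
ds2 = ray.data.read_parquet("/data/dir2")
ds = ds1.union(ds2)
```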
Push-based shuffle has some extra metadata for merge and reduce tasks. Previously we serialized O(n) metadata (where n is the number of reduce tasks) and sent it to every task, which caused a lot of unnecessary Plasma usage on the head node. This PR splits the metadata into a part that can be kept on the driver and a relatively cheap part that is sent to all tasks.
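An illustrative sketch of the split (names hypothetical): the O(n) piece stays on the driver, and only a small, constant-size piece is serialized into every task's arguments.

```python
# Names are hypothetical; this only illustrates the metadata split.
from dataclasses import dataclass
from typing import List


@dataclass
class DriverShuffleMetadata:
    # O(n) in the number of reduce tasks; kept on the driver, never shipped.
    reduce_task_node_ids: List[str]


@dataclass
class TaskShuffleMetadata:
    # Small and cheap to serialize; sent to all map/merge/reduce tasks.
    num_mergers: int
    num_reducers: int
```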
## Related issue number
One of the issues needed for #24480.
Adds a `_transform_arrow` method to Preprocessors that allows them to implement logic for Arrow-based Datasets.
- If only `_transform_arrow` is implemented, the data will be converted to Arrow.
- If only `_transform_pandas` is implemented, the data will be converted to pandas.
- If both are implemented, the method matching the data's format is picked for best performance.
A method counts as implemented when it is overridden in a subclass.
This is only a change to the base Preprocessor class; implementations for subclasses will come in the future.
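A minimal sketch of the dispatch, under the assumption that "implemented" is detected by comparing against the base class's method objects; the real Preprocessor logic may differ in details.

```python
# Sketch only; assumes override detection via method-object comparison.
import pandas as pd
import pyarrow


class Preprocessor:
    def _transform(self, batch):
        has_arrow = type(self)._transform_arrow is not Preprocessor._transform_arrow
        has_pandas = type(self)._transform_pandas is not Preprocessor._transform_pandas

        if isinstance(batch, pyarrow.Table):
            if has_arrow:  # format already matches; no conversion needed
                return self._transform_arrow(batch)
            if has_pandas:
                return self._transform_pandas(batch.to_pandas())
        else:  # assume a pandas DataFrame
            if has_pandas:
                return self._transform_pandas(batch)
            if has_arrow:
                return self._transform_arrow(pyarrow.Table.from_pandas(batch))
        raise NotImplementedError("Override _transform_pandas or _transform_arrow.")

    def _transform_pandas(self, df: pd.DataFrame) -> pd.DataFrame:
        raise NotImplementedError

    def _transform_arrow(self, table: pyarrow.Table) -> pyarrow.Table:
        raise NotImplementedError
```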
This is a temporary fix for #25556. When the dtype reported by the pandas DataFrame is `object`, we set the dtype to `None` and make use of automatic type inference during the conversion.
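A sketch of the fix pattern (variable names hypothetical): when pandas reports an `object` dtype, pass `None` instead so the element type is auto-inferred during the conversion.

```python
# Sketch of the fix pattern; variable names are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [[1, 2], [3, 4]]})  # nested values -> dtype "object"

dtype = df["a"].dtype
if dtype == np.object_:
    dtype = None  # let the conversion infer the real element type

arr = np.array(df["a"].tolist(), dtype=dtype)
assert arr.dtype != np.object_  # the element type was inferred
```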
## Why are these changes needed?
I wasn't able to build Ray following these instructions: https://docs.ray.io/en/latest/ray-contribute/development.html#building-ray

It fails when running

```
pip install -e . --verbose  # Add --user if you see a permission denied error.
```
I have a local installation of protobuf via Homebrew, and Bazel uses its headers against the protobuf that it pulls into its sandbox. This is a known Bazel issue; one of the workarounds is to block the local directory so the build doesn't accidentally pick up the headers if someone happens to have protobuf installed locally.
Manually tested by running

```
bazel build --verbose_failures --sandbox_debug -- //:ray_pkg
```
Without the fix, I would get an error similar to https://gist.github.com/clarng/ff7b7bf7672802d1e3e07b4b509e4fc8
With the fix, the build succeeds.
This PR includes / depends on #25709
The two concepts of Syncer and SyncClient are confusing, as is the current API for passing custom sync functions.
This PR refactors Tune's syncing behavior. The sync client concept is hard-deprecated. Instead, we offer a well-defined `Syncer` API that can be extended to provide custom syncing functionality. By default, Ray AIR's file transfer utilities are used.
New API:
- Users can pass `syncer=CustomSyncer`, which implements the `Syncer` API (see the sketch after the change list below)
- Otherwise our off-the-shelf syncing is used
- As before, syncing to cloud disables syncing to driver
Changes:
- Sync client is removed
- Syncer interface introduced
- _DefaultSyncer is a wrapper around the URI upload/download API from Ray AIR
- SyncerCallback only uses remote tasks to synchronize data
- Rsync-based syncing is fully deprecated and removed
- Docker- and Kubernetes-specific syncing is fully deprecated and removed
- Testing is improved to use `file://` URIs instead of mock sync clients
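Below is a minimal sketch of the new API based on the description above; the exact method signatures are assumptions, so consult the `Syncer` documentation for the authoritative interface. `my_trainable` and the bucket path are placeholders.

```python
# Sketch only; signatures are assumptions based on the description above.
from ray import tune
from ray.tune.syncer import SyncConfig, Syncer


class CustomSyncer(Syncer):
    def sync_up(self, local_dir: str, remote_dir: str, exclude=None) -> bool:
        # Upload local_dir to remote storage with your own tooling here.
        return True

    def sync_down(self, remote_dir: str, local_dir: str, exclude=None) -> bool:
        # Download the remote directory back into local_dir here.
        return True

    def delete(self, remote_dir: str) -> bool:
        # Remove the remote directory here.
        return True


def my_trainable(config):  # placeholder trainable for illustration
    tune.report(score=1)


tune.run(
    my_trainable,
    sync_config=SyncConfig(
        upload_dir="s3://my-bucket/experiments",  # hypothetical bucket
        syncer=CustomSyncer(),
    ),
)
```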
## Why are these changes needed?
This refactors the state CLI's interaction with the API server from a hard-coded request workflow to one based on `SubmissionClient`.
See #24956 for more details.
## Summary
- Created a `StateApiClient` that inherits from `SubmissionClient` and refactored various listing commands into class methods.
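An illustrative sketch of the shape of this refactor; the endpoint path and the inherited request helper used here are assumptions, not the exact Ray internals.

```python
# Sketch only; endpoint and helper names are assumptions.
from ray.dashboard.modules.dashboard_sdk import SubmissionClient


class StateApiClient(SubmissionClient):
    """Client for the Ray state API, reusing SubmissionClient's HTTP plumbing."""

    def list_actors(self):
        # Each formerly hand-rolled CLI request becomes a client method.
        resp = self._do_request("GET", "/api/v0/actors")
        resp.raise_for_status()
        return resp.json()
```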
## Related issue number
Closes #24956, closes #25578
Adds notes explaining that Ray's support for Azure, Aliyun, and SLURM is community-maintained.
Rephrases the mention of K8s support in the intro.
This PR replaces https://github.com/ray-project/ray/pull/25504.
## Why are these changes needed?
When scheduling actors on a placement group, instead of iterating over all nodes in the cluster resources, this optimization directly queries the corresponding nodes by looking at the placement group's location index.
This reduces the complexity of the algorithm from O(N) to O(1), where N is the number of nodes. In particular, the more nodes a large-scale cluster has, the greater the benefit.
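An illustrative sketch of the idea (the data structures here are hypothetical): instead of scanning every node for a feasible placement, keep an index from placement group ID to the nodes its bundles were committed to.

```python
# Sketch only; the data structures are hypothetical.
from collections import defaultdict
from typing import Dict

# Maintained when bundles are committed: pg_id -> {bundle_index: node_id}.
pg_location_index: Dict[str, Dict[int, str]] = defaultdict(dict)
pg_location_index["pg-1"] = {0: "node-A", 1: "node-B"}


def select_node(pg_id: str, bundle_index: int) -> str:
    # Two O(1) dictionary lookups replace the old O(N) scan over all nodes.
    return pg_location_index[pg_id][bundle_index]


assert select_node("pg-1", 1) == "node-B"
```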
**This PR only optimizes scheduling via the GCS; I will submit a PR for raylet scheduling later.**
At Ant Group, we have implemented this optimization in the GCS scheduling mode and obtained the following performance test results:
1. The average node selection time drops from 330 us to 30 us, an improvement of about 11x.
2. The total time to create and run 12,000 actors drops from about 271 s to about 225 s on average, a 17% reduction.
More detailed solution information is in the issue.
## Related issue number
[Core/PG/Schedule] Optimize the scheduling performance of actors/tasks with PG specified (#23881)
This is carved out from https://github.com/ray-project/ray/pull/25558.
tl;dr: checkpoint.py currently doesn't support the following round trip:
```
a. from fs to dict checkpoint;
b. drop some marker to dict checkpoint;
c. convert back to fs checkpoint;
d. convert back to dict checkpoint.
Assert that the marker should still be there
```
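A sketch of that round trip, assuming the Ray AIR Checkpoint API (`from_directory` / `to_dict` / `from_dict` / `to_directory`); the assert at the end is exactly what didn't hold before this fix.

```python
# Sketch of the round trip; the assert is what this PR makes possible.
import os
import tempfile

from ray.air.checkpoint import Checkpoint

with tempfile.TemporaryDirectory() as tmpdir:
    # a. Start from a filesystem checkpoint and convert to a dict checkpoint.
    with open(os.path.join(tmpdir, "model.data"), "w") as f:
        f.write("weights")
    data = Checkpoint.from_directory(tmpdir).to_dict()

    # b. Drop a marker into the dict checkpoint.
    data["_marker"] = "still-here"

    # c. Convert back to a filesystem checkpoint.
    fs_path = Checkpoint.from_dict(data).to_directory()

    # d. Convert back to a dict checkpoint; the marker should still be there.
    restored = Checkpoint.from_directory(fs_path).to_dict()
    assert restored["_marker"] == "still-here"
```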
It will be easier to develop if we can use a tool to organize and sort imports rather than moving them around by hand.
This PR shows how we can do this with isort (black doesn't quite do this, per https://github.com/psf/black/issues/333).
After this PR lands, everyone will need to update their formatter to include isort if they don't have it already, i.e.

```
pip install -r ./python/requirements_linters.txt
```
All future file changes will go through isort and may introduce a slightly larger PR the first time as it will clean up the imports.
The plan is to land this PR and clean up the rest of the code in parallel by using this PR to format the codebase (so people won't get surprised by the formatter if a file hasn't been touched yet).
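A tiny demonstration of what isort does, using its Python API (isort >= 5); in practice it runs as a file formatter alongside black rather than like this.

```python
import isort

messy = "import sys\nimport os\n\nfrom ray import tune\nimport ray\n"
print(isort.code(messy))
# Output groups standard-library imports first, then third-party imports,
# with each group alphabetized.
```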
Co-authored-by: Clarence Ng <clarence@anyscale.com>