The content of the two docs was switched.
The Unnecessary Ray Get images were correctly located in `unnecessary-ray-get.rst`, which made the mix-up noticeable beyond just the URL.
This allows correct logging of tuple entries in configs, e.g. `PolicySpec` (which is a namedtuple) from the `multiagent.policies` key. Without this, the whole `PolicySpec` is serialized as a string, which makes it impossible to filter runs by a specific key from the tuple.
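As a rough illustration (the helper and flattening scheme below are hypothetical, not the exact implementation), the intended behavior is to expand namedtuple fields into individual keys:

```
from collections import namedtuple

# Hypothetical sketch: flatten namedtuple config entries into per-field
# keys so each field can be filtered on, instead of one stringified value.
PolicySpec = namedtuple("PolicySpec", ["policy_class", "config"])

def flatten_entry(key, value):
    if isinstance(value, tuple) and hasattr(value, "_fields"):  # namedtuple
        out = {}
        for field in value._fields:
            out.update(flatten_entry(f"{key}/{field}", getattr(value, field)))
        return out
    return {key: value}

print(flatten_entry("multiagent/policies/pol1", PolicySpec("PPO", {"lr": 1e-3})))
# {'multiagent/policies/pol1/policy_class': 'PPO',
#  'multiagent/policies/pol1/config': {'lr': 0.001}}
```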
I’d like to propose a small change to the API. Currently we return a dict of ID -> value mappings from the list APIs, but I am thinking of changing this to a list, because sorting becomes ineffective if we return a dictionary. Using a list keeps the order (which is important for determinism).
Also, for some APIs, each entry doesn’t have a unique ID. For example, list objects can have duplicate object IDs among its entries, which doesn’t work with a dict return type (e.g., there can be more than one entry for an object ID if the object is locally referenced and borrowed by a task, or pinned in memory).
Also, users can easily build a dict index on their own if necessary.
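For illustration (fabricated sample entries, not the real output schema), a dict index over the list return is a two-line loop:

```
# The list return type preserves sort order and allows duplicate IDs;
# callers can build their own index when they need keyed lookup.
entries = [  # fabricated sample output of a list API
    {"object_id": "a", "reference_type": "LOCAL_REFERENCE"},
    {"object_id": "a", "reference_type": "PINNED_IN_MEMORY"},
    {"object_id": "b", "reference_type": "USED_BY_PENDING_TASK"},
]
index = {}
for entry in entries:
    index.setdefault(entry["object_id"], []).append(entry)
```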
It is often a bit challenging to get the full documentation to build (there are external packages that can make this difficult). This changes the instructions to treat warnings as warnings and not errors, which should improve the workflow.
`make develop` is the same as `make html` except it doesn't treat warnings as errors.
The error message in #25638 indicates we should use protobuf>3.19.0 to generate code so that it works with Python protobuf >= 4.21.1. Try generating wheels to see if this works.
In this PR, we integrate GCS fault tolerance with Ray Serve.
- Add a 5s timeout for KV operations.
Rollback still needs to be added to all methods; that will come in a follow-up.
Basic testing for the KV timeout in serve and deploy is added.
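A minimal sketch of the timeout idea (the wrapper and constant names here are hypothetical, not the actual implementation):

```
import asyncio

KV_TIMEOUT_S = 5  # the 5s KV timeout added in this PR

async def kv_get_with_timeout(kv, key: bytes) -> bytes:
    # Bound the KV call so a GCS failover surfaces as a timeout error
    # that Serve can handle, instead of hanging indefinitely.
    return await asyncio.wait_for(kv.get(key), timeout=KV_TIMEOUT_S)
```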
The reduce stage in push-based shuffle fails to complete at 100k or more output partitions. This is likely because of driver or raylet load from having too many tasks in flight at once.
We can fix this from Ray core too, but for now, this PR adds pipelining for the reduce stage to limit the total number of reduce tasks in flight at the same time. This is currently set to 2 * the available parallelism in the cluster. We have to pick which reduce tasks to submit carefully, since these are pinned to specific nodes. The PR does this by assigning tasks round-robin according to the corresponding merge task (which are spread throughout the cluster).
In addition, this PR refactors the map, merge, and reduce stages to use a common pipelined iterator pattern, since they all have a similar pattern of submitting a round of tasks at a time, then waiting for a previous round to finish before submitting more.
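A rough sketch of that shared pattern (the helper below is illustrative, not the actual implementation):

```
import ray

def submit_pipelined(task, all_args, max_in_flight):
    # Keep at most `max_in_flight` tasks running: wait for one task to
    # finish before submitting the next one.
    in_flight, finished = [], []
    for args in all_args:
        if len(in_flight) >= max_in_flight:
            ready, in_flight = ray.wait(in_flight, num_returns=1)
            finished.extend(ready)
        in_flight.append(task.remote(*args))
    finished.extend(in_flight)
    return finished
```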
Closes #25412.
Consolidate tune/train report/checkpoint functionality by working with a unified Session interface.
The goal of this PR is to establish a solid Session and Session.report path.
To reduce merge conflicts (other folks are doing the whole package renaming) and to control the scope of this PR, I have intentionally left out some migrations. More PRs to follow. Feel free to comment on the ideal final state.
To give an idea of the final directory structure, this is for 2-worker DP training:
```
├── TensorflowTrainer_ce44d_00000_0_2022-06-15_14-40-42
│   ├── checkpoint_000000
│   │   ├── _current_checkpoint_id.meta.pkl
│   │   ├── _preprocessor.meta.pkl
│   │   ├── _timestamp.meta.pkl
│   │   ├── assets
│   │   ├── keras_metadata.pb
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index
│   ├── events.out.tfevents.1655329242.xw
│   ├── params.json
│   ├── params.pkl
│   ├── progress.csv
│   ├── rank_0
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── rank_1
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   └── result.json
├── basic-variant-state-2022-06-15_14-40-42.json
├── experiment_state-2022-06-15_14-40-42.json
├── trainable.pkl
└── tuner.pkl
```
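As a rough sketch of the unified report path this PR works toward (assuming the `ray.air` session API; names may differ in the final state):

```
from ray.air import session

def train_loop_per_worker(config):
    for epoch in range(config["num_epochs"]):
        loss = 1.0 / (epoch + 1)  # placeholder for a real training step
        # One path for both Train and Tune: metrics (and optionally a
        # checkpoint) are reported through the Session.
        session.report({"loss": loss, "epoch": epoch})
```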
Update:
1. Updated a few classes to be backward compatible while the legacy Ray Train deprecation is ongoing.
2. Marked all the places from 1 with `# TODO(xwjiang): Legacy Ray Train trainer clean up!` so we can easily clean them up once Antoni's work has landed.
3. All CI and release tests are passing.
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Simplify the isort filters and move them into the isort cfg file.
With this change, isort will only apply to files in whitelisted directories (isort only supports a blacklist, so we implement the whitelist in terms of it). This is much simpler than building our own whitelist logic, since our formatter runs multiple codepaths depending on whether it is formatting a single file, a PR, or the entire repo in CI.
This uses the generators introduced in #25247 to reduce memory usage during the merge stage in push-based shuffle. These tasks merge groups of map outputs, so it fits a generator pattern where we want to return merged outputs one at a time. Verified that this allows for merging more/larger objects at a time than the current list-based version.
I also tried this for the map stage in random_shuffle, but it didn't seem to make a difference in memory usage for Arrow blocks. I think this is probably because Arrow is already doing some zero-copy optimizations when selecting rows?
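A simplified sketch of the pattern with toy blocks (assuming the static `num_returns` generator support introduced in #25247):

```
import ray

ray.init()

@ray.remote(num_returns=2)
def merge(outputs_a, outputs_b):
    # Yield each merged block so it can be stored and freed before the
    # next one is produced, instead of returning them all in one list.
    for block_a, block_b in zip(outputs_a, outputs_b):
        yield block_a + block_b  # stand-in for the real block merge

ref_0, ref_1 = merge.remote([1, 2], [10, 20])
assert ray.get(ref_0) == 11 and ray.get(ref_1) == 22
```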
Also adds a new line to Dataset stats for memory usage. Unfortunately it's hard to get an accurate reading of physical memory usage in Python and this value will probably be an overestimate in a lot of cases. I didn't see a difference before and after this PR for the merge stage, for example. Arguably this field should be opt-in. For 100MB partitions, for example:
```
Substage 0 read->random_shuffle_map: 10/10 blocks executed
* Remote wall time: 1.44s min, 3.32s max, 2.57s mean, 25.74s total
* Remote cpu time: 1.42s min, 2.53s max, 2.03s mean, 20.25s total
* Worker memory usage (MB): 462 min, 864 max, 552 mean
* Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
* Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
* Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used
Substage 1 random_shuffle_reduce: 10/10 blocks executed
* Remote wall time: 1.47s min, 2.94s max, 2.17s mean, 21.69s total
* Remote cpu time: 1.45s min, 1.88s max, 1.71s mean, 17.09s total
* Worker memory usage (MB): 462 min, 1047 max, 831 mean
* Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
* Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
* Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used
```
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Followup from #24622. This is another step towards pluggability for runtime_env. Previously some plugin classes had `get_uri` which returned a single URI, while others had `get_uris` which returned a list. This PR makes all plugins use `get_uris`, which simplifies the code overall.
Most of the lines in the diff just come from the new `format.sh` which sorts the imports.
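A hedged sketch of the unified interface (class and field names are illustrative, not the exact plugin API):

```
from typing import List

class RuntimeEnvPlugin:
    # Every plugin now implements get_uris and returns a list (possibly
    # empty), instead of some plugins returning a single URI from get_uri.
    def get_uris(self, runtime_env: dict) -> List[str]:
        return []

class WorkingDirPlugin(RuntimeEnvPlugin):
    def get_uris(self, runtime_env: dict) -> List[str]:
        uri = runtime_env.get("working_dir_uri")  # hypothetical key
        return [uri] if uri else []
```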
Fixes a bug in push-based shuffle in computing the merge task <> reduce task mapping when the number of reduce tasks does not divide evenly by the number of merge tasks. Previously, with N reduce tasks per merge task (rounded down), we would do:
[N + 1, N + 1, ..., N + 1, all leftover tasks]
which could leave a negative number of reduce tasks in the last merge partition.
This PR changes it to:
[N + 1, N + 1, ..., N + 1, N, N, N, ...]
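In code, the corrected assignment is equivalent to this (illustrative helper, not the actual function name):

```
def reduce_tasks_per_merge(num_reduce: int, num_merge: int) -> list:
    # Give the first `remainder` merge partitions one extra reduce task
    # instead of dumping all the leftovers into the last partition.
    base, remainder = divmod(num_reduce, num_merge)
    return [base + 1] * remainder + [base] * (num_merge - remainder)

assert reduce_tasks_per_merge(10, 4) == [3, 3, 2, 2]
assert sum(reduce_tasks_per_merge(7, 3)) == 7
```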
Closes #25863.
This PR ensures that GCS keeps the IP and PID information about a job so that it can be used to find the job's logs in the dashboard after the job terminates.
@alanwguo will handle any dashboard work in a separate PR.
Co-authored-by: Alex Wu <alex@anyscale.com>
Users' intuition might lead them to fill out `excludes` with absolute paths, e.g. `/Users/working_dir/subdir/`. However, the `excludes` field uses `gitignore` syntax. In `gitignore` syntax, paths that start with `/` are interpreted relative to the directory where the `.gitignore` file resides, which in our case is the `working_dir` directory (morally speaking, since there's no actual `.gitignore` file). So the correct thing to put in `excludes` would be `/subdir/`. As long as we support `gitignore` syntax, we should have a note about this in the docs. This PR adds the note.
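For example (paths are illustrative):

```
runtime_env = {
    "working_dir": "/Users/working_dir",
    # Correct: a leading "/" is rooted at working_dir, gitignore-style.
    "excludes": ["/subdir/"],
    # Wrong: an absolute filesystem path will not match anything.
    # "excludes": ["/Users/working_dir/subdir/"],
}
```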
The error message suggests:
> Wait timeout after 30 seconds for key(s): 0. You may want to increase the timeout via HOROVOD_GLOO_TIMEOUT_SECONDS
Bumped it up to 120 seconds.
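For reference, the equivalent setting (a sketch; where exactly the release test sets it is not shown here):

```
import os

# Raise the Gloo rendezvous timeout from the default 30s to 120s.
os.environ["HOROVOD_GLOO_TIMEOUT_SECONDS"] = "120"
```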
Tests run successfully: https://buildkite.com/ray-project/release-tests-pr/builds/6906
pyarrow.json.ReadOptions are not picklable until Arrow 8.0.0, which we do not yet support. This PR adds a custom serializer for this type and ensures that said serializer is registered before each Ray task submission.
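A hedged sketch of the approach (the choice of attributes to capture is an assumption; `ray.util.register_serializer` is Ray's public custom-serializer hook):

```
import pyarrow.json as pajson
import ray.util

# Serialize ReadOptions via its constructor arguments, since the object
# itself is not picklable before Arrow 8.0.0.
def _serialize(opts: pajson.ReadOptions) -> dict:
    return {"use_threads": opts.use_threads, "block_size": opts.block_size}

def _deserialize(kwargs: dict) -> pajson.ReadOptions:
    return pajson.ReadOptions(**kwargs)

ray.util.register_serializer(
    pajson.ReadOptions, serializer=_serialize, deserializer=_deserialize
)
```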
By setting the [Sphinx `default_role`](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-default_role) to [`py:obj`](https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#role-py-obj), we can concisely cross-reference other Python APIs (classes or functions) in API docstrings while still maintaining the editor/IDE/terminal readability of the docstrings.
Before this PR, when referencing a class or a function, the relevant role specification is required: :class:`Dataset`, :meth:`Dataset.map`, :func:`.read_parquet`.
After this PR, the raw cross reference will work in most cases: `Dataset`, `Dataset.map`, `read_parquet`.
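Concretely, the change amounts to one line in the Sphinx config (sketch; the conf.py location is assumed):

```
# In doc/source/conf.py: bare `...` references now resolve like :py:obj:`...`.
default_role = "py:obj"
```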
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [x] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(