Commit graph

13079 commits

Author SHA1 Message Date
Clark Zinzow
50d47486f2
[Datasets] Add file-extension-based path filter for file-based datasources. (#24822)
This PR adds a format-based file extension path filter for file-based datasources, and sets it as the default path filter. This will allow users to point the read_{format}() API at directories containing a mixture of files, and ensure that only files of the appropriate type are read. This default filter can still be disabled via ray.data.read_csv(..., partition_filter=None).
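
A minimal sketch of the resulting behavior (the directory path is hypothetical):
```
import ray

# With the default extension-based filter, pointing read_csv() at a directory
# that mixes CSV files with other artifacts should only pick up the *.csv files.
ds = ray.data.read_csv("s3://my-bucket/mixed-dir")

# Disabling the filter reads every file in the directory, which will fail if
# any file is not valid CSV.
ds_all = ray.data.read_csv("s3://my-bucket/mixed-dir", partition_filter=None)
```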
2022-06-21 11:06:21 -07:00
sychen52
5c58d43df2
[docs][minor] Change one of the "because"s to "therefore". (#25921) 2022-06-21 10:41:40 -05:00
matthewdeng
fe4185974a
[docs] fix swapped pattern docs (#25948)
Content of the two docs was switched.

The Unnecessary Ray Get images were already correctly in `unnecessary-ray-get.rst`, which made the mix-up noticeable beyond just the URL.
2022-06-21 10:37:37 -05:00
Tomasz Wrona
7b8ea81f18
[Tune] W&B logging - handle tuples in configs (#24102)
This allows correct logging of tuple entries in configs, e.g. a PolicySpec (which is a namedtuple) from the multiagent.policies key. Without this, the whole PolicySpec is serialized as a string, which makes it impossible to filter runs by a specific key from the tuple.
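
A hedged sketch of the kind of config this affects; the env name, policy IDs, and mapping function are illustrative, while PolicySpec and the multiagent.policies key are as described above:
```
from ray.rllib.policy.policy import PolicySpec

# Each value under "multiagent"/"policies" is a PolicySpec namedtuple. With this
# fix, W&B logs its fields individually instead of stringifying the whole tuple.
config = {
    "env": "my_multi_agent_env",  # hypothetical environment name
    "multiagent": {
        "policies": {
            "policy_0": PolicySpec(config={"gamma": 0.95}),
            "policy_1": PolicySpec(config={"gamma": 0.99}),
        },
        # Assumes integer agent IDs; purely for illustration.
        "policy_mapping_fn": lambda agent_id, **kwargs: f"policy_{agent_id % 2}",
    },
}
```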
2022-06-21 16:15:55 +01:00
Artur Niederfahrenhorst
dcbc225728
[RLlib] Fix DDPG test ignoring framework_iterator-modified config. (#25913) 2022-06-21 16:17:42 +02:00
Avnish Narayan
d859b84058
[RLlib] Add compute log likelihoods test for CRR. (#25905) 2022-06-21 16:06:10 +02:00
Rohan Potdar
28df3f34f5
[RLlib]: Off-Policy Evaluation fixes. (#25899) 2022-06-21 13:24:24 +02:00
Artur Niederfahrenhorst
e10876604d
[RLlib] Include SampleBatch.T column in all collected batches. (#25926) 2022-06-21 13:20:22 +02:00
Guyang Song
d1d5fe61c2
[Dashboard][Frontend] Worker table enhancement (#25934) 2022-06-21 14:09:48 +08:00
SangBin Cho
411b1d8d2d
[State Observability] Return list instead of dict (#25888)
I’d like to propose a small change to the API. Currently we return a dict of ID -> value mappings from the list API, but I am thinking of changing this to a list, because sorting becomes ineffective if we return a dictionary. So it’s ideal to use a list to keep the order (which is important for deterministic output).

Also, for some APIs, each entry doesn’t have a unique ID. For example, list objects can contain duplicated object IDs across entries, which doesn’t work with a dict return type (e.g., there can be more than one entry for the same object ID if the object is locally referenced, borrowed by a task, or pinned in memory).
Also, users can easily build a dict index on their own if necessary.
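
As a rough illustration of consuming the list-shaped response (the entries below are made up; the exact fields depend on the specific list API):
```
# Order is preserved by the list, and an ID-keyed index can be rebuilt by the
# user when needed. Duplicate IDs (e.g. one object both pinned and borrowed)
# simply become multiple entries.
entries = [
    {"object_id": "abc", "reference_type": "LOCAL_REFERENCE"},
    {"object_id": "abc", "reference_type": "PINNED_IN_MEMORY"},
    {"object_id": "def", "reference_type": "USED_BY_PENDING_TASK"},
]

index = {}
for entry in entries:
    index.setdefault(entry["object_id"], []).append(entry)
```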
2022-06-20 22:49:29 -07:00
Richard Liaw
fa1c6510f7
[hotfix] Revert "Exclude Bazel build files from Ray wheels (#25679)" (#25950)
Nightly wheels are stuck at 736c7b13c4.
2022-06-20 20:59:48 -07:00
Philipp Moritz
c604bc23c7
[Docs] Fix documentation building instructions (#25942)
It is often a bit challenging to get the full documentation to build (there are external packages that can make this difficult). This changes the instructions to treat warnings as warnings rather than as errors, which should improve the workflow.

`make develop` is the same as `make html` except it doesn't treat warnings as errors.
2022-06-20 18:04:25 -07:00
Myeongju Kim
a1a78077ca
Fix a broken link in Ray Dataset doc (#25927)
Co-authored-by: Myeong Kim <myeongki@amazon.com>
2022-06-20 13:17:46 -07:00
Sven Mika
1499af945b
[RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. (#25924) 2022-06-20 19:53:47 +02:00
matthewdeng
0ddc9d7213
[tune/air] catch pyarrow 8.0.0 error (#25900)
pyarrow 8.0.0 raises ArrowNotImplementedError instead of pyarrow.lib.ArrowInvalid for unrecognized URI.
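
A minimal sketch of the compatibility handling this implies (not the actual code path in Ray; the local-path fallback is an assumption for illustration):
```
import pyarrow.fs
from pyarrow.lib import ArrowInvalid, ArrowNotImplementedError

def resolve_uri(uri: str):
    # pyarrow < 8.0.0 raises ArrowInvalid for an unrecognized URI, while
    # pyarrow 8.0.0 raises ArrowNotImplementedError, so catch both.
    try:
        return pyarrow.fs.FileSystem.from_uri(uri)
    except (ArrowInvalid, ArrowNotImplementedError):
        # Fall back to treating the string as a local path.
        return pyarrow.fs.LocalFileSystem(), uri
```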
2022-06-20 15:45:02 +01:00
Sven Mika
96693055bd
[RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) 2022-06-20 15:54:00 +02:00
Guyang Song
e13cc4088a
[Dashboard] Don't sort node list by default (#25884) 2022-06-20 11:35:12 +08:00
Stephanie Wang
3de4657cae
[datasets] Use generators for merge stage in push-based shuffle (#25907) 2022-06-18 16:33:54 -07:00
Chen Shen
97582a802d
[Core] update protobuf to 3.19.4 (#25648)
The error message in #25638 indicates we should use protobuf>3.19.0 to generate code so that we can work with Python protobuf >= 4.21.1. Try generating wheels to see if this works.
2022-06-18 16:06:56 -07:00
Zhe Zhang
216aede789
Remove RL Summit announcement (#25354) 2022-06-18 16:01:23 -07:00
Peyton Murray
815dba542a
[data] Make ActorPoolStrategy kill pool of actors if exception is raised (#25803) 2022-06-17 23:57:58 -07:00
Yi Cheng
9fe3c815ec
[serve] Integrate GCS fault tolerance with ray serve. (#25637)
In this PR, we integrate GCS fault tolerance with Ray Serve.

- Add a 5s timeout for KV operations.

Rollback should be added to all methods; that will come in a follow-up.

Basic testing for the KV timeout in serve and deploy is added.
2022-06-17 23:50:39 -07:00
Stephanie Wang
93aae48b80
[dataset] Pipeline task submission during reduce stage in push-based shuffle (#25795)
Reduce stage in push-based shuffle fails to complete at 100k output partitions or more. This is likely because of driver or raylet load from having too many tasks in flight at once.

We can fix this from ray core too, but for now, this PR adds pipelining for the reduce stage, to limit the total number of reduce tasks in flight at the same time. This is currently set to 2 * available parallelism in the cluster. We have to pick which reduce tasks to submit carefully since these are pinned to specific nodes. The PR does this by assigning tasks round-robin according to their corresponding merge tasks (which are spread throughout the cluster).

In addition, this PR refactors the map, merge, and reduce stages to use a common pipelined iterator pattern, since they all have a similar pattern of submitting a round of tasks at a time, then waiting for a previous round to finish before submitting more.
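
A hedged sketch of the pipelined submission pattern described above (the helper names are hypothetical, and the real scheduling logic also pins each reduce task to its merge task's node):
```
import ray

def pipelined_submit(pending_reduce_args, submit_reduce_task, max_in_flight):
    # Cap in-flight reduce tasks at roughly 2 * cluster parallelism and wait
    # for earlier tasks to finish before submitting more.
    in_flight, finished = [], []
    for args in pending_reduce_args:
        if len(in_flight) >= max_in_flight:
            done, in_flight = ray.wait(in_flight, num_returns=1)
            finished.extend(done)
        in_flight.append(submit_reduce_task(*args))
    finished.extend(in_flight)
    return finished
```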
Related issue number

Closes #25412.
2022-06-17 17:33:16 -07:00
Clark Zinzow
1701b923bc
[Datasets] [Tensor Story - 2/2] Add "numpy" batch format for batch mapping and batch consumption. (#24870)
This PR adds a NumPy "numpy" batch format for batch transformations and batch consumption that works with all block types. See #24811.
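
A minimal sketch of the batch mapping side, assuming a tensor dataset built from an ndarray:
```
import numpy as np
import ray

# With batch_format="numpy", the UDF receives NumPy data directly, regardless
# of the underlying block type.
ds = ray.data.from_numpy(np.arange(8))
doubled = ds.map_batches(lambda batch: batch * 2, batch_format="numpy")
```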
2022-06-17 16:01:02 -07:00
Archit Kulkarni
437f568445
Revert "[datasets] Use generators for merge stage in push-based shuffle (#25336)" (#25898)
This reverts commit d699351748.
2022-06-17 14:25:06 -07:00
xwjiang2010
97f42425da
[air] Consolidate Tune and Train report (#25558)
Consolidate the Tune/Train report and checkpoint functionality by working with a unified Session interface.
The goal of this PR is to establish a solid Session and Session.report path.
To reduce merge conflicts (as other folks are doing the whole package renaming) and to keep the scope of this PR contained, I have intentionally left out some of the migration. More PRs to follow. Feel free to comment on the ideal final state.


To give an idea of the final directory structure, this is for a 2-worker DP training run:
```
├── TensorflowTrainer_ce44d_00000_0_2022-06-15_14-40-42
│   ├── checkpoint_000000
│   │   ├── _current_checkpoint_id.meta.pkl
│   │   ├── _preprocessor.meta.pkl
│   │   ├── _timestamp.meta.pkl
│   │   ├── assets
│   │   ├── keras_metadata.pb
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index
│   ├── events.out.tfevents.1655329242.xw
│   ├── params.json
│   ├── params.pkl
│   ├── progress.csv
│   ├── rank_0
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── rank_1
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   └── result.json
├── basic-variant-state-2022-06-15_14-40-42.json
├── experiment_state-2022-06-15_14-40-42.json
├── trainable.pkl
└── tuner.pkl
```
Update:
1. Updated a few classes to be backward compatible while the legacy Ray Train deprecation is ongoing.
2. Marked all the places from item 1 with "# TODO(xwjiang): Legacy Ray Train trainer clean up!" so we can easily clean them up once Antoni's work has landed.
3. All CI and release tests are passing.
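
As a rough sketch of the report path this PR works toward, assuming the `ray.air.session` entry point that this line of work converged on (the training loop and metrics are placeholders):
```
from ray.air import session

def train_func(config):
    for epoch in range(config.get("epochs", 3)):
        loss = 1.0 / (epoch + 1)  # placeholder metric
        # One reporting interface, whether the function runs under Tune or Train.
        session.report({"loss": loss, "epoch": epoch})
```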

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 13:49:01 -07:00
clarng
2b270fd9cb
apply isort uniformly for a subset of directories (#25824)
Simplify the isort filters and move them into the isort config file.

With this change, isort will no longer apply to diffs except for files in whitelisted directories (isort only supports a blacklist, so we implement that instead). This is much simpler than building our own whitelist logic, since our formatter runs multiple code paths depending on whether it is formatting a single file, a PR, or the entire repo in CI.
2022-06-17 13:40:32 -07:00
Stephanie Wang
d699351748
[datasets] Use generators for merge stage in push-based shuffle (#25336)
This uses the generators introduced in #25247 to reduce memory usage during the merge stage in push-based shuffle. These tasks merge groups of map outputs, so this fits a generator pattern where we want to return merged outputs one at a time. Verified that this allows for merging more/larger objects at a time than the current list-based version.
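
A hedged sketch of the generator pattern, using the generator task support referenced in #25247; the toy "blocks" are plain Python lists, and the number of yields is assumed to match num_returns:
```
import ray

@ray.remote(num_returns=3)
def merge(*map_output_groups):
    for group in map_output_groups:
        # Merge one group of blocks and yield it immediately instead of
        # accumulating all merged outputs in a list.
        yield sorted(x for block in group for x in block)

groups = [[[3, 1], [2]], [[6, 4], [5]], [[9, 7], [8]]]  # 3 groups of toy blocks
merged_refs = merge.remote(*groups)  # three ObjectRefs, one per yielded output
```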

I also tried this for the map stage in random_shuffle, but it didn't seem to make a difference in memory usage for Arrow blocks. I think this is probably because Arrow is already doing some zero-copy optimizations when selecting rows?

Also adds a new line to Dataset stats for memory usage. Unfortunately it's hard to get an accurate reading of physical memory usage in Python and this value will probably be an overestimate in a lot of cases. I didn't see a difference before and after this PR for the merge stage, for example. Arguably this field should be opt-in. For 100MB partitions, for example:
```
        Substage 0 read->random_shuffle_map: 10/10 blocks executed
        * Remote wall time: 1.44s min, 3.32s max, 2.57s mean, 25.74s total
        * Remote cpu time: 1.42s min, 2.53s max, 2.03s mean, 20.25s total
        * Worker memory usage (MB): 462 min, 864 max, 552 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used

        Substage 1 random_shuffle_reduce: 10/10 blocks executed
        * Remote wall time: 1.47s min, 2.94s max, 2.17s mean, 21.69s total
        * Remote cpu time: 1.45s min, 1.88s max, 1.71s mean, 17.09s total
        * Worker memory usage (MB): 462 min, 1047 max, 831 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used
```


Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 12:29:24 -07:00
Stephanie Wang
293c122302
[dataset] Use polars for sorting (#25454) 2022-06-17 12:26:46 -07:00
Clark Zinzow
c2ab73fc40
[Datasets] Add ray_remote_args to read_text. (#23764) 2022-06-17 12:24:11 -07:00
Archit Kulkarni
85be093a84
[runtime env] Make all plugins return a List of URIs (#25825)
Followup from #24622.  This is another step towards pluggability for runtime_env.  Previously some plugin classes had `get_uri` which returned a single URI, while others had `get_uris` which returned a list.  This PR makes all plugins use `get_uris`, which simplifies the code overall.

Most of the lines in the diff just come from the new `format.sh` which sorts the imports.
2022-06-17 14:13:44 -05:00
Sven Mika
d90c6cfbd6
[RLlib] SimpleQ PolicyV2 (sub-classing). (#25871) 2022-06-17 20:12:16 +02:00
Simon Mo
438b6c78c8
[Release Tests] Add memory monitoring for Serve release test (#25868) 2022-06-17 11:11:56 -07:00
Stephanie Wang
09857907b7
[data] Fix bug in computing merge partitions in push-based shuffle (#25865)
Fixes a bug in push-based shuffle when computing the merge task <> reduce task mapping for the case where the number of reduce tasks does not divide evenly by the number of merge tasks. Previously, if there were N reduce tasks per merge task, we would do:
[N + 1, N + 1, ..., N + 1, all leftover tasks]
which could lead to a negative number of reduce tasks in the last merge partition.

This PR changes it to:
[N + 1, N + 1, ..., N + 1, N, N, N, ...]
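
A small sketch of the corrected split (hypothetical helper, not the PR's code):
```
def reduce_tasks_per_merge(num_reduce_tasks, num_merge_tasks):
    # Give the first `remainder` merge partitions one extra reduce task,
    # i.e. [N + 1, ..., N + 1, N, ..., N].
    n, remainder = divmod(num_reduce_tasks, num_merge_tasks)
    return [n + 1 if i < remainder else n for i in range(num_merge_tasks)]

assert reduce_tasks_per_merge(10, 4) == [3, 3, 2, 2]
```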
Related issue number

Closes #25863.
2022-06-17 10:19:00 -07:00
Alex Wu
187c21ce20
[gcs] Preserve job driver info for dashboard (#25880)
This PR ensures that GCS keeps the IP and PID information about a job so that it can be used to find the job's logs in the dashboard after the job terminates.

@alanwguo will handle any dashboard work in a separate PR.

Co-authored-by: Alex Wu <alex@anyscale.com>
2022-06-17 09:03:20 -07:00
Archit Kulkarni
b24c736bb8
[Doc] [runtime env] Add note that excludes paths are relative to working_dir (#25874)
Users' intuition might lead them to fill out `excludes` with absolute paths, e.g. `/Users/working_dir/subdir/`.  However, the `excludes` field uses `gitignore` syntax.  In `gitignore` syntax, paths that start with `/` are interpreted relative to the level of the directory where the `gitignore` file resides, and in our case this is the `working_dir` directory (morally speaking, since there's no actual `.gitignore` file.)  So the correct thing to put in `excludes` would be `/subdir/`.  As long as we support `gitignore` syntax, we should have a note in the docs for this.  This PR adds the note.
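
For example, to exclude `working_dir/subdir/` rather than a filesystem-root `/subdir/` (paths here are illustrative):
```
import ray

ray.init(
    runtime_env={
        "working_dir": "/Users/working_dir",
        # gitignore syntax: a leading "/" is relative to working_dir.
        "excludes": ["/subdir/", "*.log"],
    }
)
```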
2022-06-17 10:50:04 -05:00
sychen52
edf16b8e2c
[docs] Edit the output of the script to match the code (#25855) 2022-06-17 10:48:28 -05:00
Fabian Witter
fcdf710574
[RLlib] Move offline input into replay buffer using rollout ops in CQL. (#25629) 2022-06-17 17:08:55 +02:00
matthewdeng
5c6b91d375
[Release] fix Horovod release tests (#25873)
The error message suggests:

Wait timeout after 30 seconds for key(s): 0. You may want to increase the timeout via HOROVOD_GLOO_TIMEOUT_SECONDS

Bumped the timeout up to 120 seconds.

Tests run successfully: https://buildkite.com/ray-project/release-tests-pr/builds/6906
2022-06-17 14:52:54 +01:00
Artur Niederfahrenhorst
a322cc5765
[RLlib] IMPALA/APPO multi-agent mix-in-buffer fixes (plus MA learning tests). (#25848) 2022-06-17 14:10:36 +02:00
Simon Mo
1c27469b6d
[macOS] Only cleanup directory after upload (#25835)
This was missed in the previous enablement of Bazel log uploading; we should no longer clean the directory.
2022-06-17 12:46:37 +01:00
Kai Fricke
40a9fdcb0f
[tune/air] Fix checkpoint conversion for objects (#25885)
Converting tracked in-memory checkpoints was faulty and untested.
2022-06-17 10:41:52 +01:00
Artur Niederfahrenhorst
e5740946b8
[RLlib] Fixes logging of all of RLlib's Algorithm names as warning messages. (#25840) 2022-06-17 08:41:18 +02:00
Avnish Narayan
393cf4d8f7
[RLlib] Fix action_sampler_fn call in TorchPolicyV2 (obs_batch instead of input_dict arg). (#25877) 2022-06-17 08:39:39 +02:00
Siyuan (Ryans) Zhuang
fea8dd08fc
[workflow] Enhance dataset tests (#25876) 2022-06-16 22:50:31 -07:00
sychen52
ce02ac0311
[docs] Fix example actor indentation (#25882) 2022-06-16 22:06:21 -07:00
yuduber
26b2faf869
[data] add retry logic to ray.data parquet file reading (#25673) 2022-06-16 21:49:41 -07:00
Guyang Song
974bbc0f43
[C++ worker] move xlang test to separate test file (#25756) 2022-06-17 11:05:24 +08:00
Jiao
f6735f90c7
[Ray DAG] Move dag project folder out of experimental (#25532) 2022-06-16 19:15:39 -07:00
Clark Zinzow
e111b173e9
[Datasets] Workaround for unserializable Arrow JSON ReadOptions. (#25821)
`pyarrow.json.ReadOptions` is not picklable until Arrow 8.0.0, which we do not yet support. This PR adds a custom serializer for this type and ensures that the serializer is registered before each Ray task submission.
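
A hedged sketch of such a workaround using `ray.util.register_serializer`; reconstructing from the two public fields is an assumption for illustration, and the PR's actual serializer may capture more state:
```
import pyarrow.json
import ray.util

ray.util.register_serializer(
    pyarrow.json.ReadOptions,
    # Capture the picklable fields...
    serializer=lambda opts: {"use_threads": opts.use_threads,
                             "block_size": opts.block_size},
    # ...and rebuild the ReadOptions on the receiving side.
    deserializer=lambda kwargs: pyarrow.json.ReadOptions(**kwargs),
)
```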
2022-06-16 18:33:59 -07:00