Commit graph

13301 commits

Author SHA1 Message Date
Yi Cheng
a1f02f68b7
[core][gcs] Make GCS client work with timeout_ms. (#25975)
In [PR](https://github.com/ray-project/ray/pull/24764) we moved reconnection handling into GcsRPCClient: on a GCS failure, requests are queued and resent once GCS is back. This broke requests with a timeout, because a queued request could wait indefinitely and never get a response. This PR fixes that.

Every request is now stored along with the time at which it is supposed to time out. While GCS is down, we check the queued requests and immediately reply with a Timeout error to any request whose deadline has passed.
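Conceptually, the pending-request bookkeeping looks something like this (an illustrative Python sketch rather than the actual C++ GcsRpcClient code; `PendingRequest`, `check_timeouts`, and the callbacks are hypothetical names):

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class PendingRequest:
    """A queued request plus the wall-clock time at which it must time out."""
    send: Callable[[], None]           # re-sends the request once GCS is back
    reply_timeout: Callable[[], None]  # replies to the caller with a Timeout error
    deadline: float                    # absolute time at which the request expires


@dataclass
class PendingQueue:
    requests: List[PendingRequest] = field(default_factory=list)

    def enqueue(self, send, reply_timeout, timeout_ms: float) -> None:
        self.requests.append(
            PendingRequest(send, reply_timeout, time.time() + timeout_ms / 1000.0)
        )

    def check_timeouts(self) -> None:
        """While GCS is down, reply immediately to any request whose deadline passed."""
        now = time.time()
        expired = [r for r in self.requests if r.deadline <= now]
        self.requests = [r for r in self.requests if r.deadline > now]
        for r in expired:
            r.reply_timeout()

    def flush(self) -> None:
        """Once GCS reconnects, resend everything still pending."""
        for r in self.requests:
            r.send()
        self.requests.clear()
```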
2022-06-22 18:02:29 -07:00
shrekris-anyscale
4d8a82bdf6
[Serve] Use "serve" namespace during controller recovery (#25987) 2022-06-22 16:08:07 -07:00
Sumanth Ratna
67140f2d26
Remove node.py and ray_constants.py links from setup-dev.py (#25997) 2022-06-22 15:45:29 -07:00
Kai Fricke
d65d4aff9a
[tune/structure] Move CLI files into subpackage (#26011)
As part of the Tune restructuring, move the CLI scripts and commands to a cli subpackage.
2022-06-22 23:05:26 +01:00
Kai Fricke
ecf0b93146
[tune/structure] Move AutoML board (#26012)
As part of the Tune restructuring, move the AutoML Board into the automl package.
2022-06-22 21:52:38 +01:00
Sihan Wang
c0cf9b8098
[Serve][Doc] Autoscaling (#25646)
- New docs section on autoscaling (introduces Serve autoscaling and its config parameters)
- Remove the version requirement note from the doc

Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
2022-06-22 15:32:18 -05:00
sychen52
84401bb616
add missing brackets (#25992) 2022-06-22 15:30:55 -05:00
Chen Shen
afb092a03a
[Core] Out of Disk prevention (#25370)
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script with a large amount of object spilling, if the disk runs out of space then Kubernetes silently terminates the node. Autoscaling kicks in and replaces the dead node, and there is no indication that the failure was due to disk space. Instead, we should fail tasks with a clear error message when the disk is full.

We monitor disk usage: when a node's disk usage grows beyond a predefined capacity threshold (e.g., 90%), we fail new task/actor/object put operations that would allocate new objects.
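The threshold check can be sketched roughly as follows (a simplified Python illustration, not the raylet's actual implementation; the path and the 0.9 threshold are stand-ins for Ray's configured spill directory and the "90%" example above):

```python
import shutil


def disk_is_full(path: str = "/tmp", capacity_threshold: float = 0.9) -> bool:
    """Return True if the disk backing `path` is over the capacity threshold.

    In practice the raylet would check the directory used for object spilling.
    """
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= capacity_threshold


def maybe_reject_allocation(nbytes: int) -> None:
    """Illustrative: fail new allocations with a clear error instead of dying silently."""
    if disk_is_full():
        raise OSError(
            "Local disk usage is over the configured capacity threshold; rejecting "
            f"a new task/actor/object put that would allocate {nbytes} bytes."
        )
```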
2022-06-22 12:25:32 -07:00
Amog Kamsetty
d6e8b90236
[AIR] Update TorchPredictor to new Predictor API (#25536) 2022-06-22 09:49:07 -07:00
SangBin Cho
6552e096e6
[State Observability] Summary APIs (#25672)
Task/actor/object summary

Tasks: grouped by function name. In the future, we will also allow grouping by task_group.
Actors: grouped by actor class name. In the future, we will also allow grouping by actor_group.
Objects: grouped by callsite. In the future, we will allow grouping by reference type or task state.
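As a rough illustration of this grouping (plain Python over hypothetical task records, not the actual State API implementation):

```python
from collections import Counter

# Hypothetical task records as returned by a list API.
tasks = [
    {"func_name": "train_step", "state": "RUNNING"},
    {"func_name": "train_step", "state": "FINISHED"},
    {"func_name": "preprocess", "state": "FINISHED"},
]

# Task summary: group by function name and count entries per group.
task_summary = Counter(t["func_name"] for t in tasks)
print(task_summary)  # Counter({'train_step': 2, 'preprocess': 1})
```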
2022-06-22 06:21:50 -07:00
Sven Mika
3d6df50258
[RLlib] Fix get_num_samples_loaded_into_buffer in TorchPolicyV2. (#25956) 2022-06-22 13:11:41 +02:00
Sven Mika
464ac82207
[RLlib] Small docs fixes for evaluation + training. (#25957) 2022-06-22 13:11:18 +02:00
Avnish Narayan
871aef80dc
[RLlib] Aggregate Impala learner info. (#25856) 2022-06-22 09:43:10 +02:00
Guyang Song
a0fbd54753
[C++ worker] use dynamic library in C++ default_worker (#25720) 2022-06-22 15:11:15 +08:00
xwjiang2010
b4026f9971
[air] RunConfig.failure --> failure_config (#25967) 2022-06-21 16:51:26 -07:00
Kai Fricke
fb3dd0ea40
[release/1.13.0] Add release logs (#24509)
Preliminary release logs for review and approval.
2022-06-21 23:51:25 +01:00
Rina Ueno
a29eeaa1f6
[Workflows] Explain workflow_id and task_name in the docs (#25800) 2022-06-21 15:24:16 -07:00
Eric Liang
43aa2299e6
[api] Annotate as public / move ray-core APIs to _private and add enforcement rule (#25695)
Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes.
2022-06-21 15:13:29 -07:00
Archit Kulkarni
565e366529
[runtime env] Use async internal kv in package download and plugins (#25788)
Uses the async KV API for downloading in the runtime env agent. This avoids the complexity of running the runtime env creation functions in a separate thread.

Some functions are still sync, including the working_dir/py_modules upload, installing wheels, and possibly others.
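The shape of the change is roughly as follows (an asyncio sketch; `async_kv_get` and the package URI are hypothetical stand-ins for Ray's async internal-KV client):

```python
import asyncio


async def async_kv_get(key: bytes) -> bytes:
    """Hypothetical async internal-KV read, standing in for the real async client."""
    await asyncio.sleep(0)  # pretend this is a network round trip to GCS
    return b"package-bytes"


async def download_package(pkg_uri: str) -> bytes:
    # Before: a blocking KV get had to run in a separate thread so it wouldn't
    # stall the agent's event loop. With an async KV API we can simply await it.
    return await async_kv_get(pkg_uri.encode())


if __name__ == "__main__":
    asyncio.run(download_package("gcs://_ray_pkg_example.zip"))
```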
2022-06-21 15:02:36 -07:00
Antoni Baum
b7d4ae541d
[Train] Move load_checkpoint to utils (#25940)
Moves load_checkpoint methods from trainer files to util files for consistency and better modularity.
2022-06-21 13:03:56 -07:00
shrekris-anyscale
3d6a5450c9
[Serve] Stop Ray in test_serve_head.py fixture (#25893) 2022-06-21 11:28:07 -07:00
shrekris-anyscale
ad12f0cd02
[Serve] Deprecate outdated REST API settings (#25932) 2022-06-21 11:06:45 -07:00
Clark Zinzow
50d47486f2
[Datasets] Add file-extension-based path filter for file-based datasources. (#24822)
This PR adds a format-based file extension path filter for file-based datasources, and sets it as the default path filter. This will allow users to point the read_{format}() API at directories containing a mixture of files, and ensure that only files of the appropriate type are read. This default filter can still be disabled via ray.data.read_csv(..., partition_filter=None).
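For example (a usage sketch; the bucket path is made up):

```python
import ray

# Default behavior: only files ending in .csv under the directory are read.
ds = ray.data.read_csv("s3://my-bucket/mixed-files/")

# Opting out of the default extension filter reads every file in the directory.
ds_all = ray.data.read_csv("s3://my-bucket/mixed-files/", partition_filter=None)
```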
2022-06-21 11:06:21 -07:00
sychen52
5c58d43df2
[docs][minor] Change one of the "because"s to "therefore". (#25921) 2022-06-21 10:41:40 -05:00
matthewdeng
fe4185974a
[docs] fix swapped pattern docs (#25948)
The content of the two docs was swapped.

The Unnecessary Ray Get images were correctly located in `unnecessary-ray-get.rst`, which made the swap noticeable beyond just the URL.
2022-06-21 10:37:37 -05:00
Tomasz Wrona
7b8ea81f18
[Tune] W&B logging - handle tuples in configs (#24102)
This allows correct logging of tuple entries in configs, e.g. PolicySpec (which is a namedtuple) from the multiagent.policies key. Without this, the whole PolicySpec is serialized as a string, which makes it impossible to filter runs by a specific key from the tuple.
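The underlying idea is to expand tuple (and namedtuple) values into per-field entries before logging, roughly like this (an illustrative sketch, not the exact W&B callback code; `flatten_tuples` is a hypothetical helper):

```python
from typing import Any, Dict


def flatten_tuples(config: Dict[str, Any]) -> Dict[str, Any]:
    """Expand tuple/namedtuple values into individual keys so W&B can filter on them."""
    flat: Dict[str, Any] = {}
    for key, value in config.items():
        if isinstance(value, tuple):
            # namedtuples expose _fields; plain tuples fall back to positional indices.
            fields = getattr(value, "_fields", range(len(value)))
            for field_name, item in zip(fields, value):
                flat[f"{key}/{field_name}"] = item
        else:
            flat[key] = value
    return flat


print(flatten_tuples({"lr": 0.01, "policy": ("PPOTorchPolicy", None, None, {})}))
# {'lr': 0.01, 'policy/0': 'PPOTorchPolicy', 'policy/1': None, 'policy/2': None, 'policy/3': {}}
```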
2022-06-21 16:15:55 +01:00
Artur Niederfahrenhorst
dcbc225728
[RLlib] Fix DDPG test ignoring framework_iterator-modified config. (#25913) 2022-06-21 16:17:42 +02:00
Avnish Narayan
d859b84058
[RLlib] Add compute log likelihoods test for CRR. (#25905) 2022-06-21 16:06:10 +02:00
Rohan Potdar
28df3f34f5
[RLlib]: Off-Policy Evaluation fixes. (#25899) 2022-06-21 13:24:24 +02:00
Artur Niederfahrenhorst
e10876604d
[RLlib] Include SampleBatch.T column in all collected batches. (#25926) 2022-06-21 13:20:22 +02:00
Guyang Song
d1d5fe61c2
[Dashboard][Frontend] Worker table enhancement (#25934) 2022-06-21 14:09:48 +08:00
SangBin Cho
411b1d8d2d
[State Observability] Return list instead of dict (#25888)
This proposes a small change to the API. Currently, the list APIs return a dict of ID -> value mappings. This PR changes the return type to a list, because sorting becomes ineffective if we return a dictionary; a list keeps the order, which is important for determinism.

Also, for some APIs each entry does not have a unique ID. For example, list objects will return duplicated object IDs across entries, which does not work with a dict return type (e.g., there can be more than one entry for an object ID if the object is locally referenced, borrowed by a task, and pinned in memory).

Users can easily build a dict index on their own if needed; see the sketch below.
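For example, rebuilding an index on the client side might look like this (a sketch over hypothetical entries):

```python
from collections import defaultdict

# Hypothetical entries returned by a list API (note the duplicated object ID).
objects = [
    {"object_id": "abc", "reference_type": "LOCAL_REFERENCE"},
    {"object_id": "abc", "reference_type": "PINNED_IN_MEMORY"},
    {"object_id": "def", "reference_type": "USED_BY_PENDING_TASK"},
]

# Simple unique-key index (last entry per ID wins)...
by_id = {o["object_id"]: o for o in objects}

# ...or a multimap that keeps every entry for an ID.
index = defaultdict(list)
for o in objects:
    index[o["object_id"]].append(o)
```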
2022-06-20 22:49:29 -07:00
Richard Liaw
fa1c6510f7
[hotfix] Revert "Exclude Bazel build files from Ray wheels (#25679)" (#25950)
Nightly wheels are stuck at 736c7b13c4.
2022-06-20 20:59:48 -07:00
Philipp Moritz
c604bc23c7
[Docs] Fix documentation building instructions (#25942)
It is often challenging to get the full documentation to build (there are external packages that can make this difficult). This changes the instructions to treat warnings as warnings rather than errors, which should improve the workflow.

`make develop` is the same as `make html`, except it doesn't treat warnings as errors.
2022-06-20 18:04:25 -07:00
Myeongju Kim
a1a78077ca
Fix a broken link in Ray Dataset doc (#25927)
Co-authored-by: Myeong Kim <myeongki@amazon.com>
2022-06-20 13:17:46 -07:00
Sven Mika
1499af945b
[RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. (#25924) 2022-06-20 19:53:47 +02:00
matthewdeng
0ddc9d7213
[tune/air] catch pyarrow 8.0.0 error (#25900)
pyarrow 8.0.0 raises ArrowNotImplementedError instead of pyarrow.lib.ArrowInvalid for an unrecognized URI.
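The fix amounts to catching both exception types, roughly like this (a sketch, not the exact Tune/AIR code path):

```python
import pyarrow
import pyarrow.fs


def resolve_uri(uri: str):
    """Resolve a URI to a pyarrow filesystem, tolerating both pyarrow <8 and 8.0.0 errors."""
    try:
        return pyarrow.fs.FileSystem.from_uri(uri)
    except (pyarrow.lib.ArrowInvalid, pyarrow.lib.ArrowNotImplementedError):
        # pyarrow < 8.0.0 raises ArrowInvalid for an unrecognized URI;
        # pyarrow 8.0.0 raises ArrowNotImplementedError instead.
        return None
```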
2022-06-20 15:45:02 +01:00
Sven Mika
96693055bd
[RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) 2022-06-20 15:54:00 +02:00
Guyang Song
e13cc4088a
[Dashboard] Don't sort node list by default (#25884) 2022-06-20 11:35:12 +08:00
Stephanie Wang
3de4657cae
[datasets] Use generators for merge stage in push-based shuffle (#25907) 2022-06-18 16:33:54 -07:00
Chen Shen
97582a802d
[Core] update protobuf to 3.19.4 (#25648)
The error message in #25638 indicates we should use protobuf > 3.19.0 to generate code so that we can work with Python protobuf >= 4.21.1. Try generating wheels to see if this works.
2022-06-18 16:06:56 -07:00
Zhe Zhang
216aede789
Remove RL Summit announcement (#25354) 2022-06-18 16:01:23 -07:00
Peyton Murray
815dba542a
[data] Make ActorPoolStrategy kill pool of actors if exception is raised (#25803) 2022-06-17 23:57:58 -07:00
Yi Cheng
9fe3c815ec
[serve] Integrate GCS fault tolerance with ray serve. (#25637)
In this PR, we integrate GCS fault tolerance with Ray Serve:

- Add a 5s timeout for KV operations.

Rollback should be added to all methods; that will come in a follow-up.

Basic testing for KV timeouts in serve and deploy is added.
2022-06-17 23:50:39 -07:00
Stephanie Wang
93aae48b80
[dataset] Pipeline task submission during reduce stage in push-based shuffle (#25795)
The reduce stage in push-based shuffle fails to complete at 100k or more output partitions. This is likely because of driver or raylet load from having too many tasks in flight at once.

We can fix this in Ray core too, but for now this PR adds pipelining to the reduce stage to limit the total number of reduce tasks in flight at the same time. This is currently set to 2 * the available parallelism in the cluster. We have to pick which reduce tasks to submit carefully, since they are pinned to specific nodes. The PR does this by assigning tasks round-robin according to the corresponding merge tasks (which are spread throughout the cluster).

In addition, this PR refactors the map, merge, and reduce stages to use a common pipelined iterator pattern, since they all follow a similar pattern of submitting a round of tasks at a time, then waiting for a previous round to finish before submitting more.
Related issue number

Closes #25412.
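The in-flight bounding pattern is roughly the following (a generic Ray sketch of capping concurrent tasks, not the actual shuffle scheduler; `max_in_flight` stands in for the 2 * parallelism limit above):

```python
import ray

ray.init(ignore_reinit_error=True)


@ray.remote
def reduce_task(partition_id: int) -> int:
    return partition_id  # placeholder for the real reduce work


def submit_pipelined(num_partitions: int, max_in_flight: int):
    """Submit reduce tasks in rounds, keeping at most max_in_flight tasks running."""
    in_flight, results = [], []
    for partition_id in range(num_partitions):
        if len(in_flight) >= max_in_flight:
            # Wait for a task from a previous round before submitting more.
            done, in_flight = ray.wait(in_flight, num_returns=1)
            results.extend(ray.get(done))
        in_flight.append(reduce_task.remote(partition_id))
    results.extend(ray.get(in_flight))
    return results


print(len(submit_pipelined(num_partitions=100, max_in_flight=8)))
```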
2022-06-17 17:33:16 -07:00
Clark Zinzow
1701b923bc
[Datasets] [Tensor Story - 2/2] Add "numpy" batch format for batch mapping and batch consumption. (#24870)
This PR adds a NumPy "numpy" batch format for batch transformations and batch consumption that works with all block types. See #24811.
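Usage looks roughly like this (a sketch based on the description above; the exact batch structure depends on the block type and Ray version):

```python
import ray


def double(batch):
    # With batch_format="numpy", the batch arrives as a NumPy ndarray
    # (or a dict of column name -> ndarray, depending on the block type).
    if isinstance(batch, dict):
        return {k: v * 2 for k, v in batch.items()}
    return batch * 2


ds = ray.data.range(8)
print(ds.map_batches(double, batch_format="numpy").take(4))
```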
2022-06-17 16:01:02 -07:00
Archit Kulkarni
437f568445
Revert "[datasets] Use generators for merge stage in push-based shuffle (#25336)" (#25898)
This reverts commit d699351748.
2022-06-17 14:25:06 -07:00
xwjiang2010
97f42425da
[air] Consolidate Tune and Train report (#25558)
Consolidate tune/train report/checkpoint functionality by working with a unified Session interface.
The goal of this PR is to establish a solid Session and Session.report path. 
In favor of having less merging conflict (as other folks are doing the whole package renaming) and control the scope of this PR, I have intentionally left out some migration. More PRs to follow. Feel free to comment on the ideal final state. 
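The intended user-facing shape is roughly the following (a sketch assuming the eventual `ray.air.session` API; exact import paths may shift while the migration is in flight):

```python
from ray.air import session
from ray.air.checkpoint import Checkpoint


def train_loop_per_worker(config):
    for epoch in range(config["epochs"]):
        loss = 1.0 / (epoch + 1)  # placeholder for real training work
        # A single entry point for both Tune and Train: report metrics
        # (and optionally a checkpoint) through the unified Session.
        session.report(
            {"epoch": epoch, "loss": loss},
            checkpoint=Checkpoint.from_dict({"epoch": epoch}),
        )
```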


To give an idea of the final directory structure, this is for a 2-worker DP training run:
```
├── TensorflowTrainer_ce44d_00000_0_2022-06-15_14-40-42
│   ├── checkpoint_000000
│   │   ├── _current_checkpoint_id.meta.pkl
│   │   ├── _preprocessor.meta.pkl
│   │   ├── _timestamp.meta.pkl
│   │   ├── assets
│   │   ├── keras_metadata.pb
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index
│   ├── events.out.tfevents.1655329242.xw
│   ├── params.json
│   ├── params.pkl
│   ├── progress.csv
│   ├── rank_0
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── rank_1
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   └── result.json
├── basic-variant-state-2022-06-15_14-40-42.json
├── experiment_state-2022-06-15_14-40-42.json
├── trainable.pkl
└── tuner.pkl
```
Update:
1. Updated a few classes to be backward compatible while the legacy Ray Train deprecation is ongoing.
2. Marked all places from 1 with "# TODO(xwjiang): Legacy Ray Train trainer clean up!" so we can easily clean them up once Antoni's work lands.
3. All CI and release tests are passing.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 13:49:01 -07:00
clarng
2b270fd9cb
apply isort uniformly for a subset of directories (#25824)
Simplify the isort filters and move them into the isort config file.

With this change, isort will no longer apply to diffs except for files in whitelisted directories (isort only supports a blacklist, so we implement the whitelist through it). This is much simpler than building our own whitelist logic, since our formatter runs multiple code paths depending on whether it is formatting a single file, a PR, or the entire repo in CI.
2022-06-17 13:40:32 -07:00
Stephanie Wang
d699351748
[datasets] Use generators for merge stage in push-based shuffle (#25336)
This uses the generators introduced in #25247 to reduce memory usage during the merge stage in push-based shuffle. Merge tasks combine groups of map outputs, so they fit a generator pattern where we want to return merged outputs one at a time. Verified that this allows merging more/larger objects at a time than the current list-based version. (A minimal sketch of the pattern follows the stats below.)

I also tried this for the map stage in random_shuffle, but it didn't seem to make a difference in memory usage for Arrow blocks. I think this is probably because Arrow already does some zero-copy optimizations when selecting rows.

This also adds a new line to Dataset stats for memory usage. Unfortunately it's hard to get an accurate reading of physical memory usage in Python, and this value will probably be an overestimate in many cases; I didn't see a difference before and after this PR for the merge stage, for example. Arguably this field should be opt-in. For 100MB partitions, for example:
```
        Substage 0 read->random_shuffle_map: 10/10 blocks executed
        * Remote wall time: 1.44s min, 3.32s max, 2.57s mean, 25.74s total
        * Remote cpu time: 1.42s min, 2.53s max, 2.03s mean, 20.25s total
        * Worker memory usage (MB): 462 min, 864 max, 552 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used

        Substage 1 random_shuffle_reduce: 10/10 blocks executed
        * Remote wall time: 1.47s min, 2.94s max, 2.17s mean, 21.69s total
        * Remote cpu time: 1.45s min, 1.88s max, 1.71s mean, 17.09s total
        * Worker memory usage (MB): 462 min, 1047 max, 831 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used
```
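As noted above, here is a minimal sketch of the generator pattern for the merge step (plain Python, not Ray's internal shuffle code): instead of materializing all merged blocks as a list, the merge yields one output block at a time so peak memory stays bounded.

```python
import heapq
from typing import Iterator, List


def merge_map_outputs(
    map_outputs: List[List[int]], block_size: int = 4
) -> Iterator[List[int]]:
    """Merge sorted map outputs and yield merged blocks one at a time.

    Yielding (rather than returning a list of all blocks) keeps only one
    merged block in memory at a time, which is the point of using generators
    for the merge stage.
    """
    merged = heapq.merge(*map_outputs)
    block: List[int] = []
    for row in merged:
        block.append(row)
        if len(block) == block_size:
            yield block
            block = []
    if block:
        yield block


for out_block in merge_map_outputs([[1, 4, 7], [2, 5, 8], [3, 6, 9]]):
    print(out_block)
```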


Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 12:29:24 -07:00