Commit graph

7055 commits

Author SHA1 Message Date
Kai Fricke
7e93370c91
[tune/structure] Introduce stopper package (#26040)
Split different stoppers and put them in a `stopper` package.
2022-06-23 18:05:43 +01:00
Kai Fricke
8a2f6bda62
[tune/structure] Introduce experiment package (#26033)
Experiment, Trial, and config parsing moves into an `experiment` package.

Notably, the new public facing APIs will be

```
from ray.tune.experiment import Experiment
from ray.tune.experiment import Trial
```
2022-06-23 14:52:46 +01:00
Kai Fricke
edcf7489ef
[hotfix] Fix dashboard cli test (#26034)
#22698 broke master because the dashboard CLI test was not adjusted.
2022-06-23 11:39:09 +01:00
Kai Fricke
0959f44b6f
[tune/structure] Introduce execution package (#26015)
Execution-specific packages are moved to tune.execution.

Co-authored-by: Xiaowei Jiang <xwjiang2010@gmail.com>
2022-06-23 11:13:19 +01:00
Clark Zinzow
caa3868570
[Datasets] Add UDF passthrough args to map_batches(). (#25613)
Users often have UDFs that require arguments only known at Dataset creation time, and sometimes these arguments may be large objects better suited for shared zero-copy access via the object store. One example is batch inference on a large on-CPU model using the actor pool strategy: without putting the large model into the object store and sharing across actors on the same node, each actor worker will need its own copy of the model.

Unfortunately, we can't rely on closing over these object refs, since the object ref will then be serialized in the exported function/class definition, causing the object to be indefinitely pinned and therefore leaked. It's much cleaner to instead link these object refs in as actor creation and task arguments. This PR adds support for threading such object refs through as actor creation and task arguments and supplying the concrete values to the UDFs.
2022-06-22 23:30:09 -07:00
Eric Cousineau
647f6790c0
scripts: Update dashboard output to print URL (#22698)
Useful for Ctrl+Click in certain terminal emulators
2022-06-23 11:33:22 +08:00
Guyang Song
a8ef296649
[runtime env] remove unused runtime env uris from protobuf (#26001) 2022-06-23 10:45:54 +08:00
Yi Cheng
a1f02f68b7
[core][gcs] Make GCS client working with timeout_ms. (#25975)
In [PR](https://github.com/ray-project/ray/pull/24764) we moved the reconnection logic into GcsRPCClient. In case of a GCS failure, we queue the requests and resend them once GCS is back.
This breaks requests with timeouts, because the request will now be queued and never get a response. This PR fixes it.

Every request is stored along with the time at which it is supposed to time out. When GCS is down, we check the queued requests, and if one has timed out, we reply immediately with a Timeout error message.
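The deadline tracking described here can be sketched in a few lines of plain Python. The class names `QueuedRequest` and `PendingQueue` are illustrative only, not the actual C++ classes in the GCS client:

```python
import time

class QueuedRequest:
    """A request parked while GCS is unreachable, tagged with its deadline."""

    def __init__(self, payload, timeout_ms, now=None):
        now = time.monotonic() if now is None else now
        self.payload = payload
        # Store the absolute deadline rather than the relative timeout,
        # so the expiry check later is a single clock comparison.
        self.deadline = now + timeout_ms / 1000.0

class PendingQueue:
    """Queue requests during an outage and expire the ones that time out."""

    def __init__(self):
        self.pending = []

    def enqueue(self, request):
        self.pending.append(request)

    def expire_timed_out(self, reply, now=None):
        """Invoke `reply` for every expired request; keep the rest queued."""
        now = time.monotonic() if now is None else now
        still_pending = []
        for req in self.pending:
            if req.deadline <= now:
                reply(req)  # e.g. send back a Timeout error message
            else:
                still_pending.append(req)
        self.pending = still_pending
        return len(still_pending)
```

Storing the absolute deadline instead of the relative timeout is what makes periodic expiry checks cheap while the connection is down.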
2022-06-22 18:02:29 -07:00
shrekris-anyscale
4d8a82bdf6
[Serve] Use "serve" namespace during controller recovery (#25987) 2022-06-22 16:08:07 -07:00
Sumanth Ratna
67140f2d26
Remove node.py and ray_constants.py links from setup-dev.py (#25997) 2022-06-22 15:45:29 -07:00
Kai Fricke
d65d4aff9a
[tune/structure] Move CLI files into subpackage (#26011)
As part of the Tune restructuring, move the CLI scripts and commands to a cli subpackage.
2022-06-22 23:05:26 +01:00
Kai Fricke
ecf0b93146
[tune/structure] Move AutoML board (#26012)
As part of the Tune restructuring, move the AutoML Board into the automl package.
2022-06-22 21:52:38 +01:00
Chen Shen
afb092a03a
[Core] Out of Disk prevention (#25370)
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script that has a large amount of object spilling, if the disk runs out of space then Kubernetes will silently terminate the node. Autoscaling will kick in and replace the dead node. There is no indication that there was a failure due to disk space.
Instead, we should fail tasks with a good error message when the disk is full.

We monitor the disk usage; when node disk usage grows over the predefined capacity (e.g. 90%), we fail new task/actor/object put operations that would allocate new objects.
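As a rough sketch of that capacity check in Python (the real monitoring lives in Ray's core, and the helper names and 90% default here are purely illustrative):

```python
import shutil

def disk_usage_fraction(path="/"):
    """Return the fraction of disk capacity currently in use at `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def check_can_allocate(path="/", capacity_threshold=0.90):
    """Refuse new object allocations once usage exceeds the threshold."""
    frac = disk_usage_fraction(path)
    if frac >= capacity_threshold:
        # Fail loudly with an actionable message instead of letting the
        # node die silently when the disk fills up.
        raise OSError(
            f"Local disk is {frac:.0%} full (threshold "
            f"{capacity_threshold:.0%}); refusing to allocate new objects"
        )
    return True
```

The point of the change is exactly this trade: a clear, early error at a configurable threshold instead of a silent node termination at 100%.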
2022-06-22 12:25:32 -07:00
Amog Kamsetty
d6e8b90236
[AIR] Update TorchPredictor to new Predictor API (#25536) 2022-06-22 09:49:07 -07:00
SangBin Cho
6552e096e6
[State Observability] Summary APIs (#25672)
Task/actor/object summary

Tasks: Group by the func name. In the future, we will also allow grouping by task_group.
Actors: Group by actor class name. In the future, we will also allow grouping by actor_group.
Object: Group by callsite. In the future, we will allow grouping by reference type or task state.
2022-06-22 06:21:50 -07:00
xwjiang2010
b4026f9971
[air] RunConfig.failure --> failure_config (#25967) 2022-06-21 16:51:26 -07:00
Eric Liang
43aa2299e6
[api] Annotate as public / move ray-core APIs to _private and add enforcement rule (#25695)
Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes.
2022-06-21 15:13:29 -07:00
Archit Kulkarni
565e366529
[runtime env] Use async internal kv in package download and plugins (#25788)
Uses the async KV API for downloading in the runtime env agent. This avoids the complexity of running the runtime env creation functions in a separate thread.

Some functions are still sync, including the working_dir/py_modules upload, installing wheels, and possibly others.
2022-06-21 15:02:36 -07:00
Antoni Baum
b7d4ae541d
[Train] Move load_checkpoint to utils (#25940)
Moves load_checkpoint methods from trainer files to util files for consistency and better modularity.
2022-06-21 13:03:56 -07:00
shrekris-anyscale
ad12f0cd02
[Serve] Deprecate outdated REST API settings (#25932) 2022-06-21 11:06:45 -07:00
Clark Zinzow
50d47486f2
[Datasets] Add file-extension-based path filter for file-based datasources. (#24822)
This PR adds a format-based file extension path filter for file-based datasources, and sets it as the default path filter. This will allow users to point the read_{format}() API at directories containing a mixture of files, and ensure that only files of the appropriate type are read. This default filter can still be disabled via ray.data.read_csv(..., partition_filter=None).
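The default filter behaves roughly like this sketch (the factory name `make_extension_filter` is illustrative; Ray's actual implementation lives in the file-based datasource code):

```python
from pathlib import PurePath

def make_extension_filter(file_format):
    """Build a path filter keeping only files of the given format.

    `file_format` is an extension such as "csv", "json", or "parquet".
    """
    suffix = "." + file_format.lower()

    def path_filter(paths):
        # Compare extensions case-insensitively so "data.CSV" still matches.
        return [p for p in paths if PurePath(p).suffix.lower() == suffix]

    return path_filter
```

Applied to a mixed directory listing, `make_extension_filter("csv")` keeps only the CSV files, which is what lets `read_csv()` safely point at a directory containing other file types.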
2022-06-21 11:06:21 -07:00
Tomasz Wrona
7b8ea81f18
[Tune] W&B logging - handle tuples in configs (#24102)
This allows correct logging of tuple entries in configs, e.g. PolicySpec (which is a namedtuple) from the multiagent.policies key. Without this, the whole PolicySpec is serialized as a string, which doesn't allow filtering runs by a specific key from this tuple.
2022-06-21 16:15:55 +01:00
SangBin Cho
411b1d8d2d
[State Observability] Return list instead of dict (#25888)
I'd like to propose a small change to the API. Currently we return a dict of ID -> value mappings from the list API. I am proposing to change this to a list, because sorting becomes ineffective if we return a dictionary. A list keeps the order, which is important for determinism.

Also, for some APIs, each entry doesn't have a unique ID. For example, list objects will return duplicated object IDs across entries, which doesn't work with a dict return type (e.g., there can be more than one entry for an object ID if the object is locally referenced, borrowed by a task, or pinned in memory).
Also, users can easily build a dict index on their own if necessary.
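Building such an index from the list return value is straightforward; `list_to_index` below is an illustrative sketch, not part of the API. Values are lists, so duplicate keys such as repeated object IDs are preserved rather than silently overwritten:

```python
def list_to_index(entries, key):
    """Index a list of dict entries by `key`, preserving duplicates."""
    index = {}
    for entry in entries:
        # setdefault keeps every entry for a repeated key, unlike a
        # plain dict assignment which would keep only the last one.
        index.setdefault(entry[key], []).append(entry)
    return index
```

This recovers dict-style lookup while the API itself keeps the deterministic, sortable list form.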
2022-06-20 22:49:29 -07:00
Richard Liaw
fa1c6510f7
[hotfix] Revert "Exclude Bazel build files from Ray wheels (#25679)" (#25950)
Nightly wheels are stuck at 736c7b13c4.
2022-06-20 20:59:48 -07:00
matthewdeng
0ddc9d7213
[tune/air] catch pyarrow 8.0.0 error (#25900)
pyarrow 8.0.0 raises ArrowNotImplementedError instead of pyarrow.lib.ArrowInvalid for unrecognized URI.
2022-06-20 15:45:02 +01:00
Stephanie Wang
3de4657cae
[datasets] Use generators for merge stage in push-based shuffle (#25907) 2022-06-18 16:33:54 -07:00
Chen Shen
97582a802d
[Core] update protobuf to 3.19.4 (#25648)
The error message in #25638 indicates we should use protobuf>3.19.0 to generate code so that we can work with python protobuf >= 4.21.1. Try generating wheels to see if this works.
2022-06-18 16:06:56 -07:00
Peyton Murray
815dba542a
[data] Make ActorPoolStrategy kill pool of actors if exception is raised (#25803) 2022-06-17 23:57:58 -07:00
Yi Cheng
9fe3c815ec
[serve] Integrate GCS fault tolerance with ray serve. (#25637)
In this PR, we integrate GCS fault tolerance with Ray Serve.

- Add a 5s timeout for KV operations.

Rollback should be added to all methods; this will come in a follow-up.

Basic testing for the KV timeout in serve and deploy is added.
2022-06-17 23:50:39 -07:00
Stephanie Wang
93aae48b80
[dataset] Pipeline task submission during reduce stage in push-based shuffle (#25795)
Reduce stage in push-based shuffle fails to complete at 100k output partitions or more. This is likely because of driver or raylet load from having too many tasks in flight at once.

We can fix this from ray core too, but for now, this PR adds pipelining for the reduce stage, to limit the total number of reduce tasks in flight at the same time. This is currently set to 2 * available parallelism in the cluster. We have to pick which reduce tasks to submit carefully since these are pinned to specific nodes. The PR does this by assigning tasks round-robin according to the corresponding merge task (which get spread throughout the cluster).

In addition, this PR refactors the map, merge, and reduce stages to use a common pipelined iterator pattern, since they all have a similar pattern of submitting a round of tasks at a time, then waiting for a previous round to finish before submitting more.
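The common pipelined-submission pattern can be sketched generically. The `submit`/`wait_one` callables stand in for Ray task submission and waiting on results; all names here are illustrative:

```python
from collections import deque

def pipelined_submit(tasks, max_in_flight, submit, wait_one):
    """Run `tasks` while keeping at most `max_in_flight` in flight.

    `submit(task)` returns a handle; `wait_one(handles)` blocks until the
    oldest handle finishes, removes it, and returns its result.
    """
    in_flight = deque()
    results = []
    for task in tasks:
        if len(in_flight) >= max_in_flight:
            # Back-pressure: wait for the oldest task before adding more.
            results.append(wait_one(in_flight))
        in_flight.append(submit(task))
    # Drain whatever is still running.
    while in_flight:
        results.append(wait_one(in_flight))
    return results
```

Capping `max_in_flight` (e.g. at 2x the cluster parallelism, as the PR describes) is what keeps the driver and raylets from being overwhelmed at 100k+ partitions.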

Closes #25412.
2022-06-17 17:33:16 -07:00
Clark Zinzow
1701b923bc
[Datasets] [Tensor Story - 2/2] Add "numpy" batch format for batch mapping and batch consumption. (#24870)
This PR adds a NumPy "numpy" batch format for batch transformations and batch consumption that works with all block types. See #24811.
2022-06-17 16:01:02 -07:00
Archit Kulkarni
437f568445
Revert "[datasets] Use generators for merge stage in push-based shuffle (#25336)" (#25898)
This reverts commit d699351748.
2022-06-17 14:25:06 -07:00
xwjiang2010
97f42425da
[air] Consolidate Tune and Train report (#25558)
Consolidate tune/train report/checkpoint functionality by working with a unified Session interface.
The goal of this PR is to establish a solid Session and Session.report path. 
To reduce merge conflicts (as other folks are doing the whole package renaming) and to control the scope of this PR, I have intentionally left out some migrations. More PRs to follow. Feel free to comment on the ideal final state.


To give an idea of the final directory structure, this is for 2-worker DP training.
```
├── TensorflowTrainer_ce44d_00000_0_2022-06-15_14-40-42
│   ├── checkpoint_000000
│   │   ├── _current_checkpoint_id.meta.pkl
│   │   ├── _preprocessor.meta.pkl
│   │   ├── _timestamp.meta.pkl
│   │   ├── assets
│   │   ├── keras_metadata.pb
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index
│   ├── events.out.tfevents.1655329242.xw
│   ├── params.json
│   ├── params.pkl
│   ├── progress.csv
│   ├── rank_0
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── rank_1
│   │   └── my_model
│   │       ├── assets
│   │       ├── keras_metadata.pb
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   └── result.json
├── basic-variant-state-2022-06-15_14-40-42.json
├── experiment_state-2022-06-15_14-40-42.json
├── trainable.pkl
└── tuner.pkl
```
Update:
1. Updated a few classes to be backward compatible while the legacy Ray Train deprecation is ongoing.
2. Marked all places from (1) with "# TODO(xwjiang): Legacy Ray Train trainer clean up!" so we can easily clean them up once Antoni's work lands.
3. All CI and release tests are passing.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 13:49:01 -07:00
clarng
2b270fd9cb
apply isort uniformly for a subset of directories (#25824)
Simplify the isort filters and move them into the isort cfg file.

With this change, isort will no longer apply to diffs except for files in whitelisted directories (isort only supports a blacklist, so we implement the whitelist that way). This is much simpler than building our own whitelist logic, since our formatter runs multiple codepaths depending on whether it is formatting a single file, a PR, or the entire repo in CI.
2022-06-17 13:40:32 -07:00
Stephanie Wang
d699351748
[datasets] Use generators for merge stage in push-based shuffle (#25336)
This uses the generators introduced in #25247 to reduce memory usage during the merge stage in push-based shuffle. These tasks merge groups of map outputs, so it fits a generator pattern where we want to return merged outputs one at a time. Verified that this allows for merging more/larger objects at a time than the current list-based version.

I also tried this for the map stage in random_shuffle, but it didn't seem to make a difference in memory usage for Arrow blocks. I think this is probably because Arrow is already doing some zero-copy optimizations when selecting rows?

Also adds a new line to Dataset stats for memory usage. Unfortunately it's hard to get an accurate reading of physical memory usage in Python and this value will probably be an overestimate in a lot of cases. I didn't see a difference before and after this PR for the merge stage, for example. Arguably this field should be opt-in. For 100MB partitions, for example:
```
        Substage 0 read->random_shuffle_map: 10/10 blocks executed
        * Remote wall time: 1.44s min, 3.32s max, 2.57s mean, 25.74s total
        * Remote cpu time: 1.42s min, 2.53s max, 2.03s mean, 20.25s total
        * Worker memory usage (MB): 462 min, 864 max, 552 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used

        Substage 1 random_shuffle_reduce: 10/10 blocks executed
        * Remote wall time: 1.47s min, 2.94s max, 2.17s mean, 21.69s total
        * Remote cpu time: 1.45s min, 1.88s max, 1.71s mean, 17.09s total
        * Worker memory usage (MB): 462 min, 1047 max, 831 mean
        * Output num rows: 12500000 min, 12500000 max, 12500000 mean, 125000000 total
        * Output size bytes: 101562500 min, 101562500 max, 101562500 mean, 1015625000 total
        * Tasks per node: 10 min, 10 max, 10 mean; 1 nodes used
```


Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-06-17 12:29:24 -07:00
Stephanie Wang
293c122302
[dataset] Use polars for sorting (#25454) 2022-06-17 12:26:46 -07:00
Clark Zinzow
c2ab73fc40
[Datasets] Add ray_remote_args to read_text. (#23764) 2022-06-17 12:24:11 -07:00
Archit Kulkarni
85be093a84
[runtime env] Make all plugins return a List of URIs (#25825)
Followup from #24622.  This is another step towards pluggability for runtime_env.  Previously some plugin classes had `get_uri` which returned a single URI, while others had `get_uris` which returned a list.  This PR makes all plugins use `get_uris`, which simplifies the code overall.

Most of the lines in the diff just come from the new `format.sh` which sorts the imports.
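A compatibility shim for the old single-URI interface might have looked like the sketch below; this helper is hypothetical, since the PR instead migrates every plugin to `get_uris` directly:

```python
def get_uris_compat(plugin):
    """Normalize a plugin to the list-of-URIs interface.

    Prefers the new `get_uris()`; falls back to wrapping a legacy
    single-URI `get_uri()` result in a list (empty list for None).
    """
    if hasattr(plugin, "get_uris"):
        return plugin.get_uris()
    uri = plugin.get_uri()
    return [] if uri is None else [uri]
```

Having every plugin return a list removes exactly this kind of branching from the callers, which is the simplification the PR is after.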
2022-06-17 14:13:44 -05:00
Stephanie Wang
09857907b7
[data] Fix bug in computing merge partitions in push-based shuffle (#25865)
Fixes a bug in push-based shuffle in computing the merge task <> reduce task mapping when the number of reduce tasks does not divide evenly by the number of merge tasks. Previously, if there were N reduce tasks per merge task, we would do:
[N + 1, N + 1, ..., N + 1, all leftover tasks]
which could leave a negative number of reduce tasks in the last merge partition.

This PR changes it to:
[N + 1, N + 1, ..., N + 1, N, N, N, ...]

Closes #25863.
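The corrected distribution is ordinary balanced partitioning; as a sketch (the function name is illustrative, not Ray's internal code):

```python
def reduce_tasks_per_merge_task(num_reduce, num_merge):
    """Evenly distribute reduce tasks across merge tasks.

    The first (num_reduce % num_merge) merge tasks get one extra reduce
    task, i.e. [N+1, ..., N+1, N, ..., N], instead of dumping the entire
    remainder into the last partition.
    """
    base, extra = divmod(num_reduce, num_merge)
    return [base + 1 if i < extra else base for i in range(num_merge)]
```

Because `divmod` never produces a remainder larger than `num_merge - 1`, every partition count stays non-negative and the counts always sum to `num_reduce`.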
2022-06-17 10:19:00 -07:00
Alex Wu
187c21ce20
[gcs] Preserve job driver info for dashboard (#25880)
This PR ensures that GCS keeps the IP and PID information about a job so that it can be used to find the job's logs in the dashboard after the job terminates.

@alanwguo will handle any dashboard work in a separate PR.

Co-authored-by: Alex Wu <alex@anyscale.com>
2022-06-17 09:03:20 -07:00
Kai Fricke
40a9fdcb0f
[tune/air] Fix checkpoint conversion for objects (#25885)
Converting Tracked memory checkpoints was faulty and untested.
2022-06-17 10:41:52 +01:00
Siyuan (Ryans) Zhuang
fea8dd08fc
[workflow] Enhance dataset tests (#25876) 2022-06-16 22:50:31 -07:00
yuduber
26b2faf869
[data] add retry logic to ray.data parquet file reading (#25673) 2022-06-16 21:49:41 -07:00
Jiao
f6735f90c7
[Ray DAG] Move dag project folder out of experimental (#25532) 2022-06-16 19:15:39 -07:00
Clark Zinzow
e111b173e9
[Datasets] Workaround for unserializable Arrow JSON ReadOptions. (#25821)
pyarrow.json.ReadOptions are not picklable until Arrow 8.0.0, which we do not yet support. This PR adds a custom serializer for this type and ensures that said serializer is registered before each Ray task submission.
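The register-a-custom-serializer approach can be shown with the stdlib `copyreg` module. `ReadOptionsLike` below is a stand-in class mimicking an unpicklable config object; Ray's actual registration mechanism for the pyarrow type differs:

```python
import copyreg
import pickle

class ReadOptionsLike:
    """Stand-in for a config object whose instances don't pickle by
    default (analogous to pyarrow.json.ReadOptions before Arrow 8.0.0)."""

    def __init__(self, use_threads=True, block_size=1 << 20):
        self.use_threads = use_threads
        self.block_size = block_size

    def __reduce__(self):
        raise TypeError("ReadOptionsLike is not picklable")

def _reconstruct(use_threads, block_size):
    return ReadOptionsLike(use_threads=use_threads, block_size=block_size)

def _reduce_read_options(opts):
    # Serialize the constructor arguments instead of the object itself.
    return _reconstruct, (opts.use_threads, opts.block_size)

# Register the custom serializer before any pickling happens; the
# copyreg dispatch table takes precedence over the raising __reduce__.
copyreg.pickle(ReadOptionsLike, _reduce_read_options)
```

Registering before each task submission, as the PR does, guarantees the serializer is in place on whichever process ends up pickling the options.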
2022-06-16 18:33:59 -07:00
Stephanie Wang
977fff16a6
Set object spill config from env var (#25794)
Previously it was not possible to set the object spill config from the RAY_xxx environment variable, unlike other system configs. This is because the config is initialized in Python first, before the C++ config is parsed.
2022-06-16 16:08:19 -07:00
Antoni Baum
2120d3ea09
[AIR] Change the name of GBDTTrainable dynamically (#25804) 2022-06-16 13:54:09 -07:00
Simon Mo
d83773b2f1
[Serve][AIR] Support mixed input and output type, with batching (#25688) 2022-06-16 13:00:29 -07:00
Clark Zinzow
04280d6e4e
[Datasets] Preserve cached block metadata on LazyBlockList splits. (#25745)
Preserves cached block metadata on LazyBlockList splits. Before this PR, after these splits, all block metadata would have to be re-fetched.
2022-06-16 12:36:25 -07:00
Clark Zinzow
d98adbc448
[Datasets] Fix tensor extension string formatting (repr). (#25768)
Fixes tensor extension string formatting, e.g. when invoking the DataFrame repr.
2022-06-16 12:35:11 -07:00