Commit graph

6856 commits

Author SHA1 Message Date
Stephanie Wang
1f9488724a
[core] Support generators for tasks with multiple return values (#25247)
Adds support for Python generators instead of just normal return functions when a task has multiple return values. This will allow developers to cut down on total memory usage for tasks, as they can free previous return values before allocating the next one on the heap.

The semantics for num_returns are about the same as for normal tasks: the task will throw an error if the number of values returned by the generator does not match the number of return values specified by the user. The one difference is that if num_returns=1, the task will throw the usual Python exception that the generator cannot be pickled.

As an example, this feature will allow us to reduce memory usage in Datasets shuffle operations (see #25200 for a prototype).
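A rough sketch of the new usage, based on the semantics described above (exact call patterns may differ from the final implementation):

```python
import ray

ray.init()

@ray.remote(num_returns=3)
def generate_blocks():
    for i in range(3):
        # Each yielded value becomes one of the task's three return objects
        # and can be freed before the next one is materialized.
        yield i * 2

ref_a, ref_b, ref_c = generate_blocks.remote()
print(ray.get([ref_a, ref_b, ref_c]))  # [0, 2, 4]
```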
2022-06-01 13:30:52 -07:00
Antoni Baum
9085ea23ab
[AIR] Improve BatchPredictor performance & disk usage (#25101)
This PR attempts to improve `BatchPredictor` performance with directory checkpoints by avoiding unnecessary filesystem operations.

In order to achieve that, the `Checkpoint` class is changed to always use a canonical path for the temporary directory if the Checkpoint has been created from an object ref. The directory is file-locked to prevent concurrent writes.

Tests have been added.
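A minimal sketch of the file-locking pattern (illustrative only; the helper name and the use of the third-party `filelock` package are assumptions, not the actual `Checkpoint` internals):

```python
import hashlib
import os
import tempfile

from filelock import FileLock  # third-party `filelock` package

def materialize_once(obj_ref_hex: str, write_fn) -> str:
    # Derive one canonical directory per object ref so that repeated
    # conversions of the same checkpoint reuse the path instead of
    # re-writing the data to a fresh temporary directory each time.
    canonical = os.path.join(
        tempfile.gettempdir(),
        "checkpoint_" + hashlib.sha1(obj_ref_hex.encode()).hexdigest(),
    )
    # Lock the directory to prevent concurrent writers from clobbering it.
    with FileLock(canonical + ".lock"):
        if not os.path.isdir(canonical):
            os.makedirs(canonical)
            write_fn(canonical)  # write the checkpoint contents exactly once
    return canonical
```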

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-06-01 21:45:39 +02:00
Eric Liang
905258dbc1
Clean up docstyle in python modules and add LINT rule (#25272) 2022-06-01 11:27:54 -07:00
Jiao
97190e4574
[Deployment Graph] Remove _execute_impl and json serde code for DeploymentNode IR (#25331) 2022-06-01 11:26:56 -07:00
Eric Liang
517f78e2b8
[minor] Add a job submission hook by env var (#25343) 2022-06-01 11:15:43 -07:00
SangBin Cho
ca75570f51
Revert "Revert "Revert "[dataset] Use polars for sorting (#24523)" (#24781)" (#25173)" (#25341)
This reverts commit 61676f26d3.
2022-06-01 10:49:12 -07:00
Chen Shen
49b8bbfd5e
[Core] Fix node affinity strategy when resource is empty (#25344)
Why are these changes needed?
Today, the Ray scheduler always picks a random node if the resource requirement is empty, regardless of the scheduling policy/strategy.

However, for the node affinity scheduling policy, we should not pick a random node but instead stick to the node affinity constraints.
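For example (a hedged sketch using the public `NodeAffinitySchedulingStrategy` API; the zero-CPU task stands in for an empty resource requirement):

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()

@ray.remote(num_cpus=0)  # empty resource requirement
def where_am_i():
    return ray.get_runtime_context().node_id.hex()

target_node_id = ray.nodes()[0]["NodeID"]
strategy = NodeAffinitySchedulingStrategy(node_id=target_node_id, soft=False)

# With the fix, the task should run on `target_node_id` instead of a
# randomly chosen node, even though it requests no resources.
print(ray.get(where_am_i.options(scheduling_strategy=strategy).remote()))
```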
2022-06-01 10:38:48 -07:00
siavash119
21f1e8a5c6
[Core] Use newly pushed actor for existing pending tasks (#24980)
Newly pushed actors are never matched with existing pending submits, so a newly added worker is not used to speed up tasks that are already queued. If `_return_actor` is called at the end of `push` instead, the actor is added to `_idle_actors` and immediately used if there are pending submits.
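A hedged usage sketch with the public `ray.util.ActorPool` (illustrative of the behavior change, not the internal `_return_actor` code):

```python
import ray
from ray.util import ActorPool

ray.init()

@ray.remote
class Doubler:
    def double(self, x):
        return 2 * x

pool = ActorPool([Doubler.remote()])
for i in range(4):  # queue more work than there are actors
    pool.submit(lambda actor, value: actor.double.remote(value), i)

# Previously, an actor pushed here stayed idle despite the pending submits;
# with this change it is handed one of the queued tasks right away.
pool.push(Doubler.remote())

print(sorted(pool.get_next() for _ in range(4)))  # [0, 2, 4, 6]
```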
2022-06-01 07:51:02 -07:00
SangBin Cho
44483a6c99
[Test][Windows] Skip test metrics.py in Windows (#25287)
Skip the flaky test_metrics on Windows
2022-06-01 05:37:29 -07:00
valtab
288a81b42e
[Train] Fix train callback nested recursive calling issue (#25015)
Move initialization of the `callback.results_preprocessor` property to the `callback.start_training()` method, which is only called once when training starts; currently, initialization is triggered per message.
2022-05-31 20:09:01 -07:00
Eric Liang
acf0da63b6
[data] [API] Remove unnecessary public argument in fully_executed() (#25267) 2022-05-31 16:48:35 -07:00
Eric Liang
5545bc5f45
[data] Fix pipeline pre-repeat caching, and improve the documentation (#25265)
Currently, the canonical way to cache a pipeline and repeat it, `ds.fully_executed().repeat()`, crashes. Add a test, and fix the docs and stats printing here.
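The pattern in question, as a minimal sketch (assuming the Datasets API of this release):

```python
import ray

ray.init()

ds = ray.data.range(1000)

# Execute (and cache) the dataset once, then repeat the cached copy; this is
# the pipeline pattern that previously crashed.
pipe = ds.fully_executed().repeat(2)
print(pipe.count())  # 2000
```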
2022-05-31 16:01:00 -07:00
shrekris-anyscale
7754645c83
Revert "[Serve] Deploy Serve deployment graphs via REST API (#25073)" (#25330)
This reverts commit 47709b3300.
2022-05-31 15:37:55 -07:00
shrekris-anyscale
47709b3300
[Serve] Deploy Serve deployment graphs via REST API (#25073) 2022-05-31 10:57:08 -07:00
Eric Liang
00a9dfb5d5
[data] [API] Add max_epoch argument to iter_epochs() for AIR 2022-05-31 10:53:49 -07:00
Philipp Moritz
f61997d90b
Fix typing of gcs_utils.py and add check to CI (#25285) 2022-05-31 10:45:42 -07:00
Eric Liang
c93e37aba5
[Datasets] Fix byte size calculation for non-trivial tensors (#25264)
The range datasource was incorrectly calculating tensor sizes if the dimensions != (1,).

Broken out from https://github.com/ray-project/ray/pull/25167/files
2022-05-31 07:30:41 -07:00
Eric Liang
65f908ea31
[Datasets] Dataset pipeline window by bytes fails when read fusion disabled (#25266)
This fixes `AttributeError: 'list' object has no attribute 'schema'` when read fusion is disabled via flag and pipelines are windowed by bytes.

Broken out from https://github.com/ray-project/ray/pull/25167/files
2022-05-31 07:23:23 -07:00
SangBin Cho
c9cec443dd
[State Observability] Improve existing state output (#25184)
NOTE: This is not the official API improvement, but it will help with dogfooding the feature before finalizing the output.

This PR improves the output state/metadata of existing state APIs.
2022-05-30 07:25:28 -07:00
Stephanie Wang
009df65a57
[core] Fix bug in spilling objects that have empty data field (#25192)
Ray sometimes stores errors as the object value in shared memory. These objects have no data since the error is stored in the metadata field. #25085 describes a bug where these objects fail to spill because the IO worker assumes that the data field must be non-empty. This would cause head-of-line blocking for any other objects waiting to spill and cause the whole job to hang. This PR fixes the issue by spilling these objects anyway.
Related issue number

Closes #25085.
2022-05-27 17:18:45 -07:00
Balaji Veeramani
fb22bc5ae3
[AIR] Fix bug where TensorflowPredictor.predict creates extra axis (#25199) 2022-05-27 13:46:12 -07:00
Stephanie Wang
61676f26d3
Revert "Revert "[dataset] Use polars for sorting (#24523)" (#24781)" (#25173)
Polars is significantly faster than the current pyarrow-based sort. This PR uses polars for the internal sort implementation if available. No API changes needed.

On my laptop, this makes sorting 1GB about 2x faster:

without polars

$ python release/nightly_tests/dataset/sort.py --partition-size=1e7 --num-partitions=100
Dataset size: 100 partitions, 0.01GB partition size, 1.0GB total
Finished in 50.23415923118591
...
Stage 2 sort: executed in 38.59s

        Substage 0 sort_map: 100/100 blocks executed
        * Remote wall time: 864.21ms min, 1.94s max, 1.4s mean, 140.39s total
        * Remote cpu time: 634.07ms min, 825.47ms max, 719.87ms mean, 71.99s total
        * Output num rows: 1250000 min, 1250000 max, 1250000 mean, 125000000 total
        * Output size bytes: 10000000 min, 10000000 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

        Substage 1 sort_reduce: 100/100 blocks executed
        * Remote wall time: 125.66ms min, 2.3s max, 1.09s mean, 109.26s total
        * Remote cpu time: 96.17ms min, 1.34s max, 725.43ms mean, 72.54s total
        * Output num rows: 178073 min, 2313038 max, 1250000 mean, 125000000 total
        * Output size bytes: 1446844 min, 18793434 max, 10156250 mean, 1015625046 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

with polars

$ python release/nightly_tests/dataset/sort.py --partition-size=1e7 --num-partitions=100
Dataset size: 100 partitions, 0.01GB partition size, 1.0GB total
Finished in 24.097432136535645
...
Stage 2 sort: executed in 14.02s

        Substage 0 sort_map: 100/100 blocks executed
        * Remote wall time: 165.15ms min, 595.46ms max, 398.01ms mean, 39.8s total
        * Remote cpu time: 349.75ms min, 423.81ms max, 383.29ms mean, 38.33s total
        * Output num rows: 1250000 min, 1250000 max, 1250000 mean, 125000000 total
        * Output size bytes: 10000000 min, 10000000 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

        Substage 1 sort_reduce: 100/100 blocks executed
        * Remote wall time: 21.21ms min, 472.34ms max, 232.1ms mean, 23.21s total
        * Remote cpu time: 29.81ms min, 460.67ms max, 238.1ms mean, 23.81s total
        * Output num rows: 114079 min, 2591410 max, 1250000 mean, 125000000 total
        * Output size bytes: 912632 min, 20731280 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

Related issue number

Closes #23612.
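A rough sketch of the two sort paths on a small Arrow table (illustrative only; the internal block sort is more involved):

```python
import polars as pl
import pyarrow as pa

table = pa.table({"key": [3, 1, 2], "value": [30, 10, 20]})

# pyarrow-based sort (the previous internal path).
sorted_arrow = table.sort_by([("key", "ascending")])

# polars-based sort (used internally when polars is importable).
sorted_polars = pl.from_arrow(table).sort("key").to_arrow()

assert sorted_arrow["key"].to_pylist() == sorted_polars["key"].to_pylist()
```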
2022-05-27 10:43:51 -07:00
Jiao
820cf4fdca
[Deployment Graph] Simplify our use of DeploymentSchema (#25202) 2022-05-27 10:35:32 -07:00
Balaji Veeramani
692335440b
[AIR] Directly convert TorchPredictor ndarray inputs to tensors (#25190)
If you pass a multidimensional input to `TorchPredictor.predict`, AIR errors. For more information about the error, see #25194.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-27 09:46:47 -07:00
Yi Cheng
0bc04f263e
[core] Remove gcs addr updater in core worker. (#24747)
Since we are using domain name resolution to get the new address of GCS, the GCS address updater is no longer necessary. This PR removes it.
2022-05-26 23:38:19 -07:00
shrekris-anyscale
3234fd3db4
[CI] Bump Bazel version to 4.2.2 (#24242) 2022-05-26 17:09:40 -07:00
Balaji Veeramani
f623c607f2
[AIR] Build model in TensorflowPredictor.predict (#25136)
`TensorflowPredictor.predict` doesn't work right now. For more information, see #25125.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-26 16:42:09 -07:00
Antoni Baum
087e356613
[CI] Make certain AIR tests run (#25229)
Fixes certain AIR tests not running and fixes broken tests.
2022-05-26 15:49:39 -07:00
Balaji Veeramani
59e624348e
[AIR] Run test_torch_predictor.py tests and fix failing test_init (#25207)
The tests in `test_torch_predictor.py` weren't running in CI. Also, `test_torch_predictor.py::test_init` was failing.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-26 15:15:20 -07:00
Balaji Veeramani
1ad5e619e1
[AIR] Run test_tensorflow_predictors.py and fix failing tests (#25208)
`test_tensorflow_predictors` wasn't running in CI. This fixes that and also fixes broken tests.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-26 15:15:03 -07:00
Eric Liang
d2f0c3b2f6
Clean up docstyle in data, ml, and tune packages (#25188) 2022-05-26 14:27:20 -07:00
Kai Fricke
d0dfac592a
[tune] Allow iterators in tune.grid_search (#25220)
`tune.choice` already accepts iterables, the same should be true for `tune.grid_search`.
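For example (a small sketch of the intended usage):

```python
from ray import tune

# With this change, grid_search accepts any iterable, not just a list.
config = {
    "batch_size": tune.grid_search(range(16, 129, 16)),     # a range object
    "lr": tune.grid_search(10 ** -i for i in range(2, 5)),  # a generator
}
```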
2022-05-26 17:32:39 +02:00
Yi Cheng
7fcea8a8ae
up (#25211) 2022-05-26 07:53:16 -07:00
Kai Fricke
6dac517554
[ci] Protobuf < 4 only in requirements.txt to unblock CI (#25214) 2022-05-26 11:18:14 +02:00
mwtian
fb2933a78f
Revert "Revert "Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets."" (#25031)" (#25057)
Reverts #25031

It looks to be still somewhat flaky.
2022-05-25 19:43:22 -07:00
Yi Cheng
dd3a43b901
[core] Add timeout and asyncio for internal kv. (#25126)
This PR adds timeout and asyncio for internal KV. This only applies to gcs_utils and not ray clients for now since this is purely for ray internal usage.
2022-05-25 18:09:11 -07:00
Stephanie Wang
5cee6135b4
[core] Fix bugs in data locality (#24698) (#25092)
Redo for PR #24698:

This fixes two bugs in data locality:

    When a dependent task is already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere.
    The locality policy did not take into account spilled locations.

Added C++ unit tests and Python tests for the above.

Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
2022-05-25 10:33:49 -07:00
Jiajun Yao
be93bb340d
Remove MLDataset (#25041)
MLDataset is replaced by Ray Dataset.
2022-05-25 09:33:29 -07:00
Kai Fricke
833e357a1f
[air] Do not use gzip for checkpoint dict conversion (#25177)
Gzipping binary data is inefficient and slows down data transfer significantly.
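An illustrative micro-benchmark of the cost being removed (numbers vary by machine; the random payload is a stand-in for already-binary checkpoint data):

```python
import gzip
import os
import pickle
import time

payload = {"weights": os.urandom(50_000_000)}  # incompressible binary blob

t0 = time.perf_counter()
raw = pickle.dumps(payload)
t_pickle = time.perf_counter() - t0

t0 = time.perf_counter()
compressed = gzip.compress(raw)
t_gzip = time.perf_counter() - t0

# gzip adds significant CPU time while barely shrinking binary data.
print(f"pickle: {t_pickle:.2f}s, gzip: {t_gzip:.2f}s, "
      f"ratio: {len(compressed) / len(raw):.3f}")
```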
2022-05-25 17:11:00 +02:00
Kai Fricke
67cd984b92
[tune] Add annotations/set scope for Tune classes (#25077)
This PR adds API annotations or changes the scope of several Ray Tune library classes.
2022-05-25 15:21:28 +02:00
Yi Cheng
af895b3676
[gcs] Fix detached actor fail to restart when GCS restarted. (#25131)
When loading data from GCS, detached actors were treated the same as normal actors.
But a detached actor lives beyond its job's scope and should be loaded even when the job is finished.
This PR fixes that.
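For reference, a detached actor is created like this (a minimal sketch; the actor name and namespace are placeholders):

```python
import ray

ray.init(namespace="example")

@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n

# A detached actor outlives the job that created it, which is why GCS must
# restore it on restart even after the creating job has finished.
counter = Counter.options(name="global_counter", lifetime="detached").remote()
print(ray.get(counter.incr.remote()))  # 1
```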
2022-05-25 01:58:52 -07:00
Siyuan (Ryans) Zhuang
f67871c1f7
[workflow] Fast workflow indexing (#24767)
* workflow indexing

* simplify workflow storage API

* Only fix workflow status when updating the status.

* support status filter
2022-05-24 20:21:08 -07:00
mwtian
fa32cb7c40
Revert "[core] Resubscribe GCS in python when GCS restarts. (#24887)" (#25168)
This reverts commit 7cf4233858.
2022-05-24 18:13:40 -07:00
Stephanie Wang
c8765385cb
[datasets] Fix scheduling strategy propagation and stats collection in push-based shuffle (#25108)
This fixes two bugs in Datasets push-based shuffle:

    The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same ray.remote options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a NodeAffinitySchedulingStrategy instead (see the sketch after this list).
    We were only reporting partial stats for the merge stage.
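A hedged sketch of the dict-sharing bug pattern (illustrative only, not the actual shuffle code):

```python
import copy

caller_options = {"scheduling_strategy": "SPREAD"}

# Buggy: the map and reduce stages share one options dict, so dropping the
# caller's strategy for the reduce stage also drops it from the map stage.
map_options = reduce_options = caller_options
reduce_options.pop("scheduling_strategy", None)
assert "scheduling_strategy" not in map_options  # the map stage lost it too

# Fix: give each stage its own copy before mutating.
caller_options = {"scheduling_strategy": "SPREAD"}
map_options = copy.deepcopy(caller_options)
reduce_options = copy.deepcopy(caller_options)
reduce_options.pop("scheduling_strategy", None)
assert map_options["scheduling_strategy"] == "SPREAD"
```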

Related issue number

Issue 1 is necessary for performance at large-scale (#24480).
2022-05-24 18:01:40 -07:00
Balaji Veeramani
f032849aa2
[AIR] Don't ravel predictions in TensorflowPrediction.predict (#25138)
`TensorflowPredictor.predict` doesn't correctly produce logits. For more information, see #25137.
2022-05-24 17:38:48 -07:00
xwjiang2010
51dbd99a25
[air] Minor Doc updates (#25097)
Update a few docs and param names.
2022-05-24 17:15:03 -07:00
Philipp Moritz
323605d169
Support file:// for runtime_env working directories in jobs (#25062)
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories.
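A hedged usage sketch via the Python job submission client (the NFS path, dashboard address, and entrypoint are placeholders):

```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")

# Point the job's working_dir at a directory that is already shared across
# the cluster (e.g. on NFS) instead of packaging and uploading a local copy.
client.submit_job(
    entrypoint="python train.py",
    runtime_env={"working_dir": "file:///mnt/shared_nfs/my_project"},
)
```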

Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-05-24 16:17:18 -07:00
Jiao
f27e85cd7d
[Serve][Deployment Graph][Perf] Add minimal executor DAGNode (#24754)
closes #24475

The current deployment graph has big perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to empower the DAG API, especially for deeply nested objects in args where we rely on pickling; but meanwhile, each execution ends up re-creating and replacing every `DAGNode` instance involved, which incurs overhead.

Some overhead is inevitable due to pickling and executing DAGNode Python code, but it can be kept quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds.

Meanwhile, the elephant in the room is that DeploymentNode and its relatives do far more work in their constructors than necessary, slowing everything down. So the fix is as simple as:

1) Introduce a new set of executor DAG node types that contain only the minimal information needed to preserve the DAG structure, its traversal mechanism, and the ability to call the relevant deployment handles.
2) Add a simple new pass in our build() that generates executor DAG nodes and replaces the originals, producing a final executor DAG to run the graph.

The current ray dag -> serve dag conversion mixes in a lot of logic related to deployment generation and init args. In the longer term we should remove that, but our correctness currently depends on it, so I'd rather leave it for a separate PR.

### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```

### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```

### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
2022-05-24 13:23:38 -05:00
shrekris-anyscale
8b3451318c
[Serve] Update Serve status formatting and processing (#24839) 2022-05-24 11:07:41 -07:00
Kai Fricke
aaee8f09f1
[tune/train] Consolidate checkpoint manager 1: Common checkpoint manager class (#24771)
This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core.

- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core
- This PR prepares Tune to move to a CheckpointStrategy object
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather than at runtime for Ray Train)

The consolidation is split into three PRs:

1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train
3. #24430 - based on #24772, adds the integration for Ray Tune
2022-05-24 19:07:12 +01:00