Commit graph

12969 commits

Author SHA1 Message Date
Balaji Veeramani
59e624348e
[AIR] Run test_torch_predictor.py tests and fix failing test_init (#25207)
The tests in `test_torch_predictor.py` weren't running in CI. Also, `test_torch_predictor.py::test_init` was failing.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-26 15:15:20 -07:00
Balaji Veeramani
1ad5e619e1
[AIR] Run test_tensorflow_predictors.py and fix failing tests (#25208)
`test_tensorflow_predictors` wasn't running in CI. This fixes that and also fixes broken tests.

Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
2022-05-26 15:15:03 -07:00
Eric Liang
d2f0c3b2f6
Clean up docstyle in data, ml, and tune packages (#25188) 2022-05-26 14:27:20 -07:00
Amog Kamsetty
e8440cf52b
[AIR] Incremental Learning Example (#24420)
Adds an example of domain-incremental learning on the Permuted MNIST dataset with a naive strategy.
2022-05-26 12:28:28 -07:00
kourosh hakhamaneshi
9684ea3af6
[RLlib] Fix TorchPolicyV2 bug. (#25203) 2022-05-26 20:49:26 +02:00
Kai Fricke
c90dacb09b
[ci/release] Use fullmatch instead of match for regex filters (#25225)
Currently, `name:many_actors` matches e.g. `many_actors` and `many_actors_smoke_test`, but it should match just one test. Thus we should use `re.fullmatch` instead of `re.match` (which would require `name:many_actors.*` to match both); see the illustration below.
2022-05-26 20:02:00 +02:00
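To illustrate the difference this commit relies on, here is a small standard-library sketch (the test names are the examples from the commit message):

```
import re

# re.match anchors only at the start, so a name filter behaves like a prefix match.
print(re.match(r"many_actors", "many_actors_smoke_test"))      # <re.Match ...>  (unwanted hit)

# re.fullmatch requires the whole string to match, so only the exact test name passes.
print(re.fullmatch(r"many_actors", "many_actors_smoke_test"))  # None
print(re.fullmatch(r"many_actors", "many_actors"))             # <re.Match ...>
```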
Stephanie Wang
5b304b60c4
Don't adjust OOM for IO workers (#25171)
We should try to avoid killing IO workers, since this can disrupt spilling and add a lot of load on Ray core. This will make it so that we prioritize workers running application tasks instead.
2022-05-26 10:47:21 -07:00
Kai Fricke
2cf20e5406
[ci/release] Use 1.12.1 as base image in app configs (#25216)
Many release tests are currently failing due to CUDA version incompatibilities. Pinning the base image to 1.12.1 seems to resolve the problem for the time being.
2022-05-26 18:58:20 +02:00
Kai Fricke
d0dfac592a
[tune] Allow iterators in tune.grid_search (#25220)
`tune.choice` already accepts iterables, so the same should be true for `tune.grid_search`; see the sketch below.
2022-05-26 17:32:39 +02:00
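A minimal sketch of what the change above enables (the search space is hypothetical; `tune.grid_search` is the existing Tune API, and after this change it should accept any iterable, not just a list):

```
from ray import tune

# Hypothetical search space: a generator is consumed into grid points,
# just like an explicit list would be.
config = {
    "lr": tune.grid_search(10 ** -i for i in range(1, 4)),  # 0.1, 0.01, 0.001
    "batch_size": tune.grid_search([32, 64]),
}
```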
Yi Cheng
7fcea8a8ae
up (#25211) 2022-05-26 07:53:16 -07:00
Amog Kamsetty
983d8b3db2
[AIR] Fix failing CI on master (#25201)
The AIR CI build has been failing on master since #25022.

#25022 moved the tests that require credentials, but we left the bazel command in the build pipeline. So even though all the tests are passing, the buildkite stage itself was failing since it tries to run tests that require credentials, but these tests no longer exist in the directory. This is only a problem for the master build since we don't run this command for PR builds.
2022-05-26 11:34:57 +02:00
Kai Fricke
6dac517554
[ci] Protobuf < 4 only in requirements.txt to unblock CI (#25214) 2022-05-26 11:18:14 +02:00
Qing Wang
65d863d349
Revert "Revert "[Java] Remove RayRuntimeInternal class (#25016)" (#25… (#25153)
This reverts commit 804b6b11d1.
2022-05-26 14:15:51 +08:00
xwjiang2010
ff1fb9b5a2
[air example] train a Keras model on tabular data and serve it. (#24898) 2022-05-25 22:19:35 -07:00
mwtian
fb2933a78f
Revert "Revert "Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets."" (#25031)" (#25057)
Reverts #25031

It still looks to be somewhat flaky.
2022-05-25 19:43:22 -07:00
mwtian
b2d41fc427
[Doc] update docker readme files to include Python versions (#25099)
Similar to #25053, update the documentation on the Docker site.
2022-05-25 19:42:24 -07:00
Yi Cheng
dd3a43b901
[core] Add timeout and asyncio for internal kv. (#25126)
This PR adds timeout and asyncio support for the internal KV. For now this only applies to gcs_utils and not Ray clients, since it is purely for Ray-internal usage.
2022-05-25 18:09:11 -07:00
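The commit above doesn't show the API, but the general shape is the usual asyncio timeout pattern. A generic sketch, not Ray's actual `gcs_utils` code (`kv_get` is a placeholder coroutine):

```
import asyncio

async def kv_get_with_timeout(kv_get, key: bytes, timeout_s: float = 5.0) -> bytes:
    # Bound the KV lookup so a hung GCS connection surfaces as
    # asyncio.TimeoutError instead of blocking the caller forever.
    return await asyncio.wait_for(kv_get(key), timeout=timeout_s)
```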
Sihan Wang
4de3ce5c25
[Serve][Doc] Add deploy graph about control_flow_based_on_user_inputs pattern doc (#24871) 2022-05-25 15:38:23 -07:00
Lixin Wei
ac620aeec0
[build] Add tools to generate compile_commands.json (#25180)
We want to use `clangd` as the language server. `clangd` is an awesome language server with many features and very accurate results, but it needs a `compile_commands.json` file to work.

This PR adds a popular bazel rule to generate this file.
2022-05-25 11:58:14 -07:00
kvaithin
e407953f95
[AIR] Change key to model in TensorflowTrainer (#25183)
As described in the related issue, using `model_weight` as the key throws an error.
This update points the user to use `model` as the key instead.

Co-authored-by: tamilflix <tamilflix30@gmail.com>
2022-05-25 19:56:46 +02:00
Stephanie Wang
5cee6135b4
[core] Fix bugs in data locality (#24698) (#25092)
Redo for PR #24698:

This fixes two bugs in data locality:

1. When a dependent task was already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere.
2. The locality policy did not take into account spilled locations.

Added C++ unit tests and Python tests for the above.

Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
2022-05-25 10:33:49 -07:00
javi-redondo
a8fc0c5015
Add landing & key concepts pages for clusters (#24379)
2022-05-25 10:23:50 -07:00
Jiajun Yao
be93bb340d
Remove MLDataset (#25041)
MLDataset is replaced by Ray Dataset.
2022-05-25 09:33:29 -07:00
Avnish Narayan
eaed256d68
[RLlib] Async parallel execution manager. (#24423) 2022-05-25 17:54:08 +02:00
Antoni Baum
2b6c6301e2
[CI] Fix typo in CI label (#25185) 2022-05-25 17:31:29 +02:00
Kai Fricke
833e357a1f
[air] Do not use gzip for checkpoint dict conversion (#25177)
Gzipping binary data is inefficient and slows down data transfer significantly.
2022-05-25 17:11:00 +02:00
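A rough illustration of the point above (sizes and timings are illustrative only): high-entropy binary data barely compresses, so the gzip pass mostly adds CPU time to the checkpoint conversion.

```
import gzip
import os
import pickle
import time

blob = {"weights": os.urandom(20_000_000)}  # stand-in for binary checkpoint contents

start = time.perf_counter()
raw = pickle.dumps(blob)
print(f"pickle only: {time.perf_counter() - start:.2f}s, {len(raw)} bytes")

start = time.perf_counter()
packed = gzip.compress(raw)
print(f"pickle+gzip: {time.perf_counter() - start:.2f}s, {len(packed)} bytes")
```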
Kai Fricke
67cd984b92
[tune] Add annotations/set scope for Tune classes (#25077)
This PR adds API annotations or changes the scope of several Ray Tune library classes.
2022-05-25 15:21:28 +02:00
Jun Gong
eaf9c941ae
[RLlib] Migrate PPO Impala and APPO policies to use sub-classing implementation. (#25117) 2022-05-25 14:38:03 +02:00
Yi Cheng
af895b3676
[gcs] Fix detached actor fail to restart when GCS restarted. (#25131)
When loading data from the GCS, we treated detached actors the same as normal actors.
But a detached actor lives beyond its job's scope and should be loaded even when the job has finished.
This PR fixes that.
2022-05-25 01:58:52 -07:00
Vasilios Mavroudis
edca96353f
[RLlib] Curiosity Bug Fix. (#24880) 2022-05-25 09:31:34 +02:00
Balaji Veeramani
14a05ee2f7
Add documentation issue template (#25116) 2022-05-24 23:47:22 -07:00
Eric Liang
4963dfaae0
[api] Add API stability annotations for all RLlib symbols and add to LINT (#25060) 2022-05-24 22:14:25 -07:00
Siyuan (Ryans) Zhuang
f67871c1f7
[workflow] Fast workflow indexing (#24767)
* workflow indexing

* simplify workflow storage API

* Only fix workflow status when updating the status.

* support status filter
2022-05-24 20:21:08 -07:00
mwtian
fa32cb7c40
Revert "[core] Resubscribe GCS in python when GCS restarts. (#24887)" (#25168)
This reverts commit 7cf4233858.
2022-05-24 18:13:40 -07:00
Stephanie Wang
f7692e4602
[core] Remove more expensive shuffle tests (#25165)
Now that the "smaller_instances" versions of these tests are stable, we can stop running the version that uses bigger instances.
2022-05-24 18:05:18 -07:00
Stephanie Wang
c8765385cb
[datasets] Fix scheduling strategy propagation and stats collection in push-based shuffle (#25108)
This fixes two bugs in Datasets push-based shuffle:

1. The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same ray.remote options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a NodeAffinitySchedulingStrategy instead (see the sketch below).
2. We were only reporting partial stats for the merge stage.

Related issue number

Issue 1 is necessary for performance at large scale (#24480).
2022-05-24 18:01:40 -07:00
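The first bug above boils down to Python dict aliasing. A hypothetical sketch (variable names are illustrative, not the actual Datasets code):

```
# Both stages reference the *same* options dict...
map_options = reduce_options = {"scheduling_strategy": "SPREAD", "num_cpus": 1}

# ...so dropping the caller's strategy for the reduce stage also drops it for the map stage.
del reduce_options["scheduling_strategy"]
assert "scheduling_strategy" not in map_options  # map stage silently lost the caller's setting
```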
Balaji Veeramani
f032849aa2
[AIR] Don't ravel predictions in TensorflowPredictor.predict (#25138)
`TensorflowPredictor.predict` doesn't correctly produce logits. For more information, see #25137.
2022-05-24 17:38:48 -07:00
xwjiang2010
51dbd99a25
[air] Minor Doc updates (#25097)
Update a few docs and param names.
2022-05-24 17:15:03 -07:00
mwtian
f79b826f31
[Dashboard] avoid showing disk info when it is unavailable (#24992) 2022-05-24 17:13:47 -07:00
Philipp Moritz
323605d169
Support file:// for runtime_env working directories in jobs (#25062)
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories.

Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-05-24 16:17:18 -07:00
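A minimal sketch of the newly supported scheme (the NFS path is illustrative; `runtime_env={"working_dir": ...}` is the existing Ray API, and this PR adds `file://` URI support for it):

```
import ray

# Point the working directory at a path on a cluster-shared NFS mount instead of
# uploading a local directory; the file:// scheme is what this PR adds support for.
ray.init(runtime_env={"working_dir": "file:///mnt/shared_nfs/my_project"})
```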
Jiajun Yao
00cdd8dce5
Add chaos test for dataset shuffle (#25161)
Add chaos tests for dataset shuffle: both push-based and non-push-based.
2022-05-24 15:12:20 -07:00
Jun Gong
93ff0beb4e
[RLlib] Introduce utils to serialize gym Spaces (and thus ViewRequirements). (#25007) 2022-05-24 21:12:20 +02:00
Jiao
f27e85cd7d
[Serve][Deployment Graph][Perf] Add minimal executor DAGNode (#24754)
closes #24475

The current deployment graph has big perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to power the DAG API, especially for deeply nested objects in args, where we rely on pickling; but as a result, each execution re-creates and replaces every `DAGNode` instance involved, which incurs overhead.

Some overhead is inevitable due to pickling and executing DAGNode Python code, but it can be kept quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds.

Meanwhile, the elephant in the room is that DeploymentNode and its relatives do much more work in their constructors than necessary, slowing everything down. So the fix is as simple as:

1) Introduce a new set of executor DAG node types that contain only the minimal information needed to preserve the DAG structure, the traversal mechanism, and the ability to call the relevant deployment handles.
2) Add a simple new pass in build() that generates and replaces nodes with executor DAG nodes, producing a final executor DAG to run the graph.

The current ray DAG -> serve DAG transformation mixes in a lot of logic related to deployment generation and init args. In the longer term we should remove that, but our correctness depends on it, so I'd rather leave it for a separate PR.

### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```

### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```

### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
2022-05-24 13:23:38 -05:00
shrekris-anyscale
8b3451318c
[Serve] Update Serve status formatting and processing (#24839) 2022-05-24 11:07:41 -07:00
Jiajun Yao
b825a839f9
Mark dataset_shuffle_push_based_sort_1tb as stable (#25162)
dataset_shuffle_push_based_sort_1tb has been passing consistently for weeks.
2022-05-24 11:07:27 -07:00
Kai Fricke
aaee8f09f1
[tune/train] Consolidate checkpoint manager 1: Common checkpoint manager class (#24771)
This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core.

- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core
- This PR prepares Tune to move to a CheckpointStrategy object
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather on runtime for Ray Train)

The consolidation is split into three PRs:

1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train
3. #24430 - based on #24772, adds the integration for Ray Tune
2022-05-24 19:07:12 +01:00
Jiajun Yao
603cba646a
Use OBOD report as the source of truth of in-memory locations of object (#25004)
* hang

* update

* up

* up

* comment
2022-05-24 10:25:44 -07:00
Nintorac
81c0b24164
[tune/docs] fix typo (#25109) 2022-05-24 18:20:10 +01:00
Jimmy Yao
9e3c88d727
[GCP] Update TPU Region (#25123)
The region changed; there is no `central_f1` anymore.
2022-05-24 09:38:06 -07:00
Kai Fricke
6a4b361886
[ludwig] Upgrade jsonschema for ludwig tests (#25155)
Ludwig 0.5.1 requires jsonschema>4, so we have to install it in the test environment.

Related: ludwig-ai/ludwig#2055
2022-05-24 17:05:04 +01:00