Commit graph

12689 commits

Author SHA1 Message Date
Balaji Veeramani
14a05ee2f7
Add documentation issue template (#25116) 2022-05-24 23:47:22 -07:00
Eric Liang
4963dfaae0
[api] Add API stability annotations for all RLlib symbols and add to LINT (#25060) 2022-05-24 22:14:25 -07:00
Siyuan (Ryans) Zhuang
f67871c1f7
[workflow] Fast workflow indexing (#24767)
* workflow indexing

* simplify workflow storage API

* Only fix workflow status when updating the status.

* support status filter
2022-05-24 20:21:08 -07:00
mwtian
fa32cb7c40
Revert "[core] Resubscribe GCS in python when GCS restarts. (#24887)" (#25168)
This reverts commit 7cf4233858.
2022-05-24 18:13:40 -07:00
Stephanie Wang
f7692e4602
[core] Remove more expensive shuffle tests (#25165)
Now that the "smaller_instances" versions of these tests are stable, we can stop running the version that uses bigger instances.
2022-05-24 18:05:18 -07:00
Stephanie Wang
c8765385cb
[datasets] Fix scheduling strategy propagation and stats collection in push-based shuffle (#25108)
This fixes two bugs in Datasets push-based shuffle:

1. Scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same ray.remote options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a NodeAffinitySchedulingStrategy instead.
2. We were only reporting partial stats for the merge stage.

Related issue number

Fixing issue 1 is necessary for performance at large scale (#24480).
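For illustration, a minimal sketch of the shared-dict pitfall and the fix described above; the helper and argument names are hypothetical, not the actual Datasets internals:

```python
import copy

from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy


def split_stage_options(caller_remote_args: dict, merge_node_id: str):
    """Hypothetical helper: give the map and reduce stages independent option
    dicts, so overriding the reduce stage's scheduling strategy no longer
    drops the caller-specified strategy from the map stage."""
    map_options = copy.deepcopy(caller_remote_args)     # keeps the caller's strategy
    reduce_options = copy.deepcopy(caller_remote_args)  # safe to override below
    reduce_options["scheduling_strategy"] = NodeAffinitySchedulingStrategy(
        node_id=merge_node_id, soft=True
    )
    return map_options, reduce_options
```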
2022-05-24 18:01:40 -07:00
Balaji Veeramani
f032849aa2
[AIR] Don't ravel predictions in TensorflowPrediction.predict (#25138)
`TensorflowPredictor.predict` doesn't correctly produce logits. For more information, see #25137.
2022-05-24 17:38:48 -07:00
xwjiang2010
51dbd99a25
[air] Minor Doc updates (#25097)
Update a few docs and param names.
2022-05-24 17:15:03 -07:00
mwtian
f79b826f31
[Dashboard] avoid showing disk info when it is unavailable (#24992) 2022-05-24 17:13:47 -07:00
Philipp Moritz
323605d169
Support file:// for runtime_env working directories in jobs (#25062)
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories.
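As a rough usage sketch (the NFS mount path below is hypothetical):

```python
import ray

# Point the runtime_env working_dir at a directory on a cluster-wide NFS mount
# instead of packaging and uploading a local directory (path is hypothetical).
ray.init(
    address="auto",
    runtime_env={"working_dir": "file:///mnt/cluster_nfs/my_project"},
)
```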

Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-05-24 16:17:18 -07:00
Jiajun Yao
00cdd8dce5
Add chaos test for dataset shuffle (#25161)
Add chaos tests for dataset shuffle: both push-based and non-push-based.
2022-05-24 15:12:20 -07:00
Jun Gong
93ff0beb4e
[RLlib] Introduce utils to serialize gym Spaces (and thus ViewRequirements). (#25007) 2022-05-24 21:12:20 +02:00
Jiao
f27e85cd7d
[Serve][Deployment Graph][Perf] Add minimal executor DAGNode (#24754)
closes #24475

The current deployment graph has significant perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to power the DAG API, especially for deeply nested objects in args where we rely on pickling; but as a result, each execution re-creates and replaces every `DAGNode` instance involved, which incurs overhead.

Some overhead is inevitable due to pickling and executing DAGNode Python code, but it can be kept quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds.

Meanwhile, the elephant in the room is that DeploymentNode and its relatives do far more work in their constructors than necessary, slowing everything down. So the fix is as simple as:

1) Introduce a new set of executor DAG node types that contain the absolute minimum of information: they only preserve the DAG structure, the traversal mechanism, and the ability to call the relevant deployment handles.
2) Add a simple new pass in build() that generates executor nodes and replaces the originals, producing a final executor DAG that runs the graph.

The current ray dag -> serve dag conversion mixes in a lot of logic related to deployment generation and init args. In the longer term we should remove it, but since correctness depends on it, I'd rather leave that to a separate PR.
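A rough sketch of what a minimal executor node looks like; the class and attribute names are hypothetical, not the actual Serve internals:

```python
class ExecutorDAGNode:
    """Hypothetical minimal executor node: it keeps only the DAG structure
    (its bound upstream args) and the deployment handle needed to execute."""

    def __init__(self, deployment_handle, bound_args: tuple, bound_kwargs: dict):
        self._handle = deployment_handle
        self._bound_args = bound_args
        self._bound_kwargs = bound_kwargs

    def execute(self, *request_args):
        # Resolve upstream ExecutorDAGNode args first (plain DAG traversal),
        # then call this node's deployment handle. Nothing is re-created
        # per request, unlike the original DAGNode path.
        resolved = [
            arg.execute(*request_args) if isinstance(arg, ExecutorDAGNode) else arg
            for arg in self._bound_args
        ]
        return self._handle.remote(*resolved, **self._bound_kwargs)
```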

### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```

### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```

### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
2022-05-24 13:23:38 -05:00
shrekris-anyscale
8b3451318c
[Serve] Update Serve status formatting and processing (#24839) 2022-05-24 11:07:41 -07:00
Jiajun Yao
b825a839f9
Mark dataset_shuffle_push_based_sort_1tb as stable (#25162)
dataset_shuffle_push_based_sort_1tb has been passing consistently for weeks.
2022-05-24 11:07:27 -07:00
Kai Fricke
aaee8f09f1
[tune/train] Consolidate checkpoint manager 1: Common checkpoint manager class (#24771)
This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core.

- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core
- This PR prepares Tune to move to a CheckpointStrategy object
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather on runtime for Ray Train)

The consolidation is split into three PRs:

1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train
3. #24430 - based on #24772, adds the integration for Ray Tune
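A condensed sketch of the consolidation pattern, with hypothetical class names (the real classes live in the Train and Tune modules):

```python
class _CommonCheckpointManager:
    """Hypothetical shared core: registers checkpoints and keeps the last N."""

    def __init__(self, keep_checkpoints_num: int = 1):
        self._keep = keep_checkpoints_num
        self._checkpoints = []

    def register_checkpoint(self, checkpoint):
        self._checkpoints.append(checkpoint)
        # Simplified retention policy: keep only the most recent N checkpoints.
        self._checkpoints = self._checkpoints[-self._keep:]


class TrainCheckpointManager(_CommonCheckpointManager):
    """Train-specific adjustments layered on top of the common core."""


class TuneCheckpointManager(_CommonCheckpointManager):
    """Tune-specific adjustments layered on top of the common core."""
```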
2022-05-24 19:07:12 +01:00
Jiajun Yao
603cba646a
Use OBOD report as the source of truth of in-memory locations of object (#25004)
* hang

* update

* up

* up

* comment
2022-05-24 10:25:44 -07:00
Nintorac
81c0b24164
[tune/docs] fix typo (#25109) 2022-05-24 18:20:10 +01:00
Jimmy Yao
9e3c88d727
[GCP] Update TPU Region (#25123)
The available TPU regions have changed; there is no `central_f1` now.
2022-05-24 09:38:06 -07:00
Kai Fricke
6a4b361886
[ludwig] Upgrade jsonschema for ludwig tests (#25155)
Ludwig 0.5.1 requires jsonschema>4, so we have to install it in the test environment.

Related: ludwig-ai/ludwig#2055
2022-05-24 17:05:04 +01:00
Edward Oakes
65d21b7ae6
[job submission] Handle env_vars: None case properly in supervisor runtime_env logic (#25087) 2022-05-24 11:01:19 -05:00
Artur Niederfahrenhorst
d76ef9add5
[RLLib] Fix RNNSAC example failing on CI + fixes for recurrent models for other Q Learning Algos. (#24923) 2022-05-24 14:39:43 +02:00
mwtian
7013b32d15
[Release] prefer last cluster env version in release tests (#24950)
Currently the release test runner prefers the first successfully built version of a cluster env instead of the last version. But sometimes a cluster env may build successfully on Anyscale yet fail to launch a cluster (e.g. version 2 here), or new dependencies need to be installed, so a new version needs to be built. The existing logic always picks the 1st successful build and cannot pick up the new cluster env version.

Although this is an edge case (tweaking cluster env versions, with the same Ray wheel or cluster env name), I believe it is possible for others to run into it.
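The intended selection logic is roughly the following sketch; the build-record format here is hypothetical:

```python
def pick_cluster_env_build(builds: list) -> dict:
    """Hypothetical selection logic: prefer the most recent successful build
    of a cluster env instead of the first one."""
    successful = [b for b in builds if b["status"] == "succeeded"]
    if not successful:
        raise RuntimeError("No successful cluster env build found")
    return successful[-1]  # last (latest) successful version
```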

Also, avoid running most of the CI tests for changes under release/ray_release/.
2022-05-24 13:26:54 +01:00
Kai Fricke
804b6b11d1
Revert "[Java] Remove RayRuntimeInternal class (#25016)" (#25139)
This reverts commit 4026b38b09.

Broke test_raydp_dataset
2022-05-24 13:17:47 +01:00
SangBin Cho
a7e759317b
[State Observability API] Error handling (#24413)
This improves error handling per https://docs.google.com/document/d/1IeEsJOiurg-zctOcBjY-tQVbsCmURFSnUCTkx_4a7Cw/edit#heading=h.pdzl9cil9e8z (the RPC part).

Semantics:
If all queries to the source fail, raise a RayStateApiException.

If only some queries fail, warnings.warn about the partial failure when print_api_stats=True. This is True for the CLI, and False when used from the Python API or when json / yaml output is required.
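A hedged sketch of these semantics; the helper, its arguments, and the stand-in exception class are illustrative only:

```python
import warnings


class RayStateApiException(Exception):
    """Stand-in for the real exception type named in this PR."""


def summarize_query_results(results, print_api_stats: bool = False):
    """Hypothetical helper showing the error-handling semantics above.

    `results` is a list of (source, data_or_None, error_or_None) tuples.
    """
    failures = [(src, err) for src, _, err in results if err is not None]
    if failures and len(failures) == len(results):
        # Every source failed: surface a hard error.
        raise RayStateApiException(f"All queries failed: {failures}")
    if failures and print_api_stats:
        # Partial failure: warn only in CLI-style usage.
        warnings.warn(f"Partial failures from {len(failures)} source(s): {failures}")
    return [data for _, data, err in results if err is None]
```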
2022-05-24 03:56:49 -07:00
Sven Mika
e73c37cc17
[RLlib] MADDPG: Move into main algorithms folder and add proper unit and learning tests. (#24579) 2022-05-24 12:53:53 +02:00
Adrish Dey
e7e75b46e1
[tune] rolling back wandb service. replacing deprecated wandb methods (#25132)
Follow up: #24017

Briefly, the wandb service is still in an experimental stage and is not ready to be released as an integration without extensive testing. Hence, we are rolling back the update we recently made to the integration until this feature is ready to be shipped.
2022-05-24 11:22:11 +01:00
Sven Mika
4e99a57bab
[RLlib] Add @OverrideToImplementCustomLogic decorators to some Trainer class methods. (#24684) 2022-05-24 11:30:50 +02:00
Gagandeep Singh
5b9b4fa018
Ignore previous tasks before submitting ones via map and map_unordered (#23684) 2022-05-24 00:20:58 -07:00
Dmitri Gekhtman
806c187878
[autoscaler] Flush stdout and stdin when running commands. (#19473)
Flush command stdout/stderr before exiting CommandRunner.run, so that setup command output is less likely to get swallowed.
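The change amounts to something like this sketch (the real logic lives in the autoscaler's CommandRunner; the function here is illustrative):

```python
import subprocess
import sys


def run_command(cmd: list) -> int:
    """Sketch of CommandRunner.run-style behavior: flush our own stdout/stderr
    before returning so the setup command's output isn't swallowed."""
    try:
        return subprocess.run(cmd).returncode
    finally:
        sys.stdout.flush()
        sys.stderr.flush()
```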
2022-05-23 23:17:30 -07:00
Jiajun Yao
b05b28b209
HandlePinObjectIds should return error if the object doesn't exist (#25104)
When the primary copy of an object is lost, the owner will try to pin a secondary copy. In the meantime, the secondary copy might be evicted. In this case, the PinObjectIDs RPC call should return an error to let the owner know that the pin failed. Otherwise the owner will mistakenly think the secondary copy is pinned.
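In rough Python-flavored pseudocode (the real handler is C++ in the raylet; all names here are hypothetical), the intended behavior is:

```python
def handle_pin_object_ids(local_objects: dict, object_ids: list) -> dict:
    """Hypothetical sketch: report per-object success so the owner can react
    when a secondary copy was already evicted."""
    results = {}
    for object_id in object_ids:
        if object_id in local_objects:
            local_objects[object_id].pinned = True
            results[object_id] = "OK"
        else:
            # The object was evicted between the owner's decision and this call:
            # return an error instead of pretending the pin succeeded.
            results[object_id] = "ObjectNotFound"
    return results
```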
2022-05-23 23:12:03 -07:00
xwjiang2010
8703d5e9d0
[air preprocessor] Add limit to OHE. (#24893) 2022-05-23 22:37:15 -07:00
Balaji Veeramani
da5cf93d97
Create .git-blame-ignore-revs for black formatting (#25118) 2022-05-23 21:55:57 -07:00
Zhe Zhang
873c44d984
[Docs] Add "Examples" block to Ray Data landing page, and consistently use bold font (#24994) 2022-05-23 21:22:00 -07:00
Qing Wang
4026b38b09
[Java] Remove RayRuntimeInternal class (#25016)
Since we have already removed support for multiple workers in one process, remove RayRuntimeInternal accordingly.
2022-05-24 09:22:48 +08:00
Yi Cheng
7cf4233858
[core] Resubscribe GCS in python when GCS restarts. (#24887)
This is a follow-up PR to https://github.com/ray-project/ray/pull/24813 and https://github.com/ray-project/ray/pull/24628.

Unlike the change in the C++ layer, where GCS broadcasts a request to the raylet/core_worker and the client side does the resubscription, in the Python layer we detect the failure on the client side.

In case of a failure, the protocol is (see the sketch at the end of this message):

1. Call subscribe.
2. If the resubscribe times out, throw an exception; this will crash the system. This is OK because if GCS has been down longer than expected, we expect the Ray cluster to be down as well.
3. Continue to poll once subscribe succeeds.

However, there is an extreme case where things might break: the client might fail to detect a failure.

This can happen if the long poll has returned and the Python layer is doing its own work, and before it sends another long poll, GCS restarts and recovers.

We are not going to handle this case here because:
1. GCS usually takes several seconds to come back up, and the Python layer's work is simply pushing data into a queue (sync version). The async version is only used in the Dashboard, which is not a critical component.
2. Pubsub in the Python layer does not do critical work: it handles logs/errors for Ray jobs.
3. The Dashboard can just restart to fix the issue.


A known issue is that we might miss logs in case of a GCS failure, for the following reasons:

- Python-side pubsub only does best-effort publishing. If publishing fails too many times, it skips the message (messages are lost on the producer side).
- If a message is pushed to GCS but the worker hasn't resubscribed yet, the pushed message will be lost (messages are lost on the consumer side).

We think this is reasonable and valid behavior given that logs are not defined to be a critical component, and we'd like to simplify the design of pubsub in GCS.

Another thing is `run_functions_on_all_workers`. We plan to stop using it within Ray core and deprecate it in the longer term. But it won't cause a problem for the current cases because:

1. It's only set in the driver, and we don't support creating a new driver when GCS is down.
2. When GCS is down, we don't support starting new Ray workers.

And `run_functions_on_all_workers` is only used when we initialize drivers/workers.
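A schematic sketch of the client-side protocol above; the subscriber API and exception type are hypothetical:

```python
import time


def poll_with_resubscribe(subscriber, handle_message, resubscribe_timeout_s=30.0):
    """Hypothetical loop: on a failed poll, resubscribe with a timeout; if the
    resubscription times out, raise and let the process crash on purpose."""
    while True:
        try:
            message = subscriber.poll()  # long-poll for the next message
        except ConnectionError:
            deadline = time.monotonic() + resubscribe_timeout_s
            while True:
                try:
                    subscriber.subscribe()  # step 1: call subscribe again
                    break
                except ConnectionError:
                    if time.monotonic() > deadline:
                        raise  # step 2: GCS down longer than expected; crash
                    time.sleep(1)
            continue  # step 3: resume polling once subscribe succeeds
        handle_message(message)
```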
2022-05-23 13:06:33 -07:00
Antoni Baum
36b1b4ce0c
Fix filelock in _delete_path (#25093) 2022-05-23 20:58:02 +01:00
Balaji Veeramani
50c31b8466
[Data] Add partitioning classes to Data API reference (#24203) 2022-05-23 09:34:41 -07:00
shrekris-anyscale
b9fb902a4b
Revert "[serve] Use soft constraint for placing controller on the head node (#24934)" (#25050)
This reverts commit 737d16328c.
2022-05-23 11:31:23 -05:00
Sven Mika
37799751df
[Serve + RLlib] Fix serve tutorial_rllib for Win. PyGame needs to be installed as of gym==0.23. (#25080) 2022-05-23 17:43:35 +02:00
Guyang Song
1bc91a4129
[doc] Add info about eager_install to runtime_env FAQ (#25008) 2022-05-23 10:26:57 -05:00
Archit Kulkarni
a67c8a0739
[runtime_env] Add temporary URI reference to prevent URI deletion before job starts (#24719)
Packages are uploaded to the GCS for `runtime_env`.  These packages are garbage collected when their refcount becomes zero.

The problem is the reference doesn't get incremented until the job starts, which happens after the package is uploaded.  It's possible for the package's refcount to go to zero in between the upload and when the job starts, causing the package to be deleted before it's needed by the job.  It's likely the cause of https://github.com/ray-project/ray/issues/23423.

We can't just increment the refcount at the time of upload, because if the script is killed before the job is started (e.g. via Ctrl-C) then the reference will never be decremented and the package will never be deleted.

The solution in this PR is to increment the refcount at the time of upload, but automatically decrement after a configurable timeout (default 30s).  This should be enough time for the job to start.  When the job starts, it increments the refcount as usual and decrements it when the job finishes or is killed.
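A hedged sketch of the temporary-reference idea; the class, the timer mechanism, and the deleter callback are illustrative, not the actual GCS implementation:

```python
import threading


class UriReferenceTable:
    """Hypothetical refcount table with temporary references that expire."""

    def __init__(self, delete_package, temporary_ref_timeout_s: float = 30.0):
        self._delete_package = delete_package  # called when a URI's count hits 0
        self._timeout = temporary_ref_timeout_s
        self._refcounts = {}
        self._lock = threading.Lock()

    def add_reference(self, uri: str):
        with self._lock:
            self._refcounts[uri] = self._refcounts.get(uri, 0) + 1

    def remove_reference(self, uri: str):
        with self._lock:
            self._refcounts[uri] -= 1
            if self._refcounts[uri] <= 0:
                del self._refcounts[uri]
                self._delete_package(uri)  # garbage-collect the uploaded package

    def add_temporary_reference(self, uri: str):
        # Pin the package at upload time, then auto-release after the timeout,
        # so a script killed before the job starts cannot leak the reference.
        self.add_reference(uri)
        threading.Timer(self._timeout, self.remove_reference, args=(uri,)).start()
```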

Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
2022-05-23 10:25:04 -05:00
mwtian
50d49a2d7a
[Core] use higher niceness for workers (#24928)
Looking at past failures of dataset_shuffle_push_based_random_shuffle_1tb and running it myself, I noticed that raylets are killed because GCS was not able to respond to them in time. At the beginning of the run, there is a huge CPU spike that starves GCS of CPU. In the same spirit as adjusting workers to higher OOM scores, we can give workers a higher niceness so they yield CPU to GCS, the raylet, and other user processes.

I ran dataset_shuffle_push_based_random_shuffle_1tb a few times and no longer saw raylet deaths caused by GCS CPU starvation. There are other issues making the test fail, which I will continue to investigate.
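Mechanically, the change is roughly this sketch (the niceness value and function are illustrative; the real change is in the worker startup path):

```python
import os
import sys


def lower_worker_cpu_priority():
    """Sketch: raise the worker process's niceness so GCS, the raylet, and
    user processes win CPU contention during spikes."""
    if sys.platform != "win32":
        os.nice(15)  # higher niceness value = lower scheduling priority
```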
2022-05-23 08:12:51 -07:00
Kai Fricke
bcf77f38ee
[ci] Add second bazel mirror (#24913)
Builds are currently failing because `mirror.bazel.build`'s SSL certificate expired. This PR adds another bazel mirror to avoid this problem.

Builds are still failing because https://github.com/jupp0r/prometheus-cpp explicitly lists `mirror.bazel.build`.
2022-05-23 12:01:40 +01:00
Sven Mika
ec89fe5203
[RLlib] APEX-DQN and R2D2 config objects. (#25067) 2022-05-23 12:15:45 +02:00
Guyang Song
c6edfdd2a0
[script] expose options of xxx_port in 'ray start' command (#24919) 2022-05-23 17:18:09 +08:00
Eric Liang
d57cdd19ac
[tune] Fix stray extra log from runtime_env setup (#25071)
commit 40774ac219
Author: Qing Wang <kingchin1218@gmail.com>
Date:   Tue May 17 11:33:59 2022 +0800

    Minor changes for Java runtime env. (#24840)

The commit above introduced an extra log message that spams stdout when running with Tune. Move this log line to debug and add an e2e test check.
2022-05-23 09:54:24 +01:00
Sven Mika
dea9b86a16
[RLlib] MAML config objects. (#25066) 2022-05-23 10:14:24 +02:00
Sven Mika
baf8c2fa1e
[RLlib] TD3 config objects. (#25065) 2022-05-23 10:07:13 +02:00
Sven Mika
09886d7ab8
[RLlib] Upgrade gym 0.23 (#24171) 2022-05-23 08:18:44 +02:00