Commit graph

12183 commits

Avnish Narayan
6e68b6bef9
[RLlib] DD-PPO training iteration fn. (#24118)
We had unreported merge conflicts with DDPPO. This PR closes and combines #24092, #24035, #24030 and #23096

Co-authored-by: sven1977 <svenmika1977@gmail.com>
2022-04-22 15:22:14 -07:00
xwjiang2010
d7da0d706e
[rllib] Only conditionally import JaxCategorical in catalog.py (#24086)
* Experiment with fewer imports in catalog.py

* lint
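
For context, a generic sketch of the conditional-import pattern the title refers to; the module path and helper name are illustrative, not the actual `catalog.py` code:

```python
# Generic sketch of a conditional-import guard; module path and helper are
# illustrative, not the actual catalog.py code.
try:
    from jax_dists import JaxCategorical  # hypothetical module that needs jax installed
except ImportError:
    JaxCategorical = None

def get_jax_categorical():
    # Only fail when the JAX distribution is actually requested.
    if JaxCategorical is None:
        raise ImportError("JaxCategorical requested, but jax is not installed.")
    return JaxCategorical
```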
2022-04-22 14:51:35 -07:00
Chen Shen
1d981e0cf1
[doc] fix /cluster/config.html #23720
closes #23560
2022-04-22 10:13:12 -07:00
Avnish Narayan
3bf907bcf8
[RLlib] Don't modify environments via the env checker utilities. (#24083) 2022-04-22 18:39:47 +02:00
Kai Fricke
bb341eb1e4
Revert "Revert "[tune] Also interrupt training when SIGUSR1 received"" (#24101)
* Revert "Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085)"

This reverts commit 00595653ed.

The failure on Windows has been addressed by conditionally registering the signal handler only if it is available.
2022-04-22 11:27:38 +01:00
Kai Fricke
0e2dd40451
[tune] reuse_actors per default for function trainables (#24040)
Function trainables don't carry state, so they should be reused by default as a performance optimization.
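
A small usage sketch, using the public `tune.run` flag this change affects:

```python
from ray import tune

def trainable(config):
    # Stateless function trainable: nothing needs resetting between trials,
    # so the underlying actor can safely be reused for the next trial.
    tune.report(score=config["x"] ** 2)

# With this change, reuse_actors defaults to True for function trainables;
# passing it explicitly still overrides the default.
tune.run(trainable, config={"x": tune.grid_search([1, 2, 3])}, reuse_actors=True)
```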
2022-04-22 10:53:54 +01:00
Kai Fricke
9f7170e444
Revert "Revert revert #23906 [RLlib] DD-PPO training iteration function implementation. (#24035)" (#24103)
This reverts commit a337fd994e.
2022-04-22 09:58:58 +01:00
jon-chuang
e6a458a31e
[CI] Create zip of ray session_latest/logs dir on test failure and upload to buildkite via /artifact-mount (#23783)
Creates a zip of the session_latest dir with the test name and timestamp upon Python test failure. Writes to the dir specified by the env var `RAY_TEST_FAILURE_LOGS_DIR`. No-op if the env var does not exist.

Downstream consumer (e.g. CI) can upload all created artifacts in this dir. Thereby, PR submitters can more easily debug their CI failures, especially if they can't repro locally.

Limitations:
- a conftest.py file importing the main ray conftest.py needs to be present in the same dir as the test. This presents a challenge for e.g. dashboard tests, which are highly scattered
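
A rough sketch of the kind of hook this adds, assuming a pytest `conftest.py` along these lines (the actual implementation may differ):

```python
# Sketch: on test failure, zip the Ray session logs into the directory given
# by RAY_TEST_FAILURE_LOGS_DIR; do nothing if the env var is unset.
import os
import shutil
import time

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    dest_dir = os.environ.get("RAY_TEST_FAILURE_LOGS_DIR")
    if dest_dir and report.when == "call" and report.failed:
        logs_dir = "/tmp/ray/session_latest/logs"  # default Ray session logs location
        archive_base = os.path.join(dest_dir, f"{item.name}_{int(time.time())}")
        if os.path.isdir(logs_dir):
            shutil.make_archive(archive_base, "zip", logs_dir)
```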
2022-04-22 09:48:53 +01:00
Chong-Li
1807cff9b6
Replace the legacy ResourceSet & SchedulingResources at Raylet (#23173) 2022-04-22 14:46:38 +08:00
SangBin Cho
30ab5458a7
[State Observability] Tasks and Objects API (#23912)
This PR implements ray list tasks and ray list objects APIs.

NOTE: You can ignore the merge conflict for now. It is because the first PR was reverted. There's a fix PR open now.
2022-04-21 18:45:03 -07:00
Amog Kamsetty
f500997a65
[AIR] GNN example cleanup (#24080)
Minor cleanup for GNN example
2022-04-21 17:00:31 -07:00
shrekris-anyscale
b51d0aa8b1
[serve] Introduce context.py and client.py (#24067)
Serve stores context state, including the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client`, in `api.py`. However, these data structures are referenced throughout the codebase, causing circular dependencies. This change introduces two new files:

* `context.py`
    * Intended to expose process-wide state to internal Serve code as well as `api.py`
    * Stores the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` global variables
* `client.py`
    * Stores the definition for the Serve `Client` object, now called the `ServeControllerClient`
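
A minimal sketch of what such a `context.py` might hold; the getter/setter helpers are illustrative, not the exact Serve internals:

```python
# Sketch of a process-wide context module that both internal Serve code and
# api.py can import without circular dependencies. Helper names are
# illustrative, not the exact Serve internals.
_INTERNAL_REPLICA_CONTEXT = None  # set inside replica actors
_global_client = None             # set when a client connects to the controller

def get_global_client():
    return _global_client

def set_global_client(client) -> None:
    global _global_client
    _global_client = client

def get_internal_replica_context():
    return _INTERNAL_REPLICA_CONTEXT
```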
2022-04-21 18:35:09 -05:00
Dmitri Gekhtman
8c5fe44542
[KubeRay] Fix autoscaling with GPUs and custom resources, with e2e tests (#23883)
- Closes #23874 by fixing a typo ("num_gpus" -> "num-gpus").
- Adds end-to-end test logic confirming the fix.
- Adds end-to-end test logic confirming autoscaling with custom resources works.
- Slightly refines developer instructions.
- Deflakes test logic a bit by allowing for the event that the head pod changes its identity as the Ray cluster starts up.
2022-04-21 14:54:37 -07:00
xwjiang2010
00595653ed
Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085) 2022-04-21 13:27:34 -07:00
iasoon
c9f0e486ad
[Serve] ensure replica reconfigure runs after allocation check (#24052)
Since remote calls provide no ordering guarantees, it could happen that `reconfigure` gets called before `is_allocated`. Since `reconfigure` then runs the user initialization code, the replica actor could get blocked and never provide its allocation check.
This PR ensures that the allocation proof has been received before we run the replica initialization.
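
A rough sketch of the ordering fix, modeled with an asyncio event (the real replica code differs):

```python
# Sketch: reconfigure (which runs potentially blocking user init code) waits
# until the allocation check has run, so is_allocated can always respond.
import asyncio

class ReplicaSketch:
    def __init__(self):
        self._allocated = asyncio.Event()

    async def is_allocated(self) -> None:
        # Lightweight allocation proof; must never be blocked by user code.
        self._allocated.set()

    async def reconfigure(self, user_config) -> None:
        await self._allocated.wait()  # ensure the allocation check ran first
        self._run_user_init(user_config)  # may block for a long time

    def _run_user_init(self, user_config) -> None:
        pass  # placeholder for user initialization
```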
2022-04-21 15:24:21 -05:00
Jiao
f0071d30fb
[Serve][Deployment Graph] Let .bind return ray DAGNode types and remove exposing DeploymentNode as public (#24065)
See dag layering summary in https://github.com/ray-project/ray/issues/24061

We need to clean up and establish the right ray dag -> serve dag layering, where `.bind()` can be called on a `@serve.deployment` decorated class or func but only returns a raw Ray DAGNode type, executable by ray core; the serve_dag is only available after serve-specific transformations.

Thus this PR removes exposed serve DAGNode type such as DeploymentNode.

It also removes the `class.bind().bind()` syntax that returned a `DeploymentMethodNode` defaulting to `__call__`, to match the same behavior as ray dag building.
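
A small usage sketch of the resulting surface, building on the Serve deployment API named above:

```python
# Sketch: .bind() on a @serve.deployment-decorated class now returns a raw
# Ray DAGNode; serve-specific node types are no longer exposed to users.
from ray import serve

@serve.deployment
class Model:
    def __call__(self, request):
        return "ok"

node = Model.bind()  # a plain Ray DAGNode, executable by Ray core
# Chaining like Model.bind().bind() (implicit __call__) is no longer supported,
# matching how plain Ray DAG building works.
```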
2022-04-21 11:48:48 -07:00
Kai Fricke
238a607f51
[air] Tuner should use run_config from Trainer per default (#24079)
When a `Trainer` is initialized with a run config and then passed into a `Tuner`, the run config is currently silently discarded and a default RunConfig is used. Instead, we should use the run config from the Trainer if it is not overridden.
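
A hypothetical sketch of the intended precedence (not the actual Tuner internals):

```python
# Hypothetical helper showing the precedence described above: an explicit
# Tuner run_config wins, otherwise fall back to the Trainer's, else a default.
def resolve_run_config(tuner_run_config, trainer, default_factory):
    if tuner_run_config is not None:
        return tuner_run_config
    if getattr(trainer, "run_config", None) is not None:
        return trainer.run_config
    return default_factory()  # e.g. a default RunConfig
```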
2022-04-21 19:42:57 +01:00
Ian Rodney
0c16bbd245
[AWS] Abort if AZs & SubnetIds mismatch (#22001)
If a user simultaneously selects AZs to use & specifies Subnets not in those AZs, raise an error!
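
A sketch of the kind of check this adds, assuming boto3-style subnet descriptions (not the actual autoscaler code):

```python
# Sketch: fail fast when the user-specified subnets do not overlap with the
# requested availability zones (boto3-style "AvailabilityZone" field).
def check_az_subnet_consistency(requested_azs, subnets):
    subnet_azs = {s["AvailabilityZone"] for s in subnets}
    if requested_azs and not set(requested_azs) & subnet_azs:
        raise ValueError(
            f"Specified SubnetIds are in AZs {sorted(subnet_azs)}, "
            f"but the config requests AZs {sorted(requested_azs)}."
        )
```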
2022-04-21 11:07:59 -07:00
mwtian
02b0d82cf8
[Ray client] return None from internal KV for non-existent keys (#24058)
This fixes the behavior diff between client and non-client internal KV.
2022-04-21 10:55:57 -07:00
Grzegorz Rypeść
dfb9689701
[RLlib] Issue 21489: Unity3D env lacks group rewards (#24016). 2022-04-21 18:49:52 +02:00
Amog Kamsetty
732175e245
[AIR] Add distributed torch_geometric example (#23580)
Add example for distributed pytorch geometric (graph learning) with Ray AIR

This only showcases distributed training, but with data small enough that it can be loaded in by each training worker individually. Distributed data ingest is out of scope for this PR.

Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-04-21 09:48:43 -07:00
Zyiqin-Miranda
e4a66c0e2e
[doc] Add CloudWatch integration documentation (#22638)
This PR adds documentation for Ray CloudWatch integration.
2022-04-21 09:44:41 -07:00
Avnish Narayan
a337fd994e
Revert revert #23906 [RLlib] DD-PPO training iteration function implementation. (#24035) 2022-04-21 17:37:49 +02:00
Qing Wang
c5252c5ceb
[Java] Support parallel actor in experimental. (#21701)
To provide an alternative option for running multiple actor instances in a Java worker process, with the eventual goal of removing the original multiple-worker-instances-in-one-worker-process implementation, we're proposing support for a parallel actor concept in Java. This feature lets users define homogeneous parallel execution instances in an actor, with each instance holding one thread as its execution backend.

### Introduction

In the following example, we define a parallel actor with a parallelism of 10. The backend actor has 10 concurrency groups for the parallel executions, which also means there are 10 threads backing it.

We can access an instance through the parallel actor handle, like:
```java
ParallelActorHandle<A> actor = ParallelActor.actor(A::new).setParallelism(10).remote();
ParallelInstance<A> instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 1000000).remote()); // print 1000000

instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // print 3000000

instance = actor.getInstance(/*index=*/ 3);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // print 2000000
```


### Limitation
- It doesn't support concurrency groups on a parallel actor yet.

Co-authored-by: Kai Yang <kfstorm@outlook.com>
2022-04-21 22:54:33 +08:00
Kai Fricke
f376dd8902
[tune] Also interrupt training when SIGUSR1 received (#24015)
Ray Tune currently gracefully stops training on SIGINT. However, the Ray core worker prevents SIGINT (and SIGTERM) from being processed by child tasks, which means that Ray Tune runs started in remote tasks (e.g. via the Ray client) cannot be gracefully interrupted.

In k8s-based cloud tests that used the Ray client to kick off a Ray Tune run, this led to test flakiness, as the final experiment state could not be gracefully persisted to cloud storage.

This PR adds support for SIGUSR1 in addition to SIGINT to interrupt training gracefully.
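
A minimal sketch of the signal handling, registering SIGUSR1 only where the platform provides it:

```python
# Sketch: treat SIGUSR1 like SIGINT for graceful shutdown; SIGUSR1 is only
# registered if the platform exposes it (it does not exist on Windows).
import signal

def _request_graceful_stop(signum, frame):
    # In Tune this would stop the trial loop and persist experiment state;
    # here it is just a placeholder.
    print(f"received signal {signum}, stopping gracefully")

signal.signal(signal.SIGINT, _request_graceful_stop)
if hasattr(signal, "SIGUSR1"):
    signal.signal(signal.SIGUSR1, _request_graceful_stop)
```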
2022-04-21 13:07:29 +01:00
Sven Mika
14dd7aac13
[RLlib] Issue 22943: PettingZoo parallel should not use env checking (for now). (#24025) 2022-04-21 11:20:54 +02:00
jon-chuang
ddcc252b51
[Core] Ray logs API (1/n) (#23435)
Expose HTTP endpoint to retrieve logs from ray cluster
2022-04-20 23:11:02 -07:00
Balaji Veeramani
371d1f4533
[Datasets] Make BlockMetadata a dataclass (#23852) 2022-04-20 22:46:25 -07:00
Guyang Song
0e6c042e29
[Bugfix] fix invalid excluding of Black (#24042)
- We should use `--force-exclude` when we pass a code path explicitly: https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html?highlight=--force-exclude#command-line-options
- Recover the files in `python/ray/_private/thirdparty`, which were formatted by mistake in https://github.com/ray-project/ray/pull/21975.
2022-04-21 10:21:35 +08:00
Simon Mo
7b0c77dd38
[Serve] Fix torch_tune_serve_test client test (#24031) 2022-04-20 16:52:27 -07:00
iasoon
22a6fafbb5
[Serve] remove constants shorthands in tests (#24053) 2022-04-20 16:05:19 -07:00
Amog Kamsetty
47243ace7c
[Release] Upgrade instance types for xgboost gpu release tests (#24002)
In xgboost 1.6, support for older GPU architectures was removed (dmlc/xgboost#7767).

This PR updates the instance types used in our xgboost-ray gpu release tests to use Volta GPUs instead of Kepler GPUs so that xgboost-ray can run successfully with xgboost v1.6.

Closes #24048
2022-04-20 15:18:22 -07:00
Edward Oakes
4680de8acd
[serve] Deflake test_replica_startup_status_transitions by awaiting signal actor in constructor (#24044)
`test_cluster: test_replica_startup_status_transitions` is periodically flaky with the replica hanging in `PENDING_ALLOCATION`. This could be because there is no ordering guarantee on async actor calls, so the `reconfigure` method might execute first and block the asyncio loop (due to `ray.get`), not allowing the `is_allocated` call to run.
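
A sketch of the deflaking approach, assuming Ray's `SignalActor` test utility (the exact test code may differ):

```python
# Sketch: the replica blocks in its constructor on a signal actor, so the test
# can deterministically observe the intermediate startup status before
# releasing it with signal.send.remote().
import ray
from ray import serve
from ray._private.test_utils import SignalActor  # test-only helper actor

signal = SignalActor.remote()

@serve.deployment
class BlockedReplica:
    def __init__(self):
        ray.get(signal.wait.remote())  # hold startup until the test says go

    def __call__(self, request):
        return "ready"
```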
2022-04-20 16:58:45 -05:00
Yi Cheng
04611edf5a
[scheduler] Update syncer API and add reconnect feature. (#23929)
This PR focuses on updating syncer-related code and comments from #23660 to reduce the code size.

- Update Snapshot/Update -> CreateSyncMessage/ConsumeSyncMessage
- Make ray syncer test work even when we add more components in the protobuf
- Make ray syncer able to reconnect to a new node.
2022-04-20 14:31:24 -07:00
Simon Mo
b0d7888093
[Serve] Allow cloudpickle serializable objects as init args/kwargs (#24034)
Closes #23503 

We are fixing two issue here:
1. The unified controller API used pickle to pack the init args; we are changing it to cloudpickle for now (this is something I missed during code review).
2. The checkpoint state functionality in the controller uses pickle to prevent Ray-cluster-specific state from being written to the checkpoint and becoming unrecoverable in a fresh new cluster. However, recovering in a new cluster is not good UX anyway, and we should prefer an end-to-end solution like resubmitting via the REST API.


As a corollary, the deployment state manager should not care about deserializing replica config and init args. Rather, it should just pass the protobuf directly to the replica. I can do that either here or as a follow-up.
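
A tiny illustration of the swap in point 1: cloudpickle round-trips objects (such as lambdas) that plain pickle rejects:

```python
# Generic illustration (not Serve code): cloudpickle can pack init args that
# plain pickle cannot, e.g. lambdas or locally defined classes.
import cloudpickle

init_args = (lambda x: x + 1, {"threshold": 0.5})

packed = cloudpickle.dumps(init_args)
fn, cfg = cloudpickle.loads(packed)
assert fn(1) == 2 and cfg["threshold"] == 0.5
# pickle.dumps(init_args) would raise here, since lambdas are not picklable.
```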
2022-04-20 15:51:34 -05:00
Eric Liang
6d8d7398df
[runtime_env] Add the ability to inject a setup hook for customization of runtime_env on init (#24036) 2022-04-20 13:27:37 -07:00
Kai Fricke
6353c805fa
[tune] Clean up base ProgressReporter API (#24010)
`set_start_time()` was not implemented for the progress reporter base class, but it's called in `tune.run()`.

Instead of adding new methods to set runtime arguments, this PR moves to a singular and forward-compatible `setup()` method that defaults to no-op. This way custom reporters can make use of runtime information passed to the reporter, but can choose to ignore it per default.
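
A sketch of the forward-compatible pattern (simplified; not the exact Tune signature):

```python
# Sketch: a single no-op setup() hook on the reporter base class; custom
# reporters pick the runtime info they care about and ignore the rest.
class ProgressReporterSketch:
    def setup(self, start_time=None, total_samples=None, **kwargs) -> None:
        pass  # default: ignore runtime information

class MyReporter(ProgressReporterSketch):
    def setup(self, start_time=None, **kwargs) -> None:
        self._start_time = start_time  # consume only what this reporter needs
```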
2022-04-20 21:00:23 +01:00
Gagandeep Singh
554831fad1
Increase register timeout seconds (#23223) 2022-04-20 12:25:01 -07:00
Chu Xiangyang
6f74040b15
[Job] Fix typo in job sdk docstring (#23940) 2022-04-20 12:30:32 -05:00
Jiao
3b632ad0d8
[Ray DAG][Serve Deployment Graph] Remove double json.dumps in DAGNode (#24026)
Previously we had double-dump behavior that made the JSON serde not human readable or friendly, which matters given that `DAGDriver` takes `dag_node_json` as its first arg and it will appear in YAML.

This PR removes the extra `json.dumps()` in the encoder path and eliminates or simplifies most of the encoder / object_hook code that was not needed in the first place, making everything simpler again.

Sample YAML now for a complex DAG: https://gist.github.com/jiaodong/32991771e9d78c35767eb24ed73f8236

We're pretty close to having a better minimal JSON representation of the whole dag after this. I might include it in this PR or a separate one.
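
For illustration, the difference between the double-dump and single-dump encodings (generic example, not the actual DAGNode encoder):

```python
# Generic illustration: dumping an already-dumped JSON string escapes every
# quote, which is what made the serialized DAG unreadable inside YAML.
import json

node = {"class_name": "Model", "args": [1, 2]}

double = json.dumps(json.dumps(node))  # '"{\\"class_name\\": \\"Model\\", ...'
single = json.dumps(node)              # '{"class_name": "Model", "args": [1, 2]}'

assert json.loads(json.loads(double)) == json.loads(single)
```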
2022-04-20 11:57:35 -05:00
Avnish Narayan
477b9d22d2
[RLlib][Training iteration fn] APEX conversion (#22937) 2022-04-20 17:56:18 +02:00
Jiajun Yao
6cfec51d1e
Spread even if nodes are not available (#23445)
Several changes to make spread scheduling work better under load:

* When nodes are not available, spread among feasible nodes.
* If grant_or_reject is true, don't spill back if the selected node is not available.
* Don't spill due to waiting for dependencies for spread tasks.
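
These are scheduler-internal changes; for context, a small usage sketch of the spread strategy they affect:

```python
# Usage sketch: tasks submitted with the SPREAD scheduling strategy are the
# ones whose placement behavior the changes above improve under load.
import socket
import ray

ray.init()

@ray.remote(scheduling_strategy="SPREAD")
def probe():
    return socket.gethostname()

# These calls are spread across feasible nodes, even when some nodes are
# temporarily unavailable.
hosts = ray.get([probe.remote() for _ in range(8)])
```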
2022-04-20 07:35:15 -07:00
Kai Fricke
261a8a7470
[air] Use custom fsspec handler for GS (#24008)
`gcsfs` complains about an invalid `create_parents` argument when using Google Cloud Storage with cloud checkpoints. Thus we should use an alternative fsspec handler that omits this argument for GS.

The root issue will be fixed here: https://github.com/fsspec/gcsfs/pull/471
2022-04-20 14:51:43 +01:00
Antoni Baum
9364ec39e4
[joblib] Make PoolActor's Ray options configurable (#24009)
Makes it possible to configure joblib/multiprocessing `PoolActor`s' Ray options for greater user control. Also adds some type hints.
2022-04-20 06:38:30 -07:00
xwjiang2010
a34dcfce85
[tune] fix flaky test (#24037) 2022-04-20 10:14:32 +01:00
mwtian
34fb092656
[Pubsub] reduce memory usage for channels that do not require total memory cap (#23985)
In a1e06f64ae, a memory bound was added for each subscribed entity in the publisher. This adds two extra `std::deque`s per subscribed entity, which turns out to cost a lot more memory when there is a large number of `ObjectRef`s: https://github.com/ray-project/ray/pull/23853#issuecomment-1098382286

This PR avoids the extra memory usage for entities in channels unlikely to grow too large, i.e. all channels except those for logs and error info. Subscribed entity memory usage no longer shows up in the memory profile when there are 1M object refs. Raw data: [profile006.pb.gz](https://github.com/ray-project/ray/files/8508547/profile006.pb.gz)
2022-04-19 17:44:15 -07:00
Antoni Baum
2169007290
[AIR] SklearnTrainer&Predictor implementation (#23850)
Implements `SklearnTrainer` and `SklearnPredictor`. Full parallelism with joblib + support for GPU-enabled estimators like cuML.

Interface has been modified slightly by addition of several arguments, which were required for full functionality.

I haven't tested cuML yet, will do it later.

Depends on https://github.com/ray-project/ray/pull/23889

Co-authored-by: Kai Fricke <kai@anyscale.com>
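
A hedged usage sketch, assuming an interface roughly along these lines (the import path and argument names are assumptions, not the confirmed API):

```python
# Hypothetical usage sketch of SklearnTrainer; the import path and exact
# constructor arguments are assumptions based on the description above.
from sklearn.ensemble import RandomForestClassifier
import ray
from ray.ml.train.integrations.sklearn import SklearnTrainer  # path is an assumption

train_ds = ray.data.from_items([{"x": i, "y": i % 2} for i in range(100)])

trainer = SklearnTrainer(
    estimator=RandomForestClassifier(),
    label_column="y",
    datasets={"train": train_ds},
)
result = trainer.fit()
```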
2022-04-19 16:45:17 -07:00
Avnish Narayan
0ddbce6518
Revert "[RLlib] DD-PPO training iteration fn (#23906)" (#24030)
The DDPPO LR scheduler test is broken because the learner info dictionary returned by the training iteration function does not consistently contain learner info for every training iteration, but the test expects that it does.

We'll need to fix the test and then re-merge.

Reverts #23906
2022-04-19 16:43:57 -07:00
Jiao
5ba29f040f
[Serve] Clean up deployment suffixes between pipeline build() calls (#23984) 2022-04-19 15:59:42 -07:00
mwtian
3af7fb6490
[Ray client] use SimpleQueue on Python 3.7 and newer in async dataclient (#23995) 2022-04-19 13:30:56 -07:00