Commit graph

12208 commits

Author SHA1 Message Date
Chen Shen
cb8d216e62
[Doc][Ray collectives] fix example in the doc. #24162
The example is broken; this PR fixes it.
2022-04-25 11:20:51 -07:00
Brett Göhre
9e0a59d94a
[docs] search algorithm notebook examples (#23924)
Co-authored-by: brettskymind <brett@pathmind.com>
Co-authored-by: Max Pumperla <max.pumperla@googlemail.com>
2022-04-25 11:10:58 -07:00
Jiao
55b1d857ab
[Serve] Fix deployment func no args called with python (#24096)
Our current behavior is to drop all args/kwargs for both HTTP and Python if the user's deployment function doesn't take any input. But in the meantime, we didn't throw anything if the user tried to invoke the function in Python with actual args.

This PR adds that error back, and adds a bit of special handling for the HTTP case, with in-line comments.
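A minimal sketch of the fixed behavior, assuming the Serve 1.x deployment API of this period (deployment name hypothetical):

```python
import ray
from ray import serve

serve.start()

@serve.deployment
def no_input_func():
    return "hello"

no_input_func.deploy()
handle = no_input_func.get_handle()

ray.get(handle.remote())    # OK: args/kwargs are dropped for a no-input function
ray.get(handle.remote(42))  # after this PR: raises, since the function takes no input
```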
2022-04-25 11:15:44 -05:00
Jiao
2124087828
[Serve][Deployment Graph] Add test for ray core and serve dag class method call chain (#24115)
Now, given that we directly return `ClassMethodNode` from `deployment_cls.bind()`, add a test to ensure the chain of ClassMethod calls is consistent across the ray dag and serve dag.

Note this only works on a single replica: if the class method mutates replica state and there are multiple replicas running, replica states / results won't be consistent if requests are routed to different ones.
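A hedged sketch of the kind of call chain being tested, using the Ray DAG `.bind()` API (class and values hypothetical):

```python
import ray

@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

# Counter.bind() yields a ClassNode; each method .bind() yields a ClassMethodNode.
counter = Counter.bind()
dag = counter.add.bind(1)
print(ray.get(dag.execute()))  # the same chain should behave identically as a serve dag
```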
2022-04-25 11:15:06 -05:00
Stephanie Wang
1de9f3457e
[nightly tests] Mark Datasets shuffle tests stable (#24175)
dataset_shuffle_random_shuffle_1tb was previously failing due to OOM but has now passed on the last 4 runs due to changing the node type. These tests should be stable now, although we will want to look into the OOM issue later.
2022-04-25 09:01:37 -07:00
Clark Zinzow
a539a01145
[Datasets] Add support for write task remote options. (#24160)
Users may want to provide Ray task option overrides for write tasks, e.g. having write tasks retried on application-level exceptions (`retry_exceptions=True`) or changing the default number of retries (`max_retries=8`). This commit adds support for providing such task options for write tasks.
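A short sketch of the intended usage (the `ray_remote_args` kwarg name is an assumption here):

```python
import ray

ds = ray.data.range(100)

# Pass Ray task options through to the write tasks (kwarg name assumed).
ds.write_parquet(
    "/tmp/ds_out",
    ray_remote_args={"retry_exceptions": True, "max_retries": 8},
)
```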
2022-04-25 07:52:53 -07:00
Xuehai Pan
6087eda91b
[RLlib] Issue 21991: Fix SampleBatch slicing for SampleBatch.INFOS in RNN cases (#22050) 2022-04-25 11:40:24 +02:00
Noon van der Silk
3589c21924
[RLlib] Fix some missing f-strings and a f-string related bug in tf eager policy. (#24148) 2022-04-25 11:25:28 +02:00
Fabian Witter
56bc90ca72
[RLlib] Remove Unnecessary List Conversion of Complex Observations in SAC Models (torch and tf). (#24106) 2022-04-25 11:21:34 +02:00
Jeroen Bédorf
1263015931
[RLlib] Add support for writing env 'info' dicts to output datasets for TFPolicies (for TorchPolicies, these are part of the view-requirements by default and thus written either way). (#24041) 2022-04-25 11:17:50 +02:00
ZhuSenlin
edf058d4f7
improve exponential backoff when connecting to the redis (#24150) 2022-04-25 16:10:24 +08:00
Artur Niederfahrenhorst
306853b5b8
[RLlib] Issue 22693: RNN-SAC fixes. (#23814) 2022-04-25 09:19:24 +02:00
Ben Kasper
531fdd50d4
[RLlib] Add 2 missing callbacks to MultiCallbacks class (on_trainer_init and on_sub_environment_created) (#24153) 2022-04-25 09:18:03 +02:00
Qing Wang
a7a6465936
[Ray Collective] Fix the incorrect Redis password issue. (#24111)
This PR fixes the issue that we are not able to use GLOO as the collective lib for a Ray cluster that has a Redis password set.
2022-04-24 16:23:41 +08:00
Yi Cheng
f1a1f97992
Revert "[grpc] Upgrade grpc to 1.45.2 (#24064)" (#24145)
This reverts commit 3c0a3f4cc1.
2022-04-23 23:47:11 -07:00
Siyuan (Ryans) Zhuang
e507780c3b
[serve] Remove unnecessary code (#24131)
* cleanup
2022-04-23 23:29:59 -07:00
ZhuSenlin
0196694629
Fix the failure of sort_main in the case of num_cpus > 1 and not an integer (#24099)
A `ValueError("Resource quantities >1 must be whole numbers.")` exception is raised if `num_cpus` is greater than 1 and not an integer.
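A minimal sketch of the constraint that triggered the failure (task body hypothetical):

```python
import ray

# Fractional resource quantities are only allowed when <= 1, so a value such as
# num_cpus=1.5 triggers:
#   ValueError: Resource quantities >1 must be whole numbers.
@ray.remote(num_cpus=1.5)
def sort_task():
    pass
```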

Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
2022-04-24 11:54:08 +08:00
Travis Addair
c64afc672e
[train] Copy resources_per_worker to avoid modifying user input 2022-04-23 15:01:35 -07:00
Kai Fricke
d161831f0e
[RLlib; testing] Deactivate flaky alpha star learning test (#24138) 2022-04-23 17:45:58 +02:00
Kai Fricke
03601007c9
[air] Use checkpoint.as_directory() instead of cleaning up manually (#24113)
Follow-up from #23908

Instead of manually deleting checkpoint paths after calling `to_directory()`, we should utilize `Checkpoint.as_directory()` when possible.
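A minimal before/after sketch (the `Checkpoint` import path is an assumption; around this time the class lived in the pre-rename `ray.ml` package):

```python
import shutil
from ray.ml.checkpoint import Checkpoint  # import path assumed for this period

ckpt = Checkpoint.from_directory("/tmp/my_ckpt")

# Before: materialize the checkpoint and clean up manually.
path = ckpt.to_directory()
shutil.rmtree(path)

# After: the context manager cleans up any temporary copy on exit.
with ckpt.as_directory() as path:
    pass  # read checkpoint files under `path`
```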
2022-04-23 14:52:30 +01:00
Yi Cheng
3c0a3f4cc1
[grpc] Upgrade grpc to 1.45.2 (#24064)
Upgrade grpc to the newest version to use grpc's internal implementation of retry.
2022-04-22 19:15:15 -07:00
Clark Zinzow
ea791ab0a0
[Datasets] Print hierarchical stats for multi-stage operations. (#24119)
The total execution time for multi-stage operations being logged twice in the dataset stats is [confusing to users](https://github.com/ray-project/ray/issues/23915), making it seem like each stage in the operation took the same amount of time. This PR modifies the stats output for multi-stage operations such that the total execution time is printed once, as a top-level op stats line, with the stats for each of the (sub)stages indented and without repeating the total execution time.

This also opens the door for other op-level stats (e.g. peak memory utilization) and per-substage stats (e.g. total substage execution time).
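For reference, a minimal way to view these stats (a shuffle is a multi-stage map/reduce op):

```python
import ray

# The stats output now prints one top-level execution time for the shuffle,
# with the per-substage stats indented underneath it.
ds = ray.data.range(1000).random_shuffle()
print(ds.stats())
```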
2022-04-22 16:33:11 -07:00
Clark Zinzow
9ee24530ab
[Datasets] [Out-of-Band Serialization: 2/3] Refactor ExecutionPlan to maintain complete lineage and eagerly unlink block references. (#23931)
This PR refactors ExecutionPlan to maintain complete stage lineage, even for eagerly computed datasets, while ensuring that block references are unlinked as early as possible in order to more eagerly release block memory. This PR is the final precursor to adding the actual out-of-band serialization APIs (PR 3/3).

The full lineage has to be maintained, even for eagerly computed datasets, since the lineage is needed for out-of-band serialization of datasets.
2022-04-22 16:07:24 -07:00
SangBin Cho
73ed67e9e6
[State API] State api limit + Removing unnecessary modules (#24098)
This PR does the following:

* Move all routes into the same module, state_head.py
* Support a limit feature.
2022-04-22 15:59:46 -07:00
Patrick Ames
9f4cb9b3c9
[Datasets] Add Path Partitioning Support for All Content Types (#23624)
Adds a content-type-agnostic partition parser with support for filtering files. Also adds some corner-case bug fixes and usability improvements for supporting more robust input path types.
2022-04-22 15:48:31 -07:00
Avnish Narayan
6e68b6bef9
[RLlib] DD-PPO training iteration fn. (#24118)
We had unreported merge conflicts with DDPPO. This PR closes and combines #24092, #24035, #24030 and #23096

Co-authored-by: sven1977 <svenmika1977@gmail.com>
2022-04-22 15:22:14 -07:00
xwjiang2010
d7da0d706e
[rllib] Only conditionally import JaxCategorical in catalog.py (#24086)
* Experiment with fewer imports in catalog.py (see the sketch below)

* lint
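A hedged sketch of the conditional-import pattern (module path assumed):

```python
def _jax_categorical_cls():
    # Import JAX-dependent code lazily so that merely importing catalog.py
    # does not require JAX to be installed.
    from ray.rllib.models.jax.jax_action_dist import JAXCategorical  # path assumed
    return JAXCategorical
```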
2022-04-22 14:51:35 -07:00
Chen Shen
1d981e0cf1
[doc] fix /cluster/config.html #23720
closes #23560
2022-04-22 10:13:12 -07:00
Avnish Narayan
3bf907bcf8
[RLlib] Don't modify environments via the env checker utilities. (#24083) 2022-04-22 18:39:47 +02:00
Kai Fricke
bb341eb1e4
Revert "Revert "[tune] Also interrupt training when SIGUSR1 received"" (#24101)
* Revert "Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085)"

This reverts commit 00595653ed.

The failure on Windows has been addressed by conditionally registering the signal handler if it is available.
2022-04-22 11:27:38 +01:00
Kai Fricke
0e2dd40451
[tune] reuse_actors per default for function trainables (#24040)
Function trainables don't carry state, so they should be reused by default for performance optimization.
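A small sketch of what the new default amounts to for function trainables:

```python
from ray import tune

def trainable(config):
    tune.report(score=config["x"] ** 2)

# reuse_actors now defaults to True for function trainables; this explicit
# form is equivalent.
tune.run(
    trainable,
    config={"x": tune.grid_search([1, 2, 3])},
    reuse_actors=True,
)
```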
2022-04-22 10:53:54 +01:00
Kai Fricke
9f7170e444
Revert "Revert revert #23906 [RLlib] DD-PPO training iteration function implementation. (#24035)" (#24103)
This reverts commit a337fd994e.
2022-04-22 09:58:58 +01:00
jon-chuang
e6a458a31e
[CI] Create zip of ray session_latest/logs dir on test failure and upload to buildkite via /artifact-mount (#23783)
Creates a zip of the session_latest dir, with the test name and timestamp, upon Python test failure. Writes to the dir specified by the env var `RAY_TEST_FAILURE_LOGS_DIR`. No-op if the env var does not exist.

Downstream consumer (e.g. CI) can upload all created artifacts in this dir. Thereby, PR submitters can more easily debug their CI failures, especially if they can't repro locally.

Limitations:
- a conftest.py file importing the main ray conftest.py needs to be present in the same dir as the test. This presents a challenge for e.g. dashboard tests, which are highly scattered (see the sketch below)
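A sketch of the conftest.py shim this limitation refers to (import path assumed from Ray's test tree):

```python
# conftest.py, placed in the same directory as the test file.
from ray.tests.conftest import *  # noqa: F401,F403  (path assumed)
```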
2022-04-22 09:48:53 +01:00
Chong-Li
1807cff9b6
Replace the legacy ResourceSet & SchedulingResources at Raylet (#23173) 2022-04-22 14:46:38 +08:00
SangBin Cho
30ab5458a7
[State Observability] Tasks and Objects API (#23912)
This PR implements the `ray list tasks` and `ray list objects` APIs.

NOTE: You can ignore the merge conflict for now; it exists because the first PR was reverted. There's a fix PR open now.
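A hedged sketch of the new APIs from Python (the module path is an assumption; the CLI equivalents are `ray list tasks` and `ray list objects`):

```python
import ray
from ray.experimental.state.api import list_tasks, list_objects  # path assumed

ray.init()
print(list_tasks())    # task states in the cluster, subject to the limit feature
print(list_objects())  # same for object references
```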
2022-04-21 18:45:03 -07:00
Amog Kamsetty
f500997a65
[AIR] GNN example cleanup (#24080)
Minor cleanup for GNN example
2022-04-21 17:00:31 -07:00
shrekris-anyscale
b51d0aa8b1
[serve] Introduce context.py and client.py (#24067)
Serve stores context state, including the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client`, in `api.py`. However, these data structures are referenced throughout the codebase, causing circular dependencies. This change introduces two new files:

* `context.py`
    * Intended to expose process-wide state to internal Serve code as well as `api.py`
    * Stores the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` global variables
* `client.py`
    * Stores the definition for the Serve `Client` object, now called the `ServeControllerClient`
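A hedged sketch of the new layering (only the two global names above are from the commit; the accessor is hypothetical):

```python
# context.py (sketch): process-wide state importable without circular deps.
_INTERNAL_REPLICA_CONTEXT = None
_global_client = None

def get_global_client():
    # Hypothetical accessor: internal Serve code and api.py read shared state
    # from here instead of importing it out of api.py.
    return _global_client
```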
2022-04-21 18:35:09 -05:00
Dmitri Gekhtman
8c5fe44542
[KubeRay] Fix autoscaling with GPUs and custom resources, with e2e tests (#23883)
- Closes #23874 by fixing a typo ("num_gpus" -> "num-gpus").
- Adds end-to-end test logic confirming the fix.
- Adds end-to-end test logic confirming autoscaling with custom resources works.
- Slightly refines developer instructions.
- Deflakes test logic a bit by allowing for the event that the head pod changes its identity as the Ray cluster starts up.
2022-04-21 14:54:37 -07:00
xwjiang2010
00595653ed
Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085) 2022-04-21 13:27:34 -07:00
iasoon
c9f0e486ad
[Serve] ensure replica reconfigure runs after allocation check (#24052)
Since remote calls provide no ordering guarantees, it could happen that `reconfigure` gets called before `is_allocated`. Since `reconfigure` then runs the user initialization code, the replica actor could get blocked and never provide its allocation check.
This PR ensures that the allocation proof has been received before we run the replica initialization.
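A hedged sketch of the ordering fix (names hypothetical):

```python
import asyncio

class ReplicaActor:
    def __init__(self):
        self._allocated = asyncio.Event()

    async def is_allocated(self):
        # The controller awaits this cheap call as proof of allocation.
        self._allocated.set()

    async def reconfigure(self, user_config):
        # Never run (possibly blocking) user init code before the allocation
        # proof has been provided, regardless of remote-call arrival order.
        await self._allocated.wait()
        self._run_user_init(user_config)  # hypothetical helper
```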
2022-04-21 15:24:21 -05:00
Jiao
f0071d30fb
[Serve][Deployment Graph] Let .bind return ray DAGNode types and remove exposing DeploymentNode as public (#24065)
See dag layering summary in https://github.com/ray-project/ray/issues/24061

We need to clean up and establish the right ray dag -> serve dag layering, where `.bind()` can be called on a `@serve.deployment` decorated class or func but only returns a raw Ray DAGNode type, executable by ray core; the serve_dag is only available after serve-specific transformations.

Thus this PR removes exposed serve DAGNode types such as DeploymentNode.

It also removes the `class.bind().bind()` syntax, which returned a `DeploymentMethodNode` defaulting to `__call__`, to match the same behavior in ray dag building.
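A small sketch of the resulting user-facing contract:

```python
from ray import serve

@serve.deployment
class Model:
    def __call__(self, inp):
        return inp

# .bind() on a decorated class returns a raw Ray DAGNode type; Serve-specific
# node types only appear after Serve's own transformations.
node = Model.bind()
```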
2022-04-21 11:48:48 -07:00
Kai Fricke
238a607f51
[air] Tuner should use run_config from Trainer per default (#24079)
When a `Trainer` is initialized with a run config and then passed into a `Tuner`, the run config is currently silently discarded and a default RunConfig is used. Instead, we should use the run config from the trainer if it is not overridden.
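A hedged sketch of the intended precedence (import paths and the trainer signature are assumptions for the AIR alpha of this period):

```python
from ray.ml.config import RunConfig                        # path assumed
from ray.ml.train.integrations.torch import TorchTrainer   # path assumed
from ray.tune.tuner import Tuner                           # path assumed

trainer = TorchTrainer(
    train_loop_per_worker=lambda config: None,
    scaling_config={"num_workers": 2},
    run_config=RunConfig(name="my_exp"),
)

Tuner(trainer)  # now inherits the trainer's run_config instead of a default one
```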
2022-04-21 19:42:57 +01:00
Ian Rodney
0c16bbd245
[AWS] Abort if AZs & SubnetIds mismatch (#22001)
If a user simultaneously selects AZs to use & specifies Subnets not in those AZs, raise an error!
2022-04-21 11:07:59 -07:00
mwtian
02b0d82cf8
[Ray client] return None from internal KV for non-existent keys (#24058)
This fixes the behavior diff between client and non-client internal KV.
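A hedged sketch of the now-consistent behavior (this internal-KV helper is a private API; the path is an assumption):

```python
import ray
from ray.experimental.internal_kv import _internal_kv_get  # path assumed

ray.init()  # with or without the Ray client, behavior now matches
assert _internal_kv_get(b"no-such-key") is None
```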
2022-04-21 10:55:57 -07:00
Grzegorz Rypeść
dfb9689701
[RLlib] Issue 21489: Unity3D env lacks group rewards (#24016). 2022-04-21 18:49:52 +02:00
Amog Kamsetty
732175e245
[AIR] Add distributed torch_geometric example (#23580)
Add example for distributed pytorch geometric (graph learning) with Ray AIR

This only showcases distributed training, but with data small enough that it can be loaded in by each training worker individually. Distributed data ingest is out of scope for this PR.

Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-04-21 09:48:43 -07:00
Zyiqin-Miranda
e4a66c0e2e
[doc] Add CloudWatch integration documentation (#22638)
This PR adds documentation for Ray CloudWatch integration.
2022-04-21 09:44:41 -07:00
Avnish Narayan
a337fd994e
Revert revert #23906 [RLlib] DD-PPO training iteration function implementation. (#24035) 2022-04-21 17:37:49 +02:00
Qing Wang
c5252c5ceb
[Java] Support parallel actor in experimental. (#21701)
To provide an alternative option for running multiple actor instances in a Java worker process (the eventual goal is to remove the original multiple-worker-instances-in-one-worker-process implementation), we're proposing support for a parallel actor concept in Java. This feature enables users to define homogeneous parallel execution instances in an actor, with each instance holding one thread as its execution backend.

### Introduction

For the following example, we define a parallel actor with a parallelism of 10. The backing actor has 10 concurrency groups for the parallel executions, which also means there are 10 threads for it.

We can access the instance by the instance handle, like:
```java
ParallelActorHandle<A> actor = ParallelActor.actor(A::new).setParallelism(10).remote();
ParallelInstance<A> instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 1000000).remote()); // prints 1000000

instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // prints 3000000

instance = actor.getInstance(/*index=*/ 3);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // prints 2000000
```


### Limitation
- It doesn't support concurrency groups on a parallel actor yet.

Co-authored-by: Kai Yang <kfstorm@outlook.com>
2022-04-21 22:54:33 +08:00
Kai Fricke
f376dd8902
[tune] Also interrupt training when SIGUSR1 received (#24015)
Ray Tune currently gracefully stops training on SIGINT. However, the Ray core worker prevents SIGINT (and SIGTERM) from being processed by child tasks, which means that Ray Tune runs started in remote tasks (e.g. via the Ray client) cannot be gracefully interrupted.

In k8s-based cloud tests that used the Ray client to kick off a Ray Tune run, this led to test flakiness, as the final experiment state could not be gracefully persisted to cloud storage.

This PR adds support for SIGUSR1 in addition to SIGINT to interrupt training gracefully.
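A minimal sketch of the registration logic (the handler body is hypothetical):

```python
import signal

def _interrupt_handler(signum, frame):
    # Hypothetical: route into the same graceful-stop path Tune uses for SIGINT.
    raise KeyboardInterrupt

signal.signal(signal.SIGINT, _interrupt_handler)
if hasattr(signal, "SIGUSR1"):  # conditionally register; absent on Windows
    signal.signal(signal.SIGUSR1, _interrupt_handler)
```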
2022-04-21 13:07:29 +01:00