Follow-up from #23908
Instead of manually deleting checkpoint paths after calling `to_directory()`, we should utilize `Checkpoint.as_directory()` when possible.
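A minimal sketch of the difference, assuming the AIR `Checkpoint` API (the import path varies across Ray versions):
```python
import shutil

from ray.air import Checkpoint  # ray.ml.checkpoint in older releases

checkpoint = Checkpoint.from_dict({"model": "weights"})

# Before: to_directory() materializes the checkpoint and leaves cleanup to us.
path = checkpoint.to_directory()
try:
    ...  # read files from path
finally:
    shutil.rmtree(path)

# After: as_directory() is a context manager that cleans up any temporary
# directory it created once the block exits.
with checkpoint.as_directory() as path:
    ...  # read files from path
```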
The total execution time for multi-stage operations is logged twice in the dataset stats, which is [confusing to users](https://github.com/ray-project/ray/issues/23915), making it seem as if each stage in the operation took the same amount of time. This PR modifies the stats output for multi-stage operations so that the total execution time is printed once as a top-level op stats line, with the stats for each (sub)stage indented and not repeating the total execution time.
This also opens the door for other op-level stats (e.g. peak memory utilization) and per-substage stats (e.g. total substage execution time).
This PR refactors ExecutionPlan to maintain complete stage lineage, even for eagerly computed datasets, while ensuring that block references are unlinked as early as possible in order to more eagerly release block memory. This PR is the final precursor to adding the actual out-of-band serialization APIs (PR 3/3).
The full lineage has to be maintained, even for eagerly computed datasets, since the lineage is needed for out-of-band serialization of datasets.
Adds a content-type-agnostic partition parser with support for filtering files. Also includes corner-case bug fixes and usability improvements for more robust handling of input path types.
We had unreported merge conflicts with DDPPO. This PR closes and combines #24092, #24035, #24030, and #23096.
Co-authored-by: sven1977 <svenmika1977@gmail.com>
* Revert "Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085)"
This reverts commit 00595653ed.
Failure in windows has been addressed by conditionally registering the signal handler if available.
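A minimal sketch of the conditional registration (the handler itself is illustrative):
```python
import signal


def _graceful_stop_handler(signum, frame):
    # Illustrative: Tune's actual handler requests a graceful stop of the run.
    raise KeyboardInterrupt


# SIGUSR1 does not exist on Windows, so only register it when available.
if hasattr(signal, "SIGUSR1"):
    signal.signal(signal.SIGUSR1, _graceful_stop_handler)
```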
Creates a zip of the session_latest directory, named with the test name and a timestamp, upon Python test failure. Writes it to the directory specified by the env var `RAY_TEST_FAILURE_LOGS_DIR`. No-op if the env var is not set.
A downstream consumer (e.g. CI) can upload all created artifacts in this directory. Thereby, PR submitters can more easily debug their CI failures, especially if they can't repro locally.
Limitations:
- A conftest.py file importing the main Ray conftest.py needs to be present in the same directory as the test. This presents a challenge for, e.g., dashboard tests, which are highly scattered.
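For illustration, a minimal sketch of such a conftest hook (the hook body and session directory path are simplified, not the exact implementation):
```python
# conftest.py
import os
import shutil
import time

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    archive_dir = os.environ.get("RAY_TEST_FAILURE_LOGS_DIR")
    # No-op unless the env var is set and the test body actually failed.
    if archive_dir and report.when == "call" and report.failed:
        archive_name = os.path.join(archive_dir, f"{item.name}_{int(time.time())}")
        shutil.make_archive(archive_name, "zip", "/tmp/ray/session_latest")
```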
This PR implements ray list tasks and ray list objects APIs.
NOTE: You can ignore the merge conflict for now. It is because the first PR was reverted. There's a fix PR open now.
Serve stores context state, including the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` in `api.py`. However, these data structures are referenced throughout the codebase, causing circular dependencies. This change introduces two new files:
* `context.py`
* Intended to expose process-wide state to internal Serve code as well as `api.py`
* Stores the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` global variables
* `client.py`
* Stores the definition for the Serve `Client` object, now called the `ServeControllerClient`
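Roughly, the split looks like this (the accessor is illustrative, not the exact helper):
```python
# context.py: process-wide state, importable by internal Serve code and api.py
# without circular dependencies.
_INTERNAL_REPLICA_CONTEXT = None
_global_client = None


def _get_global_client():  # illustrative accessor name
    return _global_client


# client.py: the Serve client definition, now named ServeControllerClient.
class ServeControllerClient:
    ...
```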
- Closes #23874 by fixing a typo ("num_gpus" -> "num-gpus").
- Adds end-to-end test logic confirming the fix.
- Adds end-to-end test logic confirming autoscaling with custom resources works.
- Slightly refines developer instructions.
- Deflakes test logic a bit by allowing for the event that the head pod changes its identity as the Ray cluster starts up.
Since remote calls provide no ordering guarantees, it can happen that `reconfigure` gets called before `is_allocated`. Because `reconfigure` then runs the user initialization code, the replica actor could get blocked and never respond to its allocation check.
This PR ensures that the allocation proof has been received before we run the replica initialization.
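A self-contained sketch of the ordering fix (names are illustrative, not the actual replica code):
```python
import asyncio


class ReplicaSketch:
    def __init__(self):
        self._allocated = asyncio.Event()

    async def is_allocated(self):
        # Called by the controller; acts as the allocation proof.
        self._allocated.set()

    async def reconfigure(self, user_config):
        # Even if this call arrives first, wait for the allocation proof
        # before running potentially blocking user initialization code.
        await self._allocated.wait()
        ...  # run the user's __init__ / reconfigure logic
```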
See dag layering summary in https://github.com/ray-project/ray/issues/24061
We need to clean up and establish the right Ray DAG -> Serve DAG layering, where `.bind()` can be called on a `@serve.deployment`-decorated class or function but only returns a raw Ray DAGNode type, executable by Ray Core; the serve_dag is only available after Serve-specific transformations.
Thus this PR removes exposed Serve DAGNode types such as DeploymentNode.
It also removes the `class.bind().bind()` syntax, which returned a `DeploymentMethodNode` defaulting to `__call__`, to match the behavior of Ray DAG building.
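A minimal sketch of the intended user-facing behavior (illustrative):
```python
from ray import serve


@serve.deployment
class Model:
    def __call__(self, inp):
        return inp


# .bind() on a @serve.deployment-decorated class returns a plain Ray DAGNode,
# executable by Ray Core; the serve_dag (and any Serve-specific node types)
# only exists after Serve's own transformations.
dag = Model.bind()
```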
When a `Trainer` is initialized with a run config and then passed into a `Tuner`, the run config is currently silently discarded and a default RunConfig is used. Instead, we should use the trainer's run config unless it is explicitly overridden.
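A tiny self-contained illustration of the intended precedence (not the actual `Tuner` internals):
```python
def resolve_run_config(trainer_run_config=None, tuner_run_config=None, default=None):
    if tuner_run_config is not None:
        return tuner_run_config      # an explicit run config on the Tuner wins
    if trainer_run_config is not None:
        return trainer_run_config    # otherwise keep the Trainer's run config
    return default                   # only fall back to a default if neither is set
```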
Add example for distributed pytorch geometric (graph learning) with Ray AIR
This only showcases distributed training, but with data small enough that it can be loaded in by each training worker individually. Distributed data ingest is out of scope for this PR.
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
To provide an alternative option for running multiple actor instances in a Java worker process, with the eventual goal of removing the original multiple-worker-instances-in-one-worker-process implementation, we're proposing support for a parallel actor concept in Java. This feature enables users to define homogeneous parallel execution instances within an actor, where each instance holds one thread as its execution backend.
### Introduction
In the following example, we define a parallel actor with a parallelism of 10. The backend actor has 10 concurrency groups for the parallel executions, which also means there are 10 threads backing it.
We can access an instance through its instance handle, like:
```java
ParallelActorHandle<A> actor = ParallelActor.actor(A::new).setParallelism(10).remote();
ParallelInstance<A> instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 1000000).remote()); // print 1000000
instance = actor.getInstance(/*index=*/ 2);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // print 3000000
instance = actor.getInstance(/*index=*/ 3);
Preconditions.checkNotNull(instance);
Ray.get(instance.task(A::incr, 2000000).remote()); // print 2000000
```
### Limitation
- It doesn't support concurrency groups on a parallel actor yet.
Co-authored-by: Kai Yang <kfstorm@outlook.com>
Ray Tune currently gracefully stops training on SIGINT. However, the Ray core worker prevents SIGINT (and SIGTERM) from being processed by child tasks, which means that Ray Tune runs started in remote tasks (e.g. via the Ray client) cannot be gracefully interrupted.
In k8s-based cloud tests that used the Ray client to kick off a Ray Tune run, this led to test flakiness, as the final experiment state could not be gracefully persisted to cloud storage.
This PR adds support for SIGUSR1 in addition to SIGINT to interrupt training gracefully.
In xgboost 1.6, support for older GPU architectures was removed (dmlc/xgboost#7767).
This PR updates the instance types used in our xgboost-ray gpu release tests to use Volta GPUs instead of Kepler GPUs so that xgboost-ray can run successfully with xgboost v1.6.
Closes #24048
`test_cluster: test_replica_startup_status_transitions` is periodically flaky with the replica hanging in `PENDING_ALLOCATION`. This could be because there is no ordering guarantee on async actor calls, so the `reconfigure` method might execute first and block the asyncio loop (due to `ray.get`), not allowing the `is_allocated` call to run.
This PR focuses on updating syncer-related code and comments from #23660 to reduce the code size.
- Rename Snapshot/Update to CreateSyncMessage/ConsumeSyncMessage.
- Make the Ray syncer test work even when more components are added to the protobuf.
- Make the Ray syncer able to reconnect to a new node.
Closes #23503
We are fixing two issues here:
1. The unified controller API used pickle to pack the init args; we are changing it to cloudpickle for now (this is something I missed during code review).
2. The checkpoint state functionality in the controller uses pickle to prevent Ray-cluster-specific state from being written to the checkpoint and becoming unrecoverable in a fresh new cluster. However, recovering in a new cluster is not good UX, and we should prefer an end-to-end solution like resubmitting via the REST API.
As a corollary, the deployment state manager should not care about deserializing the replica config and init args. Rather, it should just pass the protobuf directly to the replica. I can do that either here or as a follow-up.
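A minimal sketch of the serialization change (illustrative, not the exact controller code):
```python
from ray import cloudpickle


def pack_init_args(init_args):
    # cloudpickle handles user-defined classes, closures, and lambdas that the
    # stdlib pickle cannot serialize.
    return cloudpickle.dumps(init_args)


def unpack_init_args(serialized):
    return cloudpickle.loads(serialized)
```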
`set_start_time()` was not implemented for the progress reporter base class, but it's called in `tune.run()`.
Instead of adding new methods to set runtime arguments, this PR moves to a single, forward-compatible `setup()` method that defaults to a no-op. This way, custom reporters can make use of runtime information passed to the reporter, but can choose to ignore it by default.
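A hedged sketch of how a custom reporter could use the new hook (the exact `setup()` keyword arguments are illustrative):
```python
from ray.tune.progress_reporter import ProgressReporter


class MyReporter(ProgressReporter):
    def setup(self, start_time=None, **kwargs):
        # Runtime info arrives here instead of via dedicated setters such as
        # set_start_time(); unused arguments can simply be ignored.
        self._run_started_at = start_time

    def should_report(self, trials, done=False):
        return True

    def report(self, trials, done, *sys_info):
        print(f"{len(trials)} trials (run started at {self._run_started_at})")
```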
Previously we had double-dump behavior that made the JSON serde not human-readable or friendly, but readability is required given that `DAGDriver` takes `dag_node_json` as its first arg and it will appear in the YAML.
This PR removes the extra `json.dumps()` in the encoder path and eliminates or simplifies most encoder / object_hook logic that was not needed in the first place, making everything simpler again.
Sample YAML now for a complex DAG: https://gist.github.com/jiaodong/32991771e9d78c35767eb24ed73f8236
We're pretty close to having a better minimal JSON representation of the whole DAG after this. I might include it in this PR or in a separate one.
Several changes to make spread scheduling work better under load:
* When nodes are not available, spread among feasible nodes.
* If grant_or_reject is true, don't spill back if the selected node is not available.
* Don't spill due to waiting for dependencies for spread tasks.
`gcsfs` complains about an invalid `create_parents` argument when using Google Cloud Storage with cloud checkpoints. Thus, we should use an alternative fsspec handler that omits this argument for GCS.
The root issue will be fixed here: https://github.com/fsspec/gcsfs/pull/471
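A hedged sketch of the workaround idea (class and call details are illustrative, not the exact handler used in Tune):
```python
import fsspec
import pyarrow.fs


class GSHandlerWithoutCreateParents(pyarrow.fs.FSSpecHandler):
    """Illustrative handler that never forwards create_parents to gcsfs."""

    def create_dir(self, path, recursive):
        try:
            # gcsfs' mkdir() rejects the create_parents keyword, so call it bare.
            self.fs.mkdir(path)
        except FileExistsError:
            pass


gcs_fs = pyarrow.fs.PyFileSystem(GSHandlerWithoutCreateParents(fsspec.filesystem("gs")))
```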