Referencing the DatasetPipeline class currently requires ray.data.dataset_pipeline.DatasetPipeline; we should expose it directly in the ray.data module, as we do for Dataset.
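With this change, the import can mirror `Dataset`:

```python
# Before: the full module path is required.
from ray.data.dataset_pipeline import DatasetPipeline

# After: exposed directly in ray.data, mirroring Dataset.
from ray.data import Dataset, DatasetPipeline
```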
The simple shuffle currently implemented in Datasets does not reliably scale beyond ~1000 partitions due to metadata and I/O overhead.
This PR adds an experimental implementation of a "push-based shuffle", as described in this paper draft. This algorithm should perform better at larger data scales. It works by merging intermediate map outputs on the reducer side while other map tasks are still executing; a final reduce task then merges these merged outputs.
Currently, the PR exposes this option through the DatasetContext. It can also be set through a hidden OS environment variable (RAY_DATASET_PUSH_BASED_SHUFFLE). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.
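A minimal sketch of opting in (the env var is named in this PR; the `use_push_based_shuffle` field name is an assumption):

```python
import ray
from ray.data.context import DatasetContext

# Opt in to the experimental push-based shuffle for this driver process.
# The field name `use_push_based_shuffle` is an assumption.
DatasetContext.get_current().use_push_based_shuffle = True

# Alternatively, set RAY_DATASET_PUSH_BASED_SHUFFLE=1 before starting Ray.
ds = ray.data.range(1000).random_shuffle()
```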
## Related issue number

Closes #23758.
In the current code base, `multiprocessing.Pool.imap_unordered` fails when it is called with an iterator (for which the length is not known on the first call). For example, the following code would fail:
```python
import ray.util.multiprocessing as raymp

# test function
def func(input):
    print('run func [{}]'.format(input))
    return input

with raymp.Pool() as pool:
    # this fails with a TypeError (could not serialize)
    print('use an iterator')
    for x in pool.imap_unordered(func, iter(range(5))):
        print('Finished [{}]'.format(x))
```
## Summary of changes
* I made changes to the `ResultThread` class that enable it to work with argument `total_object_refs=0`. This will let it run until a call to `stop()` is received.
* I have adapted the `IMapIterator` class to better check input arguments and distinguish between iterables and iterators (see the sketch after this list).
* The super classes `OrderedIMapIterator` and `UnorderedIMapIterator` have been updated to stop appropriately when iterators are used, and explicitly stop the `_result_thread`.
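For reference, the iterable-vs-iterator distinction can be made with `collections.abc`; this is an illustrative sketch, not the exact code from the PR:

```python
import collections.abc

def is_iterator(obj):
    # Iterators (e.g. generators, iter(range(5))) are consumed lazily, so
    # their total length is not known up front; plain iterables such as
    # lists are not their own iterators and do report a length.
    return isinstance(obj, collections.abc.Iterator)
```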
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
This adds the `RLPredictor` implementation as the counterpart to the `RLTrainer`. An evaluation using the predictor was added to the RL trainer end-to-end example.
Adds a new flag `stop_last_trials` to AsyncHyperband that, when set to `False`, allows the last trials of each bracket to continue training past `max_t`. This feature already existed for synchronous HyperBand, and the extension was requested in #14235.
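Usage sketch (scheduler and parameter names per the Ray Tune API, to the best of my knowledge):

```python
from ray.tune.schedulers import AsyncHyperBandScheduler

# With stop_last_trials=False, the last trial(s) of each bracket may keep
# training past max_t instead of being stopped there.
scheduler = AsyncHyperBandScheduler(max_t=100, stop_last_trials=False)
```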
This PR makes function imports lazy. This has several benefits:
- Worker startup is faster, since it doesn't need to go through all exported functions.
- GCS pressure is lower, since 1) we don't need to export keys and 2) all loads are done on demand.
- It gets rid of the function table channel.
Previously, the `TimeoutStopper` did not work after recovering from a checkpoint at a later time, because the original start time plus the budget had already been exceeded. Instead, we now track a timeout budget that is decremented over time and properly saved in checkpoints, so that recovery at a later time works.
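Usage is unchanged; only the bookkeeping differs (a sketch):

```python
from ray.tune.stopper import TimeoutStopper

# The stopper now checkpoints its *remaining* budget, so a run restored
# at a later time still gets whatever portion of the hour it hasn't used.
stopper = TimeoutStopper(timeout=3600)  # seconds
```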
It is sometimes hard to find all failing tests in Buildkite output logs - even filtering for "FAILED" is cumbersome, as the output can be overloaded. This PR adds a small utility that adds a short summary log in a separate output section at the end of the Buildkite job.
The only directory shared between the Buildkite host machine and the test Docker container is `/tmp/artifacts:/artifact-mount`. Thus, we write the summary file to this directory and delete it in the `post-commands` hook, before it would actually be uploaded as an artifact.
ray.train.Trainer and ray.tune.integration.*.DistributedTrainableCreator will be deprecated in Ray 2.0 in favor of Ray AIR. In Ray 1.13, we should warn about this pending deprecation.
First step towards #23014
This PR adds basic stats instrumentation of split_at_indices(), the first stage in fully instrumenting split operations. See https://github.com/ray-project/ray/issues/24178 for future steps.
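For reference, a minimal sketch of the instrumented operation:

```python
import ray

ds = ray.data.range(10)
left, middle, right = ds.split_at_indices([3, 7])

# With this PR, the split stage shows up in the per-dataset stats output.
print(left.stats())
```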
Rolling out the next deprecation cycle:
- Deprecation warnings that were previously emitted via `warnings.warn` or `logger.warn` are now raised as errors.
- Deprecation warnings that were previously raised as errors are now removed.
- Notably, this involves deprecating the `TrialCheckpoint` functionality and the associated cloud tests.
- Added annotations to the deprecation warnings indicating when to fully remove them.
Ray SGD v1 has been marked as a deprecated API for a while. This PR fully deprecates Ray SGD v1: an error is now raised if the `ray.util.sgd` package is imported.
Closes #16435.
Show usage stats prompt when it's enabled.
The current UX is:
* The usage stats enabled or disabled message is shown every time, in both the terminal and the dashboard.
* If users don't explicitly enable or disable usage stats, the first time they start a Ray cluster interactively, they will be asked to confirm, and collection is enabled if there is no user action within 10 seconds. If the session is non-interactive, collection is enabled by default without confirmation.
* ray.init() doesn't collect usage stats.
* Usage stats can be disabled via three approaches: 1. the RAY_USAGE_STATS_ENABLED env var, 2. `ray xxx --disable-usage-stats`, 3. `ray disable-usage-stats`.
Commit 2cf4c72 ("[ray client] Fix ctrl-c for ray.get() by setting a short server-side timeout") introduced a short server-side timeout so that later operations are not blocked.

However, the fix implicitly assumes that get() completes within MAX_BLOCKING_OPERATION_TIME_S (two seconds). This becomes a problem when apps use heavy objects or have limited network I/O bandwidth, so that pushing all chunks takes more than two seconds. The current retry logic then re-pushes from the first chunk, blocking clients in an infinite re-push loop.

I updated the logic to pass the timeout through directly when it is explicitly given. Without a timeout, it still uses MAX_BLOCKING_OPERATION_TIME_S for polling with the short server-side timeout.
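The user-facing effect, sketched:

```python
import ray

ray.init()
large = b"x" * (100 * 1024 * 1024)  # a payload that takes >2s to transfer
ref = ray.put(large)

# An explicit timeout is now forwarded directly instead of being capped
# at MAX_BLOCKING_OPERATION_TIME_S on each attempt.
value = ray.get(ref, timeout=60)

# Without a timeout, polling still uses the short server-side timeout.
value = ray.get(ref)
```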
This PR adds support for out-of-band serialization of datasets, i.e. for serializing and deserializing datasets across Ray clusters by serializing the dataset lineage. This PR is the final PR in a set to add such support (3/3).
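A sketch of the intended flow, assuming the APIs are named `serialize_lineage` / `deserialize_lineage`:

```python
import ray

ds = ray.data.range(100).map(lambda x: x * 2)

# Serialize the dataset's lineage to portable bytes (API names assumed).
blob = ds.serialize_lineage()

# ... ship `blob` to another Ray cluster, then reconstruct the dataset:
ds2 = ray.data.Dataset.deserialize_lineage(blob)
```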
Our current behavior is to drop all args/kwargs, for both HTTP and Python, if the user's deployment function doesn't take any input. But at the same time, we didn't throw anything if the user tried to invoke the function in Python with actual args.
This PR adds that error back, with some special handling for the HTTP case explained in in-line comments.
Now that `deployment_cls.bind()` directly returns a `ClassMethodNode`, this adds a test to ensure that a chain of class method calls is consistent across the Ray DAG and the Serve DAG (sketched below).
Note this only works with a single replica: if the class method mutates replica state and multiple replicas are running, replica states / results won't be consistent when requests are routed to different ones.
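A sketch of the chained pattern under test (the deployment and method here are illustrative):

```python
from ray import serve

@serve.deployment
class Counter:
    def __init__(self):
        self.count = 0

    def incr(self, delta):
        self.count += delta
        return self.count

node = Counter.bind()        # a node in the deployment DAG
chained = node.incr.bind(1)  # a ClassMethodNode; calls can be chained further
```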
Users may want to provide Ray task option overrides for write tasks, e.g. having write tasks retried on application-level exceptions (`retry_exceptions=True`) or changing the default number of retries (`max_retries=8`). This commit adds support for providing such task options for write tasks.
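A hedged example of what this enables (the `ray_remote_args` parameter name is an assumption):

```python
import ray

ds = ray.data.range(1000)

# Retry write tasks on application-level exceptions and raise the retry cap.
ds.write_parquet(
    "/tmp/out",
    ray_remote_args={"retry_exceptions": True, "max_retries": 8},
)
```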
A `ValueError("Resource quantities >1 must be whole numbers.")` is raised if `num_cpus` is greater than 1 and not an integer.
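A minimal repro sketch:

```python
import ray

# Fractional resource quantities above 1 are rejected with
# ValueError: Resource quantities >1 must be whole numbers.
@ray.remote(num_cpus=1.5)
def f():
    pass
```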
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
Follow-up from #23908
Instead of manually deleting checkpoint paths after calling `to_directory()`, we should utilize `Checkpoint.as_directory()` when possible.
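A sketch of the pattern change, given a `Checkpoint` object `checkpoint` (`process_checkpoint_dir` is a hypothetical consumer):

```python
import shutil

# Before: manual cleanup after to_directory().
path = checkpoint.to_directory()
try:
    process_checkpoint_dir(path)  # hypothetical consumer
finally:
    shutil.rmtree(path)

# After: as_directory() yields the path and handles cleanup of temporary
# directories on exit.
with checkpoint.as_directory() as path:
    process_checkpoint_dir(path)
```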
The total execution time for multi-stage operations being logged twice in the dataset stats is [confusing to users](https://github.com/ray-project/ray/issues/23915), making it seem like each stage in the operation took the same amount of time. This PR modifies the stats output for multi-stage operations so that the total execution time is printed once, as a top-level op stats line, with the stats for each of the (sub)stages indented and without the repeated total execution time.
This also opens the door for other op-level stats (e.g. peak memory utilization) and per-substage stats (e.g. total substage execution time).
This PR refactors ExecutionPlan to maintain complete stage lineage, even for eagerly computed datasets, while ensuring that block references are unlinked as early as possible in order to more eagerly release block memory. This PR is the final precursor to adding the actual out-of-band serialization APIs (PR 3/3).
The full lineage has to be maintained, even for eagerly computed datasets, since the lineage is needed for out-of-band serialization of datasets.
Adds a content-type-agnostic partition parser with support for filtering files. Also adds some corner-case bug fixes and usability improvements for supporting more robust input path types.
* Revert "Revert "[tune] Also interrupt training when SIGUSR1 received" (#24085)"
This reverts commit 00595653ed.
The failure on Windows has been addressed by conditionally registering the signal handler only if it is available.
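The conditional registration pattern looks roughly like this (`handler` stands in for Tune's actual handler):

```python
import signal

def handler(signum, frame):
    # Stand-in for Tune's actual interrupt handling.
    raise KeyboardInterrupt

# SIGUSR1 is not defined on Windows, so register it only when available.
if hasattr(signal, "SIGUSR1"):
    signal.signal(signal.SIGUSR1, handler)
```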
Creates a zip of the session_latest dir, named with the test name and a timestamp, upon Python test failure. Writes to the dir specified by the env var `RAY_TEST_FAILURE_LOGS_DIR`. No-op if the env var is not set.
Downstream consumer (e.g. CI) can upload all created artifacts in this dir. Thereby, PR submitters can more easily debug their CI failures, especially if they can't repro locally.
Limitations:
- A conftest.py file importing the main Ray conftest.py needs to be present in the same dir as the test. This presents a challenge for e.g. dashboard tests, which are highly scattered.
This PR implements the `ray list tasks` and `ray list objects` APIs.
NOTE: You can ignore the merge conflict for now. It exists because the first PR was reverted; there's a fix PR open now.
Serve stores context state, including the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` in `api.py`. However, these data structures are referenced throughout the codebase, causing circular dependencies. This change introduces two new files:
* `context.py`
* Intended to expose process-wide state to internal Serve code as well as `api.py`
* Stores the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` global variables
* `client.py`
* Stores the definition for the Serve `Client` object, now called the `ServeControllerClient`
- Closes #23874 by fixing a typo ("num_gpus" -> "num-gpus").
- Adds end-to-end test logic confirming the fix.
- Adds end-to-end test logic confirming autoscaling with custom resources works.
- Slightly refines developer instructions.
- Deflakes test logic a bit by allowing for the event that the head pod changes its identity as the Ray cluster starts up.
Since remote calls provide no ordering guarantees, it could happen that `reconfigure` gets called before `is_allocated`. Since `reconfigure` runs the user initialization code, the replica actor could get blocked and never respond to its allocation check.
This PR ensures that the allocation proof has been received before we run the replica initialization.
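A simplified model of the enforced ordering (not Serve's actual code):

```python
import ray

@ray.remote
class Replica:
    def is_allocated(self):
        # Returning at all proves the actor has been placed and started.
        return True

    def reconfigure(self, user_config):
        # Runs (potentially blocking) user initialization code.
        return user_config

replica = Replica.remote()

# Wait for the allocation proof before any call that may run user code.
ray.get(replica.is_allocated.remote())
ray.get(replica.reconfigure.remote({"num_workers": 2}))
```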