The postprocess checkpoint method was introduced to allow adding data to function runner checkpoint directories before they are uploaded to external (cloud) storage. Instead, we should just use the existing separation of `save_checkpoint()` and `save()`.
The simple shuffle currently implemented in Datasets does not reliably scale past ~1000 partitions due to metadata and I/O overhead.
This PR adds an experimental shuffle implementation for a "push-based shuffle", as described in this paper draft. This algorithm should see better performance at larger data scales. It works by merging intermediate map outputs on the reducer side while other map tasks are still executing; a final reduce task then merges these intermediate merged outputs.
Currently, the PR exposes this option through the `DatasetContext`. It can also be set through a hidden OS environment variable (`RAY_DATASET_PUSH_BASED_SHUFFLE`). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.
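A minimal sketch of opting in programmatically; the `use_push_based_shuffle` flag name is an assumption, so check the PR for the exact attribute:

```
import ray
from ray.data.context import DatasetContext

# Assumed flag name for illustration; equivalently, set the
# RAY_DATASET_PUSH_BASED_SHUFFLE=1 environment variable before starting Ray.
ctx = DatasetContext.get_current()
ctx.use_push_based_shuffle = True

ds = ray.data.range(10_000).random_shuffle()
```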
Redo for #23758 to fix CI.
For tasks with the node affinity scheduling strategy, the resource demands shouldn't create new nodes. This PR achieves this by not reporting their demand to the autoscaler. In the future, we will explore sending scheduling strategy information to the autoscaler.
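For reference, a small sketch of the scheduling strategy in question; with this change, a task pinned this way no longer causes the autoscaler to launch new nodes:

```
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()

@ray.remote
def pinned_task():
    return "done"

# Pin the task to the current node; its resource demand is no longer
# reported to the autoscaler, so it cannot trigger node launches.
node_id = ray.get_runtime_context().node_id.hex()
ref = pinned_task.options(
    scheduling_strategy=NodeAffinitySchedulingStrategy(node_id=node_id, soft=False)
).remote()
print(ray.get(ref))
```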
Adds a basic plotting feature for Ray DAGs.
`ray.experimental.dag.plot(dag: DAGNode, to_file=None)`
### Behavior
1. Dumps the DAG plot (in Dot format) to a file.
2. Also renders the image whenever possible. E.g., if running in a Jupyter notebook, the image will not only be saved but also rendered in the notebook.
3. When `to_file` is not set (i.e. `None`), the plot is saved to a tempfile for rendering purposes only. This is common when users plot DAGs in a notebook environment to explore the DAG structure without wanting to save it to a file.
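A minimal usage sketch based on the signature above; the small task DAG here is illustrative only:

```
import ray
from ray.experimental.dag import plot

@ray.remote
def add(a, b):
    return a + b

# Build a small task DAG with .bind() and write its Dot plot to a file.
# With to_file=None, the plot would instead go to a tempfile and only be
# rendered (e.g. inline in a Jupyter notebook).
dag = add.bind(1, add.bind(2, 3))
plot(dag, to_file="dag.png")
```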
Refactors `_get_unique_value_indices` (used in the Encoder preprocessors) for much improved performance with multiple columns. Also uses the same, more robust intermediate dataset format in `_get_most_frequent_values` (Imputers).
The existing unit tests pass, and no functionality has been changed.
RayDP has updated their code, so the tests can now be re-enabled.
In addition, we want to support Ray Client in RayDP dataset operations. Right now, if users call `dataset.to_spark(spark)` while using Ray Client, it fails immediately because the local Ray worker is not connected. By wrapping the call in a function decorated with `@client_mode_wrap`, it works regardless of whether Ray Client is used.
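A sketch of the wrapping approach; the import path of `client_mode_wrap` is an internal detail and may differ across Ray versions:

```
# Internal import path as of this change; may move between Ray versions.
from ray._private.client_mode_hook import client_mode_wrap

@client_mode_wrap
def _to_spark(ds, spark):
    # When Ray Client is active, this body runs on the cluster instead of
    # locally, so it does not require a connected local Ray worker.
    return ds.to_spark(spark)
```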
Lineage-based serialization isn't supported for fan-in operations such as unions and zips. This PR adds documentation indicating as much, and ensures that a good error message is raised.
When using the actor compute model for batch mapping (e.g. in batch inference), map tasks are often blocked waiting for their dependencies to be fetched since we submit one actor task at a time. This commit changes the default behavior of the actor compute model to have up to two actor tasks in flight for each actor in order to better pipeline task dependency fetching with the actual compute.
This "max tasks in flight per actor worker" is also made configurable, in case a particular use case warrants more aggressive pipelining (e.g. big blocks and/or fast maps) or more conservative pipelining (e.g. small data or slow maps).
Refactors Dataset splitting to make it less hacky and address the TODO. Also makes Dataset ingest in general configurable for Ray Train. This is an internal-only change for now, but it sets the stage for the proposed ingest API.
Customizable ingest for GBDT Trainers is out of scope for this PR.
Referencing the `DatasetPipeline` class currently requires `ray.data.dataset_pipeline.DatasetPipeline`; we should expose it directly in the `ray.data` module, as we do for `Dataset`.
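After this change, both import paths work:

```
from ray.data.dataset_pipeline import DatasetPipeline  # existing path
from ray.data import DatasetPipeline                    # newly exposed
```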
The simple shuffle currently implemented in Datasets does not reliably scale past ~1000 partitions due to metadata and I/O overhead.
This PR adds an experimental shuffle implementation for a "push-based shuffle", as described in this paper draft. This algorithm should see better performance at larger data scales. It works by merging intermediate map outputs on the reducer side while other map tasks are still executing; a final reduce task then merges these intermediate merged outputs.
Currently, the PR exposes this option through the `DatasetContext`. It can also be set through a hidden OS environment variable (`RAY_DATASET_PUSH_BASED_SHUFFLE`). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.
## Related issue number
Closes #23758.
In the current code base, `ray.util.multiprocessing.Pool.imap_unordered` fails when it is called with an iterator (whose length is not known up front). For example, the following code would fail:
```
import ray.util.multiprocessing as raymp

# test function
def func(input):
    print('run func [{}]'.format(input))
    return input

with raymp.Pool() as pool:
    # this fails with a TypeError (could not serialize)
    print('use an iterator')
    for x in pool.imap_unordered(func, iter(range(5))):
        print('Finished [{}]'.format(x))
```
## Summary of changes
* I made changes to the `ResultThread` class that enable it to work with the argument `total_object_refs=0`. This lets it run until a call to `stop()` is received.
* I adapted the `IMapIterator` class to better check input arguments and distinguish between iterables and iterators.
* The subclasses `OrderedIMapIterator` and `UnorderedIMapIterator` have been updated to stop appropriately when iterators are used, and to explicitly stop the `_result_thread`.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
This adds the `RLPredictor` implementation as the counterpart to the `RLTrainer`. An evaluation using the predictor was added to the RL trainer end-to-end example.
Adds a new flag `stop_last_trials` to AsyncHyperband that allows the last trials of each bracket to continue training after `max_t`. This feature existed for synchronous hyperband before, and the extension had been requested in #14235.
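A small sketch with the new flag; the toy trainable is illustrative only:

```
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler

def trainable(config):
    for step in range(200):
        tune.report(score=step * config["lr"])

# stop_last_trials=False lets the last trials of each bracket keep
# training past max_t instead of being stopped there.
scheduler = AsyncHyperBandScheduler(
    metric="score",
    mode="max",
    max_t=100,
    stop_last_trials=False,
)
tune.run(trainable, config={"lr": tune.grid_search([0.1, 0.01])}, scheduler=scheduler)
```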
This PR makes function imports lazy. Several benefits of this:
- Worker startup is faster, since it doesn't need to go through all exported functions.
- GCS pressure is smaller, since 1) we don't need to export keys and 2) all loads are done when needed.
- We get rid of the function table channel.
Previously, the `TimeoutStopper` did not work when an experiment was restored from a checkpoint at a later time, because the original start time plus budget had already been exceeded. Instead, we now track a timeout budget that is decreased over time and properly saved in checkpoints, so that restoring in the future works.
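For reference, a usage sketch of the stopper in question; the toy trainable is illustrative only:

```
import time
from ray import tune
from ray.tune.stopper import TimeoutStopper

def trainable(config):
    while True:
        time.sleep(1)
        tune.report(iterations=1)

# The remaining timeout budget is now decreased over time and saved in
# checkpoints, so a run restored later still respects the overall budget.
tune.run(trainable, stop=TimeoutStopper(timeout=7200))
```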
It is sometimes hard to find all failing tests in Buildkite output logs; even filtering for "FAILED" is cumbersome, as the output can be overloaded. This PR adds a small utility that adds a short summary log in a separate output section at the end of the Buildkite job.
The only shared directory between the Buildkite host machine and the test Docker container is `/tmp/artifacts:/artifact-mount`. Thus, we write the summary file to this directory and delete it again in the `post-commands` hook, before the artifacts are actually uploaded.
`ray.train.Trainer` and `ray.tune.integration.*.DistributedTrainableCreator` will be deprecated in Ray 2.0 in favor of Ray AIR. In Ray 1.13, we should warn about this pending deprecation.
First step towards #23014
This PR adds basic stats instrumentation of `split_at_indices()`, the first stage in fully instrumenting split operations. See https://github.com/ray-project/ray/issues/24178 for future steps.
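For example, the split stage should now show up in the stats report:

```
import ray

ds = ray.data.range(10)
left, middle, right = ds.split_at_indices([3, 7])

# The split_at_indices() stage is now included in the stats output.
print(left.stats())
```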
Rolling out the next deprecation cycle:
- Deprecation warnings that were previously emitted via `warnings.warn` or `logger.warn` are now raised as errors
- Deprecation warnings that were previously raised are now removed entirely
- Notably, this involves deprecating the TrialCheckpoint functionality and the associated cloud tests
- Added annotations to deprecation warnings indicating when to fully remove them
Ray SGD v1 has been marked as a deprecated API for a while. This PR fully deprecates it: an error will be raised if the `ray.util.sgd` package is imported.
Closes #16435.
Show the usage stats prompt when usage stats collection is enabled.
The current UX is:
* The usage stats enabled or disabled message is shown every time, in both the terminal and the dashboard.
* If users don't explicitly enable or disable usage stats, the first time they start a Ray cluster interactively, they will be asked to confirm; collection is enabled if there is no user action within 10 seconds. If the session is non-interactive, collection is enabled by default without confirmation.
* `ray.init()` doesn't collect usage stats.
* Usage stats can be disabled via three approaches: 1. the `RAY_USAGE_STATS_ENABLED` env var, 2. `ray xxx --disable-usage-stats`, 3. `ray disable-usage-stats`.
Commit 2cf4c72 ("[ray client] Fix ctrl-c for ray.get() by setting a short server-side timeout") introduced a short server-side timeout so that later operations are not blocked.
However, the fix implicitly assumes that get() completes within MAX_BLOCKING_OPERATION_TIME_S (two seconds). This becomes a problem when apps use heavy objects or have limited network I/O bandwidth, such that pushing all chunks takes more than two seconds. The current retry logic then has to re-push from the beginning of the chunks, blocking clients in an infinite re-push loop.
I updated the logic to pass the timeout through directly when it is explicitly given. Without a timeout, it still uses MAX_BLOCKING_OPERATION_TIME_S for polling with the short server-side timeout.
This PR adds support for out-of-band serialization of datasets, i.e. for serializing and deserializing datasets across Ray clusters by serializing the dataset lineage. This PR is the final PR in a set to add such support (3/3).
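A minimal sketch, assuming the API surface is the `serialize_lineage()` / `deserialize_lineage()` pair:

```
import ray

ds = ray.data.range(100).map(lambda x: x * 2)

# Serialize the dataset's lineage (not its data) to plain bytes...
serialized = ds.serialize_lineage()

# ...and reconstruct the dataset from its lineage, e.g. on another cluster.
ds2 = ray.data.Dataset.deserialize_lineage(serialized)
```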
Our current behavior is to drop all args/kwargs, for both HTTP and Python, if the user's deployment function doesn't take any input. But at the same time, we didn't throw anything if a user tried to invoke the function in Python with actual args.
This PR adds that back, with a bit of special handling for the HTTP case explained in inline comments.
Now that we directly return a `ClassMethodNode` from `deployment_cls.bind()`, this adds a test to ensure that a chain of ClassMethod calls is consistent across the Ray DAG and the Serve DAG.
Note this only works with a single replica: if the class method mutates replica state and multiple replicas are running, replica state / results won't be consistent when requests are routed to different replicas.
Users may want to provide Ray task option overrides for write tasks, e.g. having write tasks retried on application-level exceptions (`retry_exceptions=True`) or changing the default number of retries (`max_retries=8`). This commit adds support for providing such task options for write tasks.
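A sketch, assuming the options are threaded through a `ray_remote_args` argument on the write APIs:

```
import ray

ds = ray.data.range(1000)

# Retry write tasks on application-level exceptions and raise the
# default retry count.
ds.write_parquet(
    "/tmp/out",
    ray_remote_args={"retry_exceptions": True, "max_retries": 8},
)
```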
A `ValueError("Resource quantities >1 must be whole numbers.")` is raised if `num_cpus` is greater than 1 and not an integer.
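For example (a sketch; the error is expected when the remote function is declared):

```
import ray

@ray.remote(num_cpus=0.5)   # OK: fractional quantities <= 1 are allowed
def light_task():
    pass

@ray.remote(num_cpus=1.5)   # expected to raise the ValueError above
def heavy_task():
    pass
```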
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
Follow-up from #23908
Instead of manually deleting checkpoint paths after calling `to_directory()`, we should utilize `Checkpoint.as_directory()` when possible.
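A minimal sketch; the `Checkpoint` import path varies by Ray version (e.g. `ray.ml.checkpoint` in Ray 1.13, `ray.air.checkpoint` later):

```
from ray.ml.checkpoint import Checkpoint

checkpoint = Checkpoint.from_dict({"weights": [1, 2, 3]})

# as_directory() materializes the checkpoint as a local directory and
# cleans up any temporary copy on exit, replacing the manual
# to_directory() + delete pattern.
with checkpoint.as_directory() as checkpoint_dir:
    print("checkpoint available at", checkpoint_dir)
```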