RayDP has updated its code, so the tests can be re-enabled now.
In addition, we want to support Ray Client in RayDP dataset operations. Right now, calling dataset.to_spark(spark) while using Ray Client fails immediately because the local Ray worker is not connected. By wrapping the call in a function decorated with @client_mode_wrap, it works whether or not Ray Client is used.
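A minimal sketch of this pattern, assuming the decorator is importable from `ray._private.client_mode_hook` (the path may differ between Ray versions) and the wrapped helper is serializable:

```python
# Sketch only: the import path below and the helper name are assumptions,
# not the exact code from this PR.
from ray._private.client_mode_hook import client_mode_wrap


@client_mode_wrap
def _to_spark(ds, spark):
    # With Ray Client connected, this body runs on the cluster, where a
    # local Ray worker is available; without Ray Client it runs locally.
    return ds.to_spark(spark)
```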
Lineage-based serialization isn't supported for fan-in operations such as unions and zips. This PR documents that limitation and ensures that a clear error message is raised.
When using the actor compute model for batch mapping (e.g. in batch inference), map tasks are often blocked waiting for their dependencies to be fetched since we submit one actor task at a time. This commit changes the default behavior of the actor compute model to have up to two actor tasks in flight for each actor in order to better pipeline task dependency fetching with the actual compute.
This "max tasks in flight per actor worker" is also made configurable, in case a particular use case warrants more aggressive pipelining (e.g. big blocks and/or fast maps) or more conservative pipelining (e.g. small data or slow maps).
Refactors Dataset splitting to make it less hacky and to address the existing TODO. Also makes Dataset ingest for Ray Train generally configurable. This is an internal-only change for now, but it sets the stage for the proposed ingest API.
Customizable ingest for GBDT Trainers is out of scope for this PR.
Currently concurrency groups are always calculated based on the full test cluster compute. Instead, smoke tests should use the smoke test cluster compute.
Referencing the DatasetPipeline class currently requires ray.data.dataset_pipeline.DatasetPipeline; we should expose it directly in the ray.data module, as we do for Dataset.
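After the change, the direct import would mirror how `Dataset` is already exposed:

```python
# Before: from ray.data.dataset_pipeline import DatasetPipeline
# After: exposed at the package level, like Dataset.
from ray.data import Dataset, DatasetPipeline
```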
The simple shuffle currently implemented in Datasets does not reliably scale beyond ~1000 partitions due to metadata and I/O overhead.
This PR adds an experimental implementation of a "push-based shuffle", as described in this paper draft. The algorithm should perform better at larger data scales: it merges intermediate map outputs on the reducer side while other map tasks are still executing, and a final reduce task then merges these merged outputs.
Currently, the PR exposes this option through the DatasetContext. It can also be set through a hidden OS environment variable (RAY_DATASET_PUSH_BASED_SHUFFLE). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.
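For example, opting in could look like the following (the `DatasetContext` attribute name is an assumption; the environment variable is the one named above):

```python
# Sketch: enable the experimental push-based shuffle for this process.
# use_push_based_shuffle is an assumed attribute name; alternatively, set
# the RAY_DATASET_PUSH_BASED_SHUFFLE environment variable before starting.
import ray
from ray.data.context import DatasetContext

DatasetContext.get_current().use_push_based_shuffle = True

ds = ray.data.range(100_000).random_shuffle()
```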
Related issue number
Closes #23758.
Use a separate compute config that uses smaller instance types and no object store memory limit for the new shuffle implementation. I verified that the config works on master for dataset_shuffle_* tests.
Related issue number
#24176: the added tests verify which instance types support the new shuffle implementation.
Resource load follows a different pattern than resource reporting: it is a simple RPC to the GCS, the GCS performs the aggregation, and the autoscaler later calls a GCS RPC to fetch the aggregated result.
This PR moves the load reporting to a separate RPC, which simplifies the syncer API.
In the current code base, `multiprocessing.Pool.imap_unordered` fails when it is called with an iterator (whose length is not known up front). For example, the following code fails:
```
import ray.util.multiprocessing as raymp

# test function
def func(input):
    print('run func [{}]'.format(input))
    return input

with raymp.Pool() as pool:
    # this fails with a TypeError (could not serialize)
    print('use an iterator')
    for x in pool.imap_unordered(func, iter(range(5))):
        print('Finished [{}]'.format(x))
```
## Summary of changes
* I made changes to the `ResultThread` class that enable it to work with argument `total_object_refs=0`. This will let it run until a call to `stop()` is received.
* I have adapted the `IMapIterator` class to better check input arguments and distinguish between iterables and iterators.
* The subclasses `OrderedIMapIterator` and `UnorderedIMapIterator` have been updated to stop appropriately when iterators are used and to explicitly stop the `_result_thread`.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
This adds the RLPredictor implementation as the counterpart to the RLTrainer. An evaluation using the predictor was added to the RL trainer end-to-end example.
Adds a new flag `stop_last_trials` to AsyncHyperband that allows the last trials of each bracket to continue training after `max_t`. This feature already existed for synchronous HyperBand, and the extension was requested in #14235.
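A hedged usage sketch (the scheduler class and argument placement follow current Ray Tune conventions and are assumptions about this PR's exact surface):

```python
# Sketch: allow the last trials of each bracket to train past max_t.
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler


def trainable(config):
    score = 0.0
    for _ in range(200):
        score += config["lr"]
        tune.report(score=score)


scheduler = AsyncHyperBandScheduler(
    max_t=100,
    stop_last_trials=False,  # new flag added in this PR
)

tune.run(
    trainable,
    config={"lr": tune.uniform(0.01, 0.1)},
    metric="score",
    mode="max",
    scheduler=scheduler,
    num_samples=8,
)
```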
This PR makes function import lazy. Several benefits of this:
- worker startup is faster since it does not need to go through all exported functions
- GCS pressure is lower since 1) we don't need to export keys and 2) all loads are done on demand
- gets rid of the function table channel
Previously, the `TimeoutStopper` did not work after restoring from a checkpoint at a later time, because the original start time plus budget had already been exceeded. We now track a timeout budget that is decremented and properly saved in checkpoints, so that future recovery works.
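For reference, a minimal usage sketch of the stopper whose budget tracking changed (constructor details may vary between Ray versions):

```python
# Sketch: stop the whole run after a wall-clock budget. With this change,
# the remaining budget is persisted in checkpoints, so a run restored later
# only consumes what is left instead of stopping immediately.
import time
from datetime import timedelta

from ray import tune
from ray.tune.stopper import TimeoutStopper


def trainable(config):
    while True:
        time.sleep(1)
        tune.report(heartbeat=1)


tune.run(trainable, stop=TimeoutStopper(timedelta(minutes=30)))
```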
It is sometimes hard to find all failing tests in Buildkite output logs - even filtering for "FAILED" is cumbersome because the output can be overwhelming. This PR adds a small utility that appends a short summary log in a separate output section at the end of the Buildkite job.
The only shared directory between the Buildkite host machine and the test docker container is `/tmp/artifacts:/artifact-mount`. Thus, we write the summary file to this directory, and delete it before actually uploading it as an artifact in the `post-commands` hook.
ray.train.Trainer and ray.tune.integration.*.DistributedTrainableCreator will be deprecated in Ray 2.0 in favor of Ray AIR. In Ray 1.13, we should warn about this pending deprecation.
First step towards #23014
This PR adds basic stats instrumentation of split_at_indices(), the first stage in fully instrumenting split operations. See https://github.com/ray-project/ray/issues/24178 for future steps.
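As an illustration of where the new stats surface (a sketch; the exact stats output is not specified here):

```python
# Sketch: split a dataset and inspect its stats, which now include
# instrumentation for the split_at_indices() stage.
import ray

ds = ray.data.range(10)
left, right = ds.split_at_indices([5])
print(left.stats())
```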
Rolling out the next deprecation cycle:
- Deprecation warnings that were previously emitted via `warnings.warn` or `logger.warn` are now raised as errors
- Deprecation warnings that were previously raised as errors are now removed
- Notably, this deprecates the TrialCheckpoint functionality and the associated cloud tests
- Added annotations to the deprecation warnings indicating when to fully remove them
This PR depends on #23754.
#23754 removes the need for an index in the StoreClient interface.
This PR unifies InternalKVInterface and StoreClient. Specifically, we implement an InternalKVInterface that wraps around StoreClient.
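Conceptually this is a thin adapter; a purely illustrative sketch of the wrapping idea follows (all names are made up for illustration - the real interfaces live in the GCS C++ code and differ in detail):

```python
# Illustrative adapter only; not the actual GCS interfaces.
class StoreClient:
    """Stand-in for the existing storage backend interface."""

    def get(self, table: str, key: str):
        raise NotImplementedError

    def put(self, table: str, key: str, value: bytes):
        raise NotImplementedError


class StoreClientInternalKV:
    """InternalKV-style interface implemented by wrapping a StoreClient."""

    def __init__(self, store_client: StoreClient, table: str = "internal_kv"):
        self._store = store_client
        self._table = table

    def get(self, key: str):
        return self._store.get(self._table, key)

    def put(self, key: str, value: bytes):
        return self._store.put(self._table, key, value)
```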