In the current code base, `ray.util.multiprocessing.Pool.imap_unordered` fails when called with an iterator (whose length is not known up front). For example, the following code fails:
```
import ray.util.multiprocessing as raymp

# test function
def func(input):
    print('run func [{}]'.format(input))
    return input

with raymp.Pool() as pool:
    # this fails with a TypeError (could not serialize)
    print('use an iterator')
    for x in pool.imap_unordered(func, iter(range(5))):
        print('Finished [{}]'.format(x))
```
## Summary of changes
* I made changes to the `ResultThread` class so that it works with the argument `total_object_refs=0`. This lets it run until a call to `stop()` is received.
* I adapted the `IMapIterator` class to better check input arguments and distinguish between iterables and iterators (see the sketch below).
* The subclasses `OrderedIMapIterator` and `UnorderedIMapIterator` have been updated to stop appropriately when iterators are used, and to explicitly stop the `_result_thread`.
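A minimal sketch of the iterable-vs-iterator distinction the new checks rely on (the helper name is hypothetical, not the actual `IMapIterator` code):

```
import collections.abc

def expected_result_count(arg):
    # Iterators have no length known up front, so signal "run until
    # stop()" with total_object_refs=0, as described above.
    if isinstance(arg, collections.abc.Iterator):
        return 0
    # Sized iterables such as lists or ranges have a known length.
    return len(arg)
```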
This adds the `RLPredictor` implementation as the counterpart to the `RLTrainer`. An evaluation using the predictor was added to the RL trainer end-to-end example.
Adds a new flag `stop_last_trials` to AsyncHyperband that, when set to `False`, allows the last trials of each bracket to continue training after `max_t`. This feature already existed for synchronous HyperBand, and the extension was requested in #14235.
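A minimal usage sketch of the new flag (the scheduler arguments other than `stop_last_trials` are illustrative):

```
from ray.tune.schedulers import AsyncHyperBandScheduler

scheduler = AsyncHyperBandScheduler(
    time_attr="training_iteration",
    max_t=100,
    grace_period=10,
    # New: let the last trial(s) of each bracket keep training past
    # max_t instead of being stopped there.
    stop_last_trials=False,
)
```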
This PR makes function imports lazy. Several benefits of this (a generic sketch of the pattern follows the list):
- Worker startup is faster, since it doesn't need to go through all exported functions.
- GCS pressure is reduced, since 1) we don't need to export keys and 2) all loads are done when needed.
- It gets rid of the function table channel.
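A generic sketch of the lazy-loading pattern described above (illustrative only, not Ray's internal code):

```
class LazyFunctionTable:
    def __init__(self, fetch_fn):
        self._fetch = fetch_fn  # fetches a function definition by key
        self._cache = {}        # populated on demand, not at worker startup

    def get(self, key):
        # Load a function only when it is first needed, instead of
        # importing everything that was ever exported at startup.
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]
```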
Previously, the `TimeoutStopper` did not work after restoring from a checkpoint at a later time, because the original start time plus budget had already been exceeded. Instead, we now track a timeout budget that is decreased over time and properly saved in checkpoints, so that recovery at a later point works.
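A minimal sketch of the budget-tracking idea (names are illustrative, not the actual `TimeoutStopper` implementation):

```
import time

class TimeoutBudget:
    def __init__(self, budget_s):
        self._budget_s = budget_s  # remaining budget; survives checkpointing
        self._last_check = time.monotonic()

    def expired(self):
        now = time.monotonic()
        self._budget_s -= now - self._last_check  # decrease remaining budget
        self._last_check = now
        return self._budget_s <= 0

    def state(self):
        # The remaining budget (not an absolute start time) is what gets
        # saved into checkpoints, so restoring later still works.
        return {"budget_s": self._budget_s}
```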
It is sometimes hard to find all failing tests in Buildkite output logs - even filtering for "FAILED" is cumbersome as the output can be overloaded. This PR adds a small utility that adds a short summary log in a separate output section at the end of the Buildkite job.
The only shared directory between the Buildkite host machine and the test docker container is `/tmp/artifacts:/artifact-mount`. Thus, we write the summary file to this directory, and delete it before actually uploading it as an artifact in the `post-commands` hook.
`ray.train.Trainer` and `ray.tune.integration.*.DistributedTrainableCreator` will be deprecated in Ray 2.0 in favor of Ray AIR. In Ray 1.13, we should warn about this pending deprecation.
First step towards #23014
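A sketch of the kind of pending-deprecation warning this adds (the warning class and message are assumptions):

```
import warnings

def _warn_pending_deprecation():
    # Illustrative only; the exact message in Ray 1.13 may differ.
    warnings.warn(
        "ray.train.Trainer will be deprecated in Ray 2.0 in favor of Ray AIR.",
        PendingDeprecationWarning,
        stacklevel=2,
    )
```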
This PR adds basic stats instrumentation of `split_at_indices()`, the first stage in fully instrumenting split operations. See https://github.com/ray-project/ray/issues/24178 for future steps.
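A quick usage sketch showing where the new stats surface (a minimal example, assuming a local Ray cluster):

```
import ray

ds = ray.data.range(10)
left, right = ds.split_at_indices([5])
# The stats output now includes the split_at_indices() stage.
print(left.stats())
```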
Rolling out the next deprecation cycle:
- Deprecation warnings that were previously emitted via `warnings.warn` or `logger.warn` are now raised as errors
- Deprecation warnings that were previously raised as errors are now removed
- Notably, this involves deprecating the `TrialCheckpoint` functionality and the associated cloud tests
- Added annotations to deprecation warnings noting when to fully remove them
This PR depends on #23754, which removes the need for an index in the `StoreClient` interface.
This PR unifies `InternalKVInterface` and `StoreClient`. Specifically, we implement an `InternalKVInterface` that wraps around `StoreClient` (a generic adapter sketch follows).
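A generic adapter sketch of the unification (method names are illustrative; the real interfaces live in the C++ GCS code):

```
class StoreClient:
    """Stand-in for the underlying storage interface."""
    def put(self, table, key, value): ...
    def get(self, table, key): ...

class InternalKV:
    """Wraps a StoreClient behind a simple key-value interface."""
    def __init__(self, store: StoreClient, table: str = "internal_kv"):
        self._store = store
        self._table = table

    def put(self, key, value):
        # Delegate to the underlying StoreClient; no index needed.
        self._store.put(self._table, key, value)

    def get(self, key):
        return self._store.get(self._table, key)
```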
Ray SGD v1 has been marked as a deprecated API for a while. This PR fully deprecates it: an error is now raised when the `ray.util.sgd` package is imported.
Closes #16435
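A sketch of what such an import-time guard can look like, e.g. at the top of the package's `__init__.py` (the exact exception type and message are assumptions):

```
# Illustrative only; the real guard in Ray may use a different
# exception type and message.
raise DeprecationWarning(
    "Ray SGD v1 is deprecated. Use Ray Train instead."
)
```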
Show usage stats prompt when it's enabled.
The current UX is:
* The usage stats enabled or disabled message is shown every time, in both the terminal and the dashboard.
* If users don't explicitly enable or disable usage stats, then the first time they start a Ray cluster interactively, they will be asked to confirm, and collection is enabled if there is no user action within 10 seconds. If the session is non-interactive, collection is enabled by default without confirmation.
* `ray.init()` doesn't collect usage stats.
* Usage stats can be disabled via three approaches: 1. the `RAY_USAGE_STATS_ENABLED` env var, 2. `ray xxx --disable-usage-stats`, 3. `ray disable-usage-stats` (a short sketch of the env var approach follows).
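For example, the env var approach can be applied before starting Ray (a minimal sketch; the value `"0"` is assumed to mean "disabled"):

```
import os

# Approach 1 from the list above: disable collection via the env var
# before starting any Ray cluster or driver from this process.
os.environ["RAY_USAGE_STATS_ENABLED"] = "0"
```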
These hooks are specific to our Buildkite setup and require some context to be edited successfully. Thus they should be protected by codeowner approval.
Currently all jobs that build wheels put them into the artifacts directory and upload them. This leads to the wheels being overwritten on S3 multiple times. This is not a huge problem as ingress is free, but in order to have a single point of reference, it might be beneficial to limit the wheels uploading to a single Buildkite job. Recently, this has led to interference with stale artifact directories.
The downside here is that if the "Wheels & Jars" build fails randomly, the wheels will not be available on S3 - previously they were also uploaded by several other jobs.
Commit 2cf4c72 ("[ray client] Fix ctrl-c for ray.get() by setting a short server-side timeout") introduced a short server-side timeout so that later operations are not blocked.

However, the fix implicitly assumes that get() completes within MAX_BLOCKING_OPERATION_TIME_S (two seconds). This becomes a problem when apps use heavy objects or have limited network I/O bandwidth, so that pushing all chunks takes more than two seconds. The current retry logic then re-pushes from the first chunk, blocking clients in an endless re-push cycle.

I updated the logic to pass the timeout through directly when it is explicitly given. Without a timeout, it still polls using MAX_BLOCKING_OPERATION_TIME_S with the short server-side timeout.
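A sketch of the updated logic under these assumptions (`server_get` is a stand-in for the client's underlying server call):

```
MAX_BLOCKING_OPERATION_TIME_S = 2.0  # the short server-side timeout

def server_get(ref, timeout):
    """Stand-in for the underlying server call; raises TimeoutError on expiry."""
    raise NotImplementedError

def client_get(ref, timeout=None):
    if timeout is not None:
        # Explicit timeout: pass it straight through, so a transfer that
        # needs more than two seconds is not restarted from the first chunk.
        return server_get(ref, timeout=timeout)
    # No explicit timeout: keep polling with the short server-side
    # timeout, which keeps ctrl-c responsive between attempts.
    while True:
        try:
            return server_get(ref, timeout=MAX_BLOCKING_OPERATION_TIME_S)
        except TimeoutError:
            continue
```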
When the GCS client reconnects to the GCS, it can attempt to re-subscribe to already-subscribed keys and channels. One mitigation is to unsubscribe before re-subscribing, but that is complicated to implement and was only done for actors. Instead, we can ignore duplicate subscriptions to the same key in the GCS client.
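A generic sketch of the dedup idea (illustrative only; the actual logic lives in the GCS client):

```
class DedupSubscriber:
    def __init__(self, do_subscribe):
        self._do_subscribe = do_subscribe
        self._subscribed = set()

    def subscribe(self, key):
        # On reconnect, subscribe() may be called again for keys we
        # already hold; skip the duplicates instead of unsubscribing first.
        if key in self._subscribed:
            return
        self._do_subscribe(key)
        self._subscribed.add(key)
```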
This PR adds support for out-of-band serialization of datasets, i.e. for serializing and deserializing datasets across Ray clusters by serializing the dataset lineage. This PR is the final PR in a set to add such support (3/3).
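A minimal usage sketch, assuming lineage-serialization methods named `serialize_lineage()` / `deserialize_lineage()` (treat the names as assumptions):

```
import ray

ds = ray.data.range(100).map(lambda x: x * 2)
# Serialize the dataset's lineage so it can be shipped out-of-band,
# e.g. to a different Ray cluster.
blob = ds.serialize_lineage()

# On the other cluster: rebuild the dataset from its lineage.
ds2 = ray.data.Dataset.deserialize_lineage(blob)
```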
Our current behavior is to drop all args/kwargs, for both HTTP and Python, if the user's deployment function doesn't take any input. But at the same time, we didn't throw anything if the user tried to invoke the function in Python with actual args.
This PR adds that error back, along with some special handling for the HTTP case explained in inline comments.
Now that we directly return a `ClassMethodNode` from `deployment_cls.bind()`, this adds a test to ensure a chain of class-method calls is consistent across the Ray DAG and the Serve DAG.
Note this only works with a single replica: if the class method mutates replica state and multiple replicas are running, replica state / results won't be consistent when requests are routed to different replicas.
`dataset_shuffle_random_shuffle_1tb` was previously failing due to OOM, but has now passed on the last 4 runs after changing the node type. These tests should be stable now, although we will want to look into the OOM issue later.
Users may want to provide Ray task option overrides for write tasks, e.g. having write tasks retried on application-level exceptions (`retry_exceptions=True`) or changing the default number of retries (`max_retries=8`). This commit adds support for providing such task options for write tasks.
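A usage sketch, under the assumption that the write APIs take these overrides via a `ray_remote_args` parameter (the parameter name is an assumption here):

```
import ray

ds = ray.data.range(100)
# Retry write tasks on application-level exceptions and allow up to
# 8 retries, per the example options in the description above.
ds.write_parquet(
    "/tmp/out",
    ray_remote_args={"retry_exceptions": True, "max_retries": 8},
)
```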