Updates the `@ray.method` error message to match the one for `@ray.remote`. Since the client-mode version of `ray.method` is identical to the regular `ray.method`, this deletes the client-mode version and drops the `client_mode_hook` decorator (guessing that the client copy was added before `client_mode_hook` was introduced).
Also fixes what I'm guessing is a bug that prevented `num_returns` and `concurrency_group` from being specified at the same time (`assert len(kwargs) == 1`).
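For illustration, a minimal sketch of the combination this fix is meant to allow (the actor, method, and values below are made up):
```
import ray

@ray.remote(concurrency_groups={"io": 2})
class Worker:
    # After this fix, num_returns and concurrency_group can be combined.
    @ray.method(num_returns=2, concurrency_group="io")
    def split(self, value):
        return value, value * 2

worker = Worker.remote()
a, b = worker.split.remote(3)
print(ray.get([a, b]))  # [3, 6]
```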
Closes #23271
Copied from #22571:
Whenever we spill, we try to spill all spillable objects. We also try to fuse small objects together to reduce total IOPS. If there aren't enough objects in the object store to meet the fusion threshold, we spill the objects anyway to avoid liveness issues. However, currently we spill at most the object fusion size when instead we should be spilling at least the fusion size. Then we use the max number of fused objects as a cap.
This PR fixes the fusion behavior so that we always spill at least the fusion size. If we reach the end of the spillable objects while still under the fusion threshold, we only spill the remainder if no other spills are pending. This gives the pending spills time to finish, and then we can re-evaluate whether it's necessary to spill the remaining objects. Liveness is still preserved.
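A rough Python sketch of the intended selection logic (the actual implementation lives in the C++ local object manager; the names, `obj.size` attribute, and structure here are illustrative only):
```
def select_objects_to_spill(spillable, fusion_min_bytes, max_fused_count, spills_in_flight):
    batch, batch_bytes = [], 0
    for obj in spillable:
        batch.append(obj)
        batch_bytes += obj.size
        # Spill at least the fusion size, capped by the max number of fused objects.
        if batch_bytes >= fusion_min_bytes or len(batch) >= max_fused_count:
            return batch
    # We ran out of spillable objects while still under the fusion threshold:
    # only spill the small remainder if no other spills are pending, so the
    # pending spills get a chance to finish before we re-evaluate. The remainder
    # is still spilled eventually, so liveness is preserved.
    if batch and not spills_in_flight:
        return batch
    return []
```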
Increases some test timeouts to allow tests to pass.
Initial draft of the interface for HuggingFaceTorchTrainer.
One alternative for limiting the number of datasets in datasets dict would be to have the user pass train_dataset and validation_dataset as separate arguments, though that would be inconsistent with TorchTrainer.
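A rough sketch of how the drafted interface might be used (the class name follows this draft; the import path, argument names, and dataset keys are assumptions and may change):
```
import ray
from ray.train.huggingface import HuggingFaceTorchTrainer  # import path is an assumption

def trainer_init_per_worker(train_dataset, eval_dataset, **config):
    # Build and return a transformers.Trainer from the per-worker dataset shards.
    ...

train_ds = ray.data.from_items([{"text": "a", "label": 0}, {"text": "b", "label": 1}])
eval_ds = ray.data.from_items([{"text": "c", "label": 1}])

trainer = HuggingFaceTorchTrainer(
    trainer_init_per_worker=trainer_init_per_worker,
    datasets={"train": train_ds, "evaluation": eval_ds},
    scaling_config={"num_workers": 2},
)
result = trainer.fit()
```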
placement_group_test_5 is flaky. The reason is that it requests a placement group with exactly the node's object store memory, so if the object store already contains an object, PG scheduling fails.
Also fixes a typo bug.
* Uniformly distributed tasks among actors to utilize full concurrency (see the sketch after this list)
* Added test to ensure all tasks are launched at the same time
* Applied lint formatting
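A hedged sketch of the round-robin idea from the first bullet (the actor, method, and counts are made up):
```
import itertools
import ray

@ray.remote
class Worker:
    def process(self, item):
        return item * 2

workers = [Worker.remote() for _ in range(4)]
worker_cycle = itertools.cycle(workers)

# Hand out tasks uniformly so every actor receives work immediately,
# instead of piling tasks onto a single actor.
refs = [next(worker_cycle).process.remote(i) for i in range(100)]
print(sum(ray.get(refs)))
```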
Experiment tags are not always rendered in a sane way for all operating systems. For instance, a config of
```
"a": tune.choice([(3, 4), (5, 6)]),
"b": tune.choice([[7, 8], [6, 5]]),
```
will lead to an experiment dir like `lambda_53737_00000_0_a=_3, 4_,b=[7, 8]_2022-04-02_10-21-27/`. This can lead to problems with utilities such as gsutil (which misinterprets some characters as wildcards, see #23670), but also with e.g. MacOS which doesn't like `[` brackets in filenames.
This PR improves the `_clean_value` function used to sanitize values. We specify a valid alphabet that includes a limited set of characters broadly usable across operating systems. We also simplify the `format_vars` function: even though it was previously a bit more sophisticated in handling list items, this was error-prone and can be replaced with a more readable and simpler implementation that yields the same results in almost all cases.
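A hedged sketch of the sanitization idea (the exact alphabet, truncation, and helper signature in Tune's `_clean_value` may differ):
```
import re

# Characters outside this alphabet get dropped; the exact set used by Tune may differ.
_INVALID = re.compile(r"[^a-zA-Z0-9_\-.,=]")

def clean_value(value, max_len: int = 20) -> str:
    # Stringify, strip anything that is not broadly filesystem/gsutil safe,
    # and truncate so directory names stay short.
    return _INVALID.sub("", str(value))[:max_len]

print(clean_value((3, 4)))   # "3,4"
print(clean_value([7, 8]))   # "7,8"
```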
`api.py` has accumulated classes and functions that aren't purely public APIs, causing circular dependencies. This change pulls `Deployment` and deployment graph-related features out of `api.py` and puts them in two new files: `deployment.py` and `deployment_graph.py`.
* Make default memory 1
* Add test to validate that ReplicaConfig's default memory cannot be lower than minimum
* Add a new option to memory_omitted_options
* Update if branch in test_replica_config_default_memory_minimum
* Make memory default value None
We use tarfile to pack/unpack directories in several locations. Instead of using temporary files, we can just use io.BytesIO to avoid unnecessary disk writes.
Note that this functionality is present in 3 different modules - in Ray (AIR), in the release test package, and in a specific release test. The implementations should live in the three modules independently, so we don't add a common utility for this (e.g. the ray_release package should be independent of the Ray package).
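A minimal sketch of the in-memory approach, assuming gzip-compressed tars; the helper names are illustrative, not the exact utilities in each module:
```
import io
import tarfile

def pack_dir_to_bytes(path: str) -> bytes:
    # Write the tar archive into an in-memory buffer instead of a temp file.
    stream = io.BytesIO()
    with tarfile.open(fileobj=stream, mode="w:gz") as tar:
        tar.add(path, arcname=".")
    return stream.getvalue()

def unpack_bytes_to_dir(data: bytes, target: str) -> None:
    # Read the archive back directly from memory.
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        tar.extractall(target)
```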
There are a few changes:
1. Between runner thread and main thread: the same stack trace is re-raised in `_report_thread_runner_error` in the main thread, so we can skip raising it in the runner thread.
2. Between function runner and Tune driver: Do not wrap RayTaskError in TuneError.
3. Within Tune driver code: introduces a per-trial `error.pkl` for errored trials and uses it to populate ResultGrid.
Plus some cleanups to facilitate propagating exceptions in runner and executor code.
Final stacktrace looks like: (omitted)
In Tune, we currently capture `traceback.format_exc()` at the time the exception is caught and just pass the string around. This PR changes that slightly: when a RayTaskError is raised, we pass the error object around instead.
It may be worthwhile to settle on a general practice for error handling in Tune.
I am also curious how other Ray libraries handle this and whether there are good lessons to learn.
In particular, we should watch out for memory leaks in exception handling. I'm not sure whether this is still a problem in Python 3, but here is an article I came across for reference:
https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
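For reference, a hedged illustration of the kind of leak the article describes (the sizes and names are made up):
```
import traceback

def run_trial():
    big_buffer = bytearray(10**8)  # pinned as long as the traceback is referenced
    raise RuntimeError("trial failed")

captured_errors = []
try:
    run_trial()
except RuntimeError as e:
    # Storing the formatted string (what Tune does today) lets the frames die...
    captured_errors.append(traceback.format_exc())
    # ...whereas storing `e` itself would keep big_buffer reachable through
    # e.__traceback__ until the stored reference is dropped or the traceback
    # is cleared (e.g. by setting e.__traceback__ = None).
```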
As discussed in #23424, the synch=True mode of PopulationBasedTrainingScheduler is (1) not compatible with burn_in_period and (2) causes the presence of TERMINATED trials to hang PAUSED trials indefinitely.
This change addresses (1) by setting the initial _next_perturbation_sync to the max of burn_in_period and perturbation_interval in the constructor, and (2) by checking only whether live trials have reached the _next_perturbation_sync before resuming PAUSED trials.
This PR addresses recent failures in the tune cloud tests.
In particular, this PR changes the following:
The trial runner will now wait for potential previous syncs to finish before syncing once more if force=True is supplied. This is to make sure that the final experiment checkpoints exist in the most recent version on remote storage. This likely fixes some flakiness in the tests.
We switched to new cloud buckets that don't interfere with other tests (and are less likely to be garbage collected)
We're now using dated subdirectories in the cloud buckets so that we don't interfere if two tests are run in parallel. Objects are cleaned up afterwards. The buckets are configured to remove objects after 30 days.
Lastly, we fix an issue in the cloud tests where the RELEASE_TEST_OUTPUT file was unavailable when run in Ray client mode (e.g. on Kubernetes).
Local release test runs succeeded.
https://buildkite.com/ray-project/release-tests-branch/builds/189
https://buildkite.com/ray-project/release-tests-branch/builds/191
A common user confusion is that their dataset parallelism is limited by the number of files. Add a warning if the available parallelism is much less than the specified parallelism, and tell the user to repartition() in that case.
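A hedged sketch of the check (the real warning lives in the Datasets read path; the threshold and message here are illustrative):
```
import warnings

def maybe_warn_low_parallelism(num_files: int, requested_parallelism: int) -> None:
    # Warn when the file count caps parallelism well below what was requested.
    if num_files < requested_parallelism // 2:
        warnings.warn(
            f"The number of input files ({num_files}) limits the dataset to "
            f"{num_files} blocks, which is much less than the requested "
            f"parallelism ({requested_parallelism}). Consider calling "
            "`.repartition()` to increase the number of blocks."
        )
```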
Continuation of #22449
Fixes pip activation so that something like this will not crash:
```
ray.init(runtime_env={"pip": ["toolz", "requests"]})
```
Also enables a test that hits this code path.
Various improvements to Ray Train fault tolerance.
Add more log statements for better debugging of Ray Train failure handling.
Fixes [Bug] [Train] Cannot reproduce fault-tolerance, script hangs upon any node shutdown #22349.
Simplifies fault tolerance by removing the backend-specific `handle_failure`. If any workers have failed, all workers will be restarted and training will continue from the last checkpoint.
Also adds a test for fault tolerance with an actual torch example. When testing locally, the test hangs before the fix, but passes after.
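A hedged sketch of the simplified policy described above (not the actual BackendExecutor code; the names below are placeholders):
```
class WorkerFailure(RuntimeError):
    """Placeholder signal for a failed training worker."""

def run_with_fault_tolerance(start_workers, run_training, last_checkpoint, max_failures=3):
    # On any worker failure, restart the whole worker group and resume from the
    # last checkpoint, instead of backend-specific failure handling.
    failures = 0
    while True:
        workers = start_workers()
        try:
            return run_training(workers, checkpoint=last_checkpoint)
        except WorkerFailure:
            failures += 1
            if failures > max_failures:
                raise
```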
The current behavior of workflow's `.options()` is to **completely rewrite all the options** rather than **update options**, which is less intuitive and inconsistent with the behavior of `.options()` in remote functions.
For example:
```
# Remote function
@ray.remote(num_cpus=2, max_retries=2)
def f():
    pass

f.options(num_cpus=1)
```
`options()` here **updated** num_cpus while **the rest of the options are untouched**, i.e. max_retries is still 2. This is the expected behavior and more intuitive.
```
# Workflow step
@workflow.step(num_cpus=2, max_retries=2)
def f():
    pass

f.options(num_cpus=1)
```
`options()` here **completely drops all existing options** and only sets num_cpus, i.e. the previous value of max_retries (2) is dropped and reverted to the default (3). It also drops other fields like `name` and `metadata` if they are given in the decorator but not in `options()`.
`test_metrics` ranks quite high on https://flakey-tests.ray.io/#owner=core. This test often hits the timeout limit; making the timeout larger should help the test pass.
In rare cases (#19274) (and possibly old versions of Ray), buffered results can lead to calling on_trial_complete multiple times with the same trial ID. In these cases, Optuna should gracefully handle this case and discard the results.
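A hedged sketch of the guard (not the exact OptunaSearch implementation; the wrapper and attribute names are illustrative):
```
class DedupingSearcher:
    """Sketch of discarding duplicate completions for the same trial ID."""

    def __init__(self, searcher):
        self._searcher = searcher
        self._completed = set()

    def on_trial_complete(self, trial_id, result=None, error=False):
        if trial_id in self._completed:
            # Buffered results can report the same trial twice; drop the
            # duplicate instead of forwarding it to Optuna again.
            return
        self._completed.add(trial_id)
        self._searcher.on_trial_complete(trial_id, result=result, error=error)
```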
Follow up from #22741, also use the new checkpoint interface internally. This PR is low friction and just replaces some internal bookkeeping methods.
With the new Checkpoint interface, there is no need to revamp the save/restore APIs completely. Instead, we will focus on the bookkeeping part, which takes place in the Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
Import an actor's dependencies when they are not found, so actor dependencies can be imported without the importer thread.
The remaining blockers for removing the importer thread are supporting `run_function_on_all_workers()` (running a function on all workers) and raising a warning when the same function/class is exported too many times.
The Serve REST API relies on YAML config files to specify and deploy deployments. This change introduces `serve.build()` and `serve build`, which translate Pipelines to YAML files.
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>