* Make default memory 1
* Add test to validate that ReplicaConfig's default memory cannot be lower than minimum
* Add a new option to memory_omitted_options
* Update if branch in test_replica_config_default_memory_minimum
* Make memory default value None
We use `tarfile` to pack/unpack directories in several locations. Instead of using temporary files, we can just use `io.BytesIO` to avoid unnecessary disk writes.
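A minimal sketch of the in-memory approach (helper names are illustrative, not the actual utilities):
```
import io
import tarfile

def pack_dir(path: str) -> bytes:
    """Pack a directory into an in-memory gzipped tar archive (no temp file)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(path, arcname=".")
    return buf.getvalue()

def unpack_dir(data: bytes, target_dir: str) -> None:
    """Unpack an in-memory tar archive into target_dir."""
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        tar.extractall(target_dir)
```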
Note that this functionality is present in 3 different modules - in Ray (AIR), in the release test package, and in a specific release test. The implementations should live in the three modules independently, so we don't add a common utility for this (e.g. the ray_release package should be independent of the Ray package).
There are a few changes:
1. Between the runner thread and the main thread: the same stacktrace is re-raised in `_report_thread_runner_error` in the main thread, so we can skip raising it in the runner thread.
2. Between the function runner and the Tune driver: do not wrap `RayTaskError` in `TuneError`.
3. Within the Tune driver code: introduce a per-trial `error.pkl` for errored trials and use it to populate `ResultGrid`.
Plus some cleanups to facilitate propagating exceptions in the runner and executor code.
Final stacktrace looks like: (omitted)
In Tune, we currently capture `traceback.format_exc` at the time the exception is caught and just pass the string around. This PR slightly changes that: when a `RayTaskError` is raised, we pass the exception object around instead.
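A minimal sketch of the idea (illustrative helper, not Tune's actual internals):
```
import traceback

from ray.exceptions import RayTaskError

def run_and_capture(fn):
    """Return (result, error), where error is either a RayTaskError object
    or, as before, a formatted traceback string."""
    try:
        return fn(), None
    except RayTaskError as e:
        # New behavior: keep the exception object so the original task
        # stacktrace can be surfaced later (e.g. in ResultGrid).
        return None, e
    except Exception:
        # Old behavior for everything else: capture only the formatted string.
        return None, traceback.format_exc()
```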
It may be worthwhile to settle on a general practice for error handling in Tune.
I am also curious to learn how other Ray libraries handle this and whether there are good lessons to borrow.
In particular, we should watch out for memory leaks in exception handling. Not sure if it is still a problem in Python 3, but here is an article I came across for reference:
https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
As discussed in #23424, the `synch=True` mode of the PopulationBasedTraining scheduler is (1) not compatible with `burn_in_period` and (2) causes the presence of TERMINATED trials to hang PAUSED trials indefinitely.
This change addresses (1) by setting the initial `_next_perturbation_sync` to the max of `burn_in_period` and `perturbation_interval` in the constructor, and (2) by checking only whether live trials have reached `_next_perturbation_sync` before resuming PAUSED trials.
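A rough sketch of the constructor change in (1), with illustrative names rather than the actual PBT implementation:
```
class _PBTSyncState:
    """Illustrative holder for the scheduler's sync bookkeeping."""

    def __init__(self, burn_in_period: float, perturbation_interval: float):
        # (1) Never schedule the first synchronized perturbation before the
        # burn-in period has elapsed.
        self._next_perturbation_sync = max(burn_in_period, perturbation_interval)

state = _PBTSyncState(burn_in_period=50.0, perturbation_interval=10.0)
assert state._next_perturbation_sync == 50.0
```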
This PR addresses recent failures in the tune cloud tests.
In particular, this PR changes the following:
The trial runner will now wait for potential previous syncs to finish before syncing once more if force=True is supplied (as sketched below). This ensures that the most recent version of the final experiment checkpoints exists on remote storage, which likely fixes some flakiness in the tests.
We switched to new cloud buckets that don't interfere with other tests (and are less likely to be garbage collected).
We're now using dated subdirectories in the cloud buckets so that we don't interfere if two tests are run in parallel. Objects are cleaned up afterwards. The buckets are configured to remove objects after 30 days.
Lastly, we fix an issue in the cloud tests where the RELEASE_TEST_OUTPUT file was unavailable when run in Ray client mode (e.g. on Kubernetes).
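A hypothetical sketch of the forced-sync behavior mentioned above (illustrative names, not the actual Tune syncer code):
```
import threading

class _Syncer:
    """Illustrative syncer with the forced-wait behavior."""

    def __init__(self):
        self._sync_thread = None  # in-flight background sync, if any

    def sync_up(self, force: bool = False):
        if force and self._sync_thread is not None:
            # Wait for the previous sync so the final experiment checkpoint
            # on remote storage reflects the most recent local state.
            self._sync_thread.join()
        self._sync_thread = threading.Thread(target=self._do_sync)
        self._sync_thread.start()

    def _do_sync(self):
        pass  # the actual upload logic would live here
```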
Local release test runs succeeded.
https://buildkite.com/ray-project/release-tests-branch/builds/189
https://buildkite.com/ray-project/release-tests-branch/builds/191
A common user confusion is that their dataset parallelism is limited by the number of files. Add a warning if the available parallelism is much less than the specified parallelism, and tell the user to call `repartition()` in that case.
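For example, a read whose file count caps the effective parallelism can be repartitioned afterwards (paths and numbers are made up):
```
import ray

# If "s3://bucket/data/" contains only a handful of files, the read yields far
# fewer blocks than requested; the new warning tells users to repartition().
ds = ray.data.read_parquet("s3://bucket/data/", parallelism=200)
ds = ds.repartition(200)  # raise the number of blocks after the read
```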
Continuation of #22449
Fix pip activation so that something like this will not crash:
```
ray.init(runtime_env={"pip": ["toolz", "requests"]})
```
Also enable a test that hits this code path.
Various improvements to Ray Train fault tolerance.
Add more log statements for better debugging of Ray Train failure handling.
Fixes [Bug] [Train] Cannot reproduce fault-tolerance, script hangs upon any node shutdown #22349.
Simplifies fault tolerance by removing backend specific handle_failure. If any workers have failed, all workers will be restarted and training will continue from the last checkpoint.
Also adds a test for fault tolerance with an actual torch example. When testing locally, the test hangs before the fix, but passes after.
The current behavior of workflow's `.options()` is to **completely rewrite all the options** rather than **update** them, which is less intuitive and inconsistent with the behavior of `.options()` on remote functions.
For example:
```
# Remote Function
@ray.remote(num_cpus=2, max_retries=2)
def f():
    pass

f.options(num_cpus=1)
```
`options()` here **updates** num_cpus while **the rest of the options are untouched**, i.e. max_retries is still 2. This is the expected and more intuitive behavior.
```
# Workflow Step
@workflow.step(num_cpus=2, max_retries=2)
def f():
    pass

f.options(num_cpus=1)
```
`options()` here **completely drops all existing options** and only sets num_cpus, i.e. the previous value of max_retries (2) is dropped and reverted to the default (3). This also drops other fields like `name` and `metadata` if they are given in the decorator but not in `options()`.
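A toy sketch of the intended "update" semantics (not the actual workflow implementation):
```
class _Step:
    """Toy stand-in for a workflow step holding its options."""

    def __init__(self, func, **options):
        self.func = func
        self.step_options = options

    def options(self, **new_options):
        # Merge instead of replace: keys not passed here keep their old values.
        return _Step(self.func, **{**self.step_options, **new_options})

step = _Step(lambda: None, num_cpus=2, max_retries=2)
updated = step.options(num_cpus=1)
assert updated.step_options == {"num_cpus": 1, "max_retries": 2}
```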
`test_metrics` ranks quite high on https://flakey-tests.ray.io/#owner=core. This test often hits the timeout limit; increasing the test size should help it pass.
In rare cases (#19274) (and possibly old versions of Ray), buffered results can lead to calling on_trial_complete multiple times with the same trial ID. The Optuna searcher should handle this gracefully and discard the duplicate results.
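A rough sketch of such a guard (illustrative names, not the actual OptunaSearch code):
```
import logging

logger = logging.getLogger(__name__)

class DuplicateSafeSearcher:
    """Illustrative searcher that ignores repeated completions for a trial."""

    def __init__(self):
        self._live_trials = set()

    def on_trial_add(self, trial_id: str):
        self._live_trials.add(trial_id)

    def on_trial_complete(self, trial_id: str, result=None, error: bool = False):
        if trial_id not in self._live_trials:
            logger.warning(
                "Got completion for unknown or already finished trial %s; discarding.",
                trial_id,
            )
            return
        self._live_trials.discard(trial_id)
        # ... report the result to the underlying Optuna study here ...
```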
Follow-up from #22741: also use the new checkpoint interface internally. This PR is low-friction and just replaces some internal bookkeeping methods.
With the new Checkpoint interface, there is no need to revamp the save/restore APIs completely. Instead, we will focus on the bookkeeping part, which takes place in the Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
Import an actor's dependencies on demand when they are not found, so actor dependencies can be loaded without the importer thread.
The remaining blockers for removing the importer thread are supporting `run_function_on_all_workers()` (running a function on all workers) and raising a warning when the same function/class is exported too many times.
The Serve REST API relies on YAML config files to specify and deploy deployments. This change introduces `serve.build()` and `serve build`, which translate Pipelines to YAML files.
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
Added ensemble model examples to the documentation. This was needed due to a user request, and there was no existing methodology outlining the creation of higher-level ensemble models.
Co-authored-by: Jiao Dong <sophchess@gmail.com>
In https://github.com/ray-project/ray/blob/ray-1.11.0/docker/ray-ml/Dockerfile, the order of pip install commands currently matters (potentially a lot). It would be good to run one big pip install command to avoid ending up with a broken env.
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
checkpoint_tmpxxxxxx directories must not be synced from the worker nodes to the head node.
Co-authored-by: Maxim Egorushkin <maxim.egorushkin@gmail.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Previously failed with
```
E ray.exceptions.RayTaskError(TypeError): ray::_prepare_read() (pid=166631, ip=10.103.212.102)
E File "/home/swang/ray/python/ray/data/read_api.py", line 902, in _prepare_read
E return ds.prepare_read(parallelism, **kwargs)
E File "/home/swang/ray/python/ray/data/datasource/datasource.py", line 331, in prepare_read
E input_files=None,
E TypeError: __init__() missing 1 required keyword-only argument: 'exec_stats'
```
This PR adds the missing arg.
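For reference, a sketch of the call site with the added argument (values are placeholders):
```
from ray.data.block import BlockMetadata

# The read task's metadata now includes the keyword-only `exec_stats` argument
# that the traceback above reports as missing.
metadata = BlockMetadata(
    num_rows=None,
    size_bytes=None,
    schema=None,
    input_files=None,
    exec_stats=None,
)
```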
Add tuner tests.
These tests mainly focus on non-Ray-client mode, covering successful runs, failures on both the driver and trainer side, and resuming.
One issue surfaced through writing the tests (which probably means the API is not quite right): whether RunConfig should be supplied in Tuner.__init__ vs. Tuner.fit(). At least for some fields in RunConfig, we want to be able to change them across runs (e.g. callbacks). Plus, with the current implementation, it's not possible to checkpoint "stateful" callbacks, which could confuse our users. cc @ericl for API input. See "test_tuner_with_xgboost_trainer_driver_fail_and_resume" (search for hack).
The PR also cleans up some API docs.
Fixes some bugs in loading a trial from a checkpoint. Namely, `get_default_resource` (which is probably not necessary given that `self.placement_group_factory` is already set anyway) was called with an empty config, because `self.config` is only loaded through `__setstate__`, which happens later than `get_default_resource`. Remove the call to `get_default_resource` when loading trials from checkpoint.
To address the issue https://github.com/ray-project/ray/issues/22824
Basically, the current behavior of `max_retries` in workflows differs from that of remote functions in the following ways:
1. workflow's max_retries is not the number of retries, but the number of total tries.
2. workflow's max_retries does not allow "-1" (infinite retries) while remote function's max_retries does.
This PR alters the behavior of `max_retries` in workflows to be consistent with `max_retries` in remote functions (see the sketch after the list):
1. make max_retries truly the maximum number of retries (i.e. total tries = original try + max retries)
- [x] implementation
- [x] update logging
- [x] update tests
2. make max_retries accept infinite tries (i.e. `max_retries=-1`)
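A small illustration of the new semantics, reusing the `@workflow.step` form shown earlier (step bodies are placeholders):
```
from ray import workflow

# max_retries=2 now means at most 1 original attempt + 2 retries = 3 total tries.
@workflow.step(max_retries=2)
def flaky_step():
    ...

# max_retries=-1 now means retry indefinitely, matching remote functions.
@workflow.step(max_retries=-1)
def very_flaky_step():
    ...
```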
Getting or creating a named actor is a common pattern; however, how to achieve it is somewhat esoteric. Add a utility option and a test that it doesn't cause any scary error messages.
Actor.options(name="my_singleton", get_if_exists=True).remote(args)
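For example, a minimal self-contained usage (the Counter actor is just illustrative):
```
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

# First call creates the named actor; later calls return the existing one
# instead of raising "actor already exists".
counter = Counter.options(name="my_singleton", get_if_exists=True).remote()
print(ray.get(counter.increment.remote()))
```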
This PR adds a test of KubeRay autoscaler integration to the Ray CI.
- Tests scaling with autoscaler.sdk.request_resources
- Tests autoscaler response to RayCluster CR change
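The first path uses the public SDK call, roughly like this (resource numbers are illustrative):
```
import ray
from ray.autoscaler.sdk import request_resources

ray.init(address="auto")

# Ask the autoscaler to scale the cluster so that at least 4 CPUs are available.
request_resources(num_cpus=4)
```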
Certain external integrations rely on `ray._private.use_gcs_for_bootstrap` to determine whether Ray is using the GCS to bootstrap. The current version of Ray always uses the GCS to bootstrap, so this should simply return True.
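A minimal sketch of the resulting behavior (the real function lives in Ray's private internals; this is only illustrative):
```
def use_gcs_for_bootstrap() -> bool:
    # Ray now always bootstraps via the GCS, so the compatibility shim can
    # unconditionally report True to external integrations that still check it.
    return True
```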