Various improvements to Ray Train fault tolerance.
Add more log statements for better debugging of Ray Train failure handling.
Fixes [Bug] [Train] Cannot reproduce fault-tolerance, script hangs upon any node shutdown #22349.
Simplifies fault tolerance by removing the backend-specific `handle_failure`. If any workers have failed, all workers will be restarted and training will continue from the last checkpoint.
Also adds a test for fault tolerance with an actual torch example. When testing locally, the test hangs before the fix, but passes after.
Running Datasets shuffle with 1TB data and 2k partitions sometimes times out due to a failed object fetch. This happens because the object directory notifies the PullManager that the object is already on the local node, even though it isn't. This seems to be a bug in the object directory.
To work around this on the PullManager side, this PR filters out the current node from the list of locations provided by the object directory. @jjyao confirmed that this fixes the issue for Datasets shuffle.
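The workaround amounts to dropping the local node from the candidate locations before pulling. A minimal Python sketch of the idea (the actual change lives in the C++ `PullManager`; the helper below is illustrative only):
```
def filter_pull_locations(locations, self_node_id):
    """Drop the local node from the location list reported by the object
    directory, since the object may not actually be present locally
    (illustrative helper, not the real C++ implementation)."""
    return [node_id for node_id in locations if node_id != self_node_id]


# The object directory (incorrectly) claims the object is on the local node.
locations = ["node-self", "node-a", "node-b"]
print(filter_pull_locations(locations, "node-self"))  # ['node-a', 'node-b']
```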
The current behavior of workflow's `.options()` is to **completely rewrite all the options** rather than **update the options**, which is less intuitive and inconsistent with the behavior of `.options()` on remote functions.
For example:
```
# Remote function
@ray.remote(num_cpus=2, max_retries=2)
def f():
    ...

f.options(num_cpus=1)
```
`options()` here **updates** num_cpus while **the rest of the options are untouched**, i.e. max_retries is still 2. This is the expected behavior and is more intuitive.
```
# Workflow step
@workflow.step(num_cpus=2, max_retries=2)
def f():
    ...

f.options(num_cpus=1)
```
`options()` here **completely drops all existing options** and only sets num_cpus, i.e. the previous value of max_retries (2) is dropped and reverted to the default (3). It also drops other fields like `name` and `metadata` if they are given in the decorator but not in `options()`.
This is the first PR to refactor scheduler data structures (See #22850).
Major changes:
- Hid the implementation details in the `ResourceRequest` and `TaskResourceInstances` classes, which expose public methods such as algebra and comparison operators (see the sketch after this list).
- Hid the differences between "predefined" and "custom" resources inside these 2 classes. Call sites can simply use the resource ID to access a resource, regardless of whether it is predefined or custom.
- The `predefined_resources` vector now always has the full length, so no more resizes are needed.
- Removed the `ResourceCapacity` class. Now "total" and "available" resources are stored in separate fields in `NodeResources`.
- Moved helper functions for FixedPoint vectors from `cluster_resource_data.h` to `fixed_point.h`.
- `ResourceID` now has static methods to get the resource IDs of predefined resources, e.g. `ResourceID::CPU()`.
- Encapsulated the unit-instance resource logic in `ResourceID`.
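A rough Python sketch of the shape these classes take after the refactor (the real implementations are in C++; names and signatures below are illustrative only):
```
class ResourceID:
    """Illustrative stand-in for the C++ ResourceID."""

    def __init__(self, rid):
        self.rid = rid

    @staticmethod
    def CPU():
        # Predefined resources get static constructors, e.g. ResourceID::CPU().
        return ResourceID("CPU")


class ResourceRequest:
    """Call sites work with ResourceIDs; whether a resource is predefined
    or custom is hidden inside the class."""

    def __init__(self, amounts=None):
        self._amounts = dict(amounts or {})  # resource id -> amount

    def get(self, resource_id):
        return self._amounts.get(resource_id.rid, 0.0)

    def set(self, resource_id, amount):
        self._amounts[resource_id.rid] = amount
        return self

    # Algebra and comparison operators are part of the public surface.
    def __add__(self, other):
        keys = set(self._amounts) | set(other._amounts)
        return ResourceRequest(
            {k: self._amounts.get(k, 0.0) + other._amounts.get(k, 0.0) for k in keys}
        )

    def __le__(self, other):
        return all(v <= other._amounts.get(k, 0.0) for k, v in self._amounts.items())


class NodeResources:
    """'total' and 'available' are now stored as two separate fields."""

    def __init__(self, total, available):
        self.total = total
        self.available = available


request = ResourceRequest().set(ResourceID.CPU(), 2).set(ResourceID("custom_accel"), 1)
node = NodeResources(
    total=ResourceRequest({"CPU": 8, "custom_accel": 2}),
    available=ResourceRequest({"CPU": 4, "custom_accel": 1}),
)
print(request <= node.available)  # True: the request fits into what is available
```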
Other planned changes that are not included in this PR:
- Rename ResourceRequest to ResourceSet, and move it to its own file.
- Remove the predefined vectors and always use maps.
Co-authored-by: Chong-Li <lc300133@antgroup.com>
Linkcheck is inherently flaky, so separate it from the normal LINT build which is never flaky. This also separates the verbose linkcheck logs, making it easier to read the LINT output.
`test_metrics` ranks quite high on https://flakey-tests.ray.io/#owner=core. The test often hits the timeout limit; increasing its test size should help it pass.
In rare cases (#19274) (and possibly with old versions of Ray), buffered results can lead to `on_trial_complete` being called multiple times with the same trial ID. Optuna should handle this gracefully and discard the duplicate results.
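A minimal sketch of such a guard, using illustrative names rather than the actual `OptunaSearch` implementation:
```
class DuplicateAwareSearcher:
    """Illustrative searcher wrapper that ignores repeated completions
    for the same trial ID (hypothetical, not Ray Tune's actual code)."""

    def __init__(self):
        self._completed_trials = set()

    def on_trial_complete(self, trial_id, result=None, error=False):
        if trial_id in self._completed_trials:
            # Buffered results may report the same trial twice; discard.
            return
        self._completed_trials.add(trial_id)
        # ... pass the result on to the underlying Optuna study here ...


searcher = DuplicateAwareSearcher()
searcher.on_trial_complete("trial_1", {"loss": 0.1})
searcher.on_trial_complete("trial_1", {"loss": 0.1})  # silently discarded
```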
Follow up from #22741, also use the new checkpoint interface internally. This PR is low friction and just replaces some internal bookkeeping methods.
With the new Checkpoint interface, there is no need to revamp the save/restore APIs completely. Instead, we will focus on the bookkeeping part, which takes place in the Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
Import an actor dependency when it is not found, so actor dependencies can be imported without the importer thread.
The remaining blockers for removing the importer thread are supporting running a function on all workers (`run_function_on_all_workers()`) and raising a warning when the same function / class is exported too many times.
The Serve REST API relies on YAML config files to specify and deploy deployments. This change introduces `serve.build()` and `serve build`, which translate Pipelines to YAML files.
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
The `serve status` command allows users to get their deployments' status info through the CLI. This change adds a tip to the health-checking documentation to inform users about `serve status`.
Added ensemble model examples to the documentation. This was needed due to a user request, as there was no methodology outlining the creation of higher-level ensemble models.
Co-authored-by: Jiao Dong <sophchess@gmail.com>
In https://github.com/ray-project/ray/blob/ray-1.11.0/docker/ray-ml/Dockerfile, the order of pip install commands currently matters (potentially a lot). It would be good to run one big pip install command to avoid ending up with a broken env.
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
checkpoint_tmpxxxxxx directories must not be synced from the worker nodes to the head node.
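A minimal sketch of the exclusion idea, using a hypothetical filter rather than Ray Tune's actual sync code:
```
import fnmatch


def should_sync(relative_path):
    """Hypothetical filter: skip temporary checkpoint directories when
    syncing trial data from worker nodes to the head node."""
    return not any(
        fnmatch.fnmatch(part, "checkpoint_tmp*") for part in relative_path.split("/")
    )


print(should_sync("trial_1/checkpoint_000010/model.pt"))     # True: sync it
print(should_sync("trial_1/checkpoint_tmp5b2a7f/model.pt"))  # False: skip it
```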
Co-authored-by: Maxim Egorushkin <maxim.egorushkin@gmail.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
We sometimes end up with stale wheel uploads from previous runs of a Buildkite agent. The result is that commit wheels are overwritten by wheels from old build jobs, effectively breaking the wheel build logic.
Example:
This Agent: https://buildkite.com/organizations/ray-project/agents/4b955117-2f6c-4849-b703-3457daf69f89
- builds wheels (in post-wheels tests) for a35ebc945b
- and then runs both the Ray CPP worker and the Train + Tune tests in 6746e9f
- Usually these two test jobs shouldn't upload any artifacts at all, but they do: the artifacts are the wheels from a35ebc945b, i.e. uncleaned leftovers from the first build task.
- See here for proof of artifact upload: https://buildkite.com/ray-project/ray-builders-pr/builds/27622#d11bc514-ebd8-4e0c-a2ce-826b9bad27de
The solution is thus to always clean up the artifacts directory on the worker, i.e. `rm -rf /artifact-mount/*`.
This PR adds two such cleanup instructions: once before commands are run and once after artifacts are uploaded. We could probably get away with just one of them, but it doesn't hurt to have both.
As noted in #22954, GPU utilization can be unavailable on consumer hardware, so the dashboard should not assume the value cannot be None.
There might be a better way to represent "not reported", but utilizations are currently summed up, which makes using a non-zero value to represent "not reported" hard to do.
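A minimal sketch of None-safe aggregation, assuming a list of per-GPU utilization values (illustrative, not the dashboard's actual code):
```
def sum_gpu_utilization(per_gpu_utilization):
    """Sum per-GPU utilization, skipping GPUs that did not report a value
    (None) instead of crashing on them."""
    return sum(u for u in per_gpu_utilization if u is not None)


print(sum_gpu_utilization([35, None, 80]))  # 115; the unreported GPU is skipped
```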
Previously failed with
```
E ray.exceptions.RayTaskError(TypeError): ray::_prepare_read() (pid=166631, ip=10.103.212.102)
E File "/home/swang/ray/python/ray/data/read_api.py", line 902, in _prepare_read
E return ds.prepare_read(parallelism, **kwargs)
E File "/home/swang/ray/python/ray/data/datasource/datasource.py", line 331, in prepare_read
E input_files=None,
E TypeError: __init__() missing 1 required keyword-only argument: 'exec_stats'
```
This PR adds the missing arg.
Add tuner tests.
These tests mainly focus on non-Ray-client mode, covering successful runs as well as failures on both the driver and trainer side, and resuming afterwards.
One issue that surfaced while writing the tests (which probably means the API is not quite right) is whether RunConfig should be supplied in the `Tuner` constructor vs. in `Tuner.fit()`. At least for some fields in RunConfig, we want to be able to change them across runs (e.g. callbacks). Plus, with the current implementation, it's not possible to checkpoint "stateful" callbacks, which could confuse our users. cc @ericl for API inputs. See "test_tuner_with_xgboost_trainer_driver_fail_and_resume" (search for hack).
The PR also cleans up some API docs.
Fixes some bugs in loading a trial from a checkpoint. Namely, `get_default_resource` (which is probably unnecessary given that `self.placement_group_factory` is already set anyway) is called with an empty config, because `self.config` is only loaded through `__setstate__`, which happens after `get_default_resource` is called. The fix removes the call to `get_default_resource` when loading trials from a checkpoint.