In rare cases (#19274) (and possibly old versions of Ray), buffered results can lead to on_trial_complete being called multiple times with the same trial ID. Optuna should handle this gracefully and discard the duplicate results.
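A minimal sketch of the duplicate guard (illustrative only, not the actual OptunaSearch implementation):

```python
class DedupingSearcher:
    """Illustrative searcher wrapper that ignores duplicate trial completions."""

    def __init__(self):
        self._completed_trials = set()

    def on_trial_complete(self, trial_id, result=None, error=False):
        if trial_id in self._completed_trials:
            # Buffered results can report the same trial twice; discard duplicates.
            return
        self._completed_trials.add(trial_id)
        # ... forward the result to the underlying Optuna study ...
```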
Follow-up to #22741: also use the new checkpoint interface internally. This PR is low friction and just replaces some internal bookkeeping methods.
With the new Checkpoint interface, there is no need to completely revamp the save/restore APIs. Instead, we focus on the bookkeeping part, which takes place in Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
Import an actor dependency when it is not found, so actor dependencies can be imported without the importer thread.
The remaining blockers to removing the importer thread are supporting running a function on all workers (`run_function_on_all_workers()`) and raising a warning when the same function / class is exported too many times.
The Serve REST API relies on YAML config files to specify and deploy deployments. This change introduces `serve.build()` and `serve build`, which translate Pipelines to YAML files.
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
The `serve status` command allows users to get their deployments' status info through the CLI. This change adds a tip to the health-checking documentation to inform users about `serve status`.
Added ensemble model examples to the documentation. This was needed due to a user request, as there was no methodology outlining the creation of higher-level ensemble models.
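For context, a minimal sketch of the kind of higher-level ensemble described in the examples (illustrative only, not the exact documentation code):

```python
import numpy as np

class EnsembleModel:
    """Averages the predictions of several underlying member models."""

    def __init__(self, models):
        self.models = models

    def predict(self, batch):
        # Each member returns per-sample predictions; average them element-wise.
        return np.mean([m.predict(batch) for m in self.models], axis=0)
```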
Co-authored-by: Jiao Dong <sophchess@gmail.com>
In https://github.com/ray-project/ray/blob/ray-1.11.0/docker/ray-ml/Dockerfile, the order of pip install commands currently matters (potentially a lot). It would be good to run one big pip install command to avoid ending up with a broken env.
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
checkpoint_tmpxxxxxx directories must not be synced from the worker nodes to the head node.
Co-authored-by: Maxim Egorushkin <maxim.egorushkin@gmail.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
We sometimes end up with stale wheel uploads from previous runs of a Buildkite agent. As a result, commit wheels get overwritten by artifacts from old build jobs, effectively breaking the wheel build logic.
Example:
This Agent: https://buildkite.com/organizations/ray-project/agents/4b955117-2f6c-4849-b703-3457daf69f89
- builds wheels (in post-wheels tests) for a35ebc945b
- and then runs both the Ray CPP worker and the Train + Tune tests in 6746e9f
- Usually these two tests shouldn't upload artifacts at all, but they do - and they are the wheels from a35ebc945b, i.e. uncleaned leftovers from the first build task.
- See here for proof of artifact upload: https://buildkite.com/ray-project/ray-builders-pr/builds/27622#d11bc514-ebd8-4e0c-a2ce-826b9bad27de
The solution is thus to always clean up the artifacts directory on the worker, i.e. `rm -rf /artifact-mount/*`.
This PR adds two such cleanup instructions - once before commands are run and once after artifacts are uploaded. We could probably do just one of the two, but it doesn't hurt to have both.
As reported in #22954, GPU utilization can be unavailable on consumer hardware, so the dashboard should not assume the value cannot be None.
There might be a better way to represent "not reported", but utilizations are currently summed up, which makes it hard to use a non-zero sentinel for "not reported".
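A minimal sketch of the defensive handling, assuming per-GPU records where the utilization value may be None (field names are illustrative):

```python
def total_gpu_utilization(gpus):
    """Sum GPU utilizations, treating unreported (None) values as 0."""
    total = 0.0
    for gpu in gpus:
        utilization = gpu.get("utilization_gpu")
        if utilization is not None:
            total += utilization
    return total
```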
Previously failed with
```
E ray.exceptions.RayTaskError(TypeError): ray::_prepare_read() (pid=166631, ip=10.103.212.102)
E File "/home/swang/ray/python/ray/data/read_api.py", line 902, in _prepare_read
E return ds.prepare_read(parallelism, **kwargs)
E File "/home/swang/ray/python/ray/data/datasource/datasource.py", line 331, in prepare_read
E input_files=None,
E TypeError: __init__() missing 1 required keyword-only argument: 'exec_stats'
```
This PR adds the missing arg.
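A hedged sketch of the fix, assuming the `BlockMetadata` constructor in this Ray version takes a keyword-only `exec_stats` field:

```python
from ray.data.block import BlockMetadata

# No execution stats exist before the read task has actually run,
# so pass exec_stats=None explicitly when constructing lazy metadata.
meta = BlockMetadata(
    num_rows=None,
    size_bytes=None,
    schema=None,
    input_files=None,
    exec_stats=None,  # the previously missing keyword-only argument
)
```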
Add tuner tests.
These tests mainly focus on non-Ray-client mode, covering successful runs, failures on both the driver and trainer side, and resuming.
One issue surfaced through writing the tests (which probably means the API is not quite right): whether RunConfig should be supplied in Tuner.init vs. Tuner.fit(). At least for some fields in RunConfig, we want to be able to change them across runs (e.g. callbacks). Plus, with the current implementation it is not possible to checkpoint "stateful" callbacks, which could confuse our users. cc @ericl for API input. See "test_tuner_with_xgboost_trainer_driver_fail_and_resume" (search for hack).
The PR also cleans up some API docs.
Fixes a bug in loading a trial from checkpoint: get_default_resource (which is probably not necessary anyway, given self.placement_group_factory is already set) was called with an empty config, because self.config is only loaded through __setstate__, which happens after get_default_resource. This PR removes the call to get_default_resource when loading trials from checkpoint.
This PR adds the `setRuntimeEnv` API for submitting a normal task. Usage:
```java
RuntimeEnv runtimeEnv =
    new RuntimeEnv.Builder()
        .addEnvVar("KEY1", "A")
        .build();
/// Return `A`
Ray.task(RuntimeEnvTest::getEnvVar, "KEY1").setRuntimeEnv(runtimeEnv).remote().get();
```
To address the issue https://github.com/ray-project/ray/issues/22824
Basically, the current behavior of `max_retries` in workflows differs from the one in remote functions in the following ways:
1. workflow's max_retries is not the number of retries, but the number of total tries.
2. workflow's max_retries does not allow "-1" (infinite retries) while remote function's max_retries does.
This PR alters the behavior of `max_retries` in workflows to be consistent with `max_retries` in remote functions:
1. make max_retries truly the maximum number of retries (i.e. total tries = original try + max retries)
- [x] implementation
- [x] update logging
- [x] update tests
2. make max_retries accept infinite retries (i.e. `max_retries=-1`), matching the remote-function semantics sketched below
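For reference, a sketch of the remote-function semantics that workflow `max_retries` is aligned with (retries are counted in addition to the original attempt, and -1 retries indefinitely):

```python
import ray

@ray.remote(max_retries=3, retry_exceptions=True)
def flaky_step():
    # 1 original try + up to 3 retries = up to 4 total tries.
    ...

@ray.remote(max_retries=-1, retry_exceptions=True)
def retry_forever():
    # -1 means Ray keeps retrying this task on failure.
    ...
```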
Getting or creating a named actor is a common pattern; however, how to achieve it is somewhat esoteric. Add a utility function, and test that it doesn't cause any scary error messages.
Actor.options(name="my_singleton", get_if_exists=True).remote(args)
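A minimal runnable sketch of the pattern (the `Counter` actor is just an illustration):

```python
import ray

@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

# The first call creates the named actor; later calls with the same name
# return a handle to the existing actor instead of raising an error.
counter = Counter.options(name="my_singleton", get_if_exists=True).remote()
print(ray.get(counter.increment.remote()))
```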
This PR adds a test of KubeRay autoscaler integration to the Ray CI.
- Tests scaling with autoscaler.sdk.request_resources (see the sketch below)
- Tests autoscaler response to RayCluster CR change
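For example, the autoscaler SDK call exercised by the test looks roughly like this (resource amounts are illustrative):

```python
from ray.autoscaler.sdk import request_resources

# Ask the autoscaler for enough capacity to schedule 4 CPUs' worth of work;
# with KubeRay, the operator should add worker pods in response.
request_resources(num_cpus=4)

# Equivalently, request capacity in terms of placement bundles.
request_resources(bundles=[{"CPU": 1}] * 4)
```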
Certain external integrations rely on ray._private.use_gcs_for_bootstrap to determine whether Ray is using the GCS to bootstrap. The current version of Ray always uses the GCS to bootstrap, so this should just return True.
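A minimal sketch of the resulting shim (kept only so external integrations keep working; the exact definition in Ray may differ):

```python
def use_gcs_for_bootstrap() -> bool:
    # Ray now always bootstraps from the GCS, so this is unconditionally True.
    return True
```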
Algolia search no longer overflows on mobile devices, making the nav scrollable again.
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>