From #22954, GPU utilization can be unavailable on consumer hardware, so the dashboard should not assume the value is never None.
There might be a better way to represent "not reported", but utilizations are currently summed up, which makes using a non-zero sentinel value for "not reported" hard to do.
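A minimal sketch of None-safe aggregation (the function name and input shape are illustrative assumptions, not the dashboard's actual schema):

```python
from typing import Optional, Sequence


def total_gpu_utilization(utilizations: Sequence[Optional[float]]) -> float:
    """Sum per-GPU utilization, treating unreported (None) values as 0."""
    return sum(u for u in utilizations if u is not None)
```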
Previously, this failed with:
```
E ray.exceptions.RayTaskError(TypeError): ray::_prepare_read() (pid=166631, ip=10.103.212.102)
E File "/home/swang/ray/python/ray/data/read_api.py", line 902, in _prepare_read
E return ds.prepare_read(parallelism, **kwargs)
E File "/home/swang/ray/python/ray/data/datasource/datasource.py", line 331, in prepare_read
E input_files=None,
E TypeError: __init__() missing 1 required keyword-only argument: 'exec_stats'
```
This PR adds the missing arg.
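A minimal sketch of the fix (the other field values are placeholders; the point is the previously missing keyword-only `exec_stats` argument):

```python
from ray.data.block import BlockMetadata

# Construct BlockMetadata with the keyword-only `exec_stats` argument that was
# previously omitted; None here, since no execution stats exist at read time.
meta = BlockMetadata(
    num_rows=None,
    size_bytes=None,
    schema=None,
    input_files=None,
    exec_stats=None,
)
```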
Add Tuner tests.
These tests mainly focus on non-Ray-client mode, covering successful runs, failures on both the driver and the trainer side, and resuming.
One issue that surfaced while writing the tests (which probably means the API is not quite right) is whether RunConfig should be supplied in Tuner's constructor vs. Tuner.fit(). At least for some fields in RunConfig, we want to be able to change them across runs (e.g. callbacks). Plus, with the current implementation it is not possible to checkpoint "stateful" callbacks, which could confuse our users. cc @ericl for API input. See "test_tuner_with_xgboost_trainer_driver_fail_and_resume" (search for "hack"); a sketch of the two options follows below.
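A hedged sketch of the design question (import paths follow later Ray releases, and the `fit(run_config=...)` signature is hypothetical, shown only to illustrate the alternative):

```python
from ray.tune import Tuner
from ray.air import RunConfig


def trainable(config):
    # Placeholder trainable for illustration.
    pass


# Today: RunConfig is fixed when the Tuner is constructed.
tuner = Tuner(trainable, run_config=RunConfig(name="my_experiment", callbacks=[]))
tuner.fit()

# Alternative raised here: accept (parts of) RunConfig in fit() so that
# fields like callbacks can change across runs / resumes.
# tuner.fit(run_config=RunConfig(callbacks=[...]))  # hypothetical signature
```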
The PR also cleans up some API docs.
Fixes some bugs in loading trials from checkpoints: get_default_resource (which is probably not necessary anyway, given that self.placement_group_factory is already set) was called with an empty config, because self.config is only loaded through __setstate__, which happens after the get_default_resource call. This PR removes the call to get_default_resource when loading trials from checkpoint.
This PR adds the API `setRuntimeEnv` for submitting a normal task. Usage:
```java
RuntimeEnv runtimeEnv =
new RuntimeEnv.Builder()
.addEnvVar("KEY1", "A")
.build();
/// Return `A`
Ray.task(RuntimeEnvTest::getEnvVar, "KEY1").setRuntimeEnv(runtimeEnv).remote().get();
```
To address issue https://github.com/ray-project/ray/issues/22824.
Currently, the behavior of `max_retries` in workflows differs from `max_retries` in remote functions in the following ways:
1. A workflow's max_retries is not the number of retries but the total number of tries.
2. A workflow's max_retries does not allow -1 (infinite retries), while a remote function's max_retries does.
This PR changes the behavior of `max_retries` in workflows to be consistent with `max_retries` in remote functions:
1. Make max_retries truly the maximum number of retries (i.e. total tries = 1 original try + max_retries)
- [x] implementation
- [x] update logging
- [x] update tests
2. Make max_retries accept infinite retries (i.e. `max_retries=-1`); the sketch below summarizes the resulting semantics.
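A short sketch of the reference semantics (the remote-task part is the established API; the workflow behavior is described in comments rather than invented API calls):

```python
import ray


# Remote task semantics (the reference behavior): max_retries counts
# *additional* attempts after the first try, and -1 means retry indefinitely.
@ray.remote(max_retries=-1)
def flaky_task():
    ...


# After this PR, a workflow step's max_retries follows the same convention:
#   total tries = 1 original try + max_retries
#   max_retries = -1  ->  unlimited retries
# e.g. max_retries=3 now means up to 4 total tries, not 3.
```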
Getting or creating a named actor is a common pattern, but how to achieve it is somewhat esoteric. This PR adds a utility for this and a test that it doesn't cause any scary error messages:
```python
Actor.options(name="my_singleton", get_if_exists=True).remote(args)
```
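For context, a hedged before/after comparison (the `Counter` actor is a placeholder):

```python
import ray

ray.init()


@ray.remote
class Counter:
    def __init__(self):
        self.value = 0


# Before: manual get-or-create, which is verbose and racy.
try:
    counter = ray.get_actor("my_singleton")
except ValueError:
    counter = Counter.options(name="my_singleton", lifetime="detached").remote()

# After: a single call that returns the existing actor if it exists,
# and creates it otherwise.
counter = Counter.options(
    name="my_singleton", get_if_exists=True, lifetime="detached"
).remote()
```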
This PR adds a test of KubeRay autoscaler integration to the Ray CI.
- Tests scaling with autoscaler.sdk.request_resources (sketched below)
- Tests autoscaler response to RayCluster CR change
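A brief sketch of the request_resources call exercised by the test (the cluster address is a placeholder):

```python
import ray
from ray.autoscaler.sdk import request_resources

ray.init(address="auto")  # placeholder: connect to the running cluster

# Ask the autoscaler to scale the cluster until at least 4 CPUs are available.
request_resources(num_cpus=4)

# Clear the request so the cluster can scale back down when idle.
request_resources(num_cpus=0)
```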
Certain external integrations rely on ray._private.use_gcs_for_bootstrap to determine whether Ray is using the GCS to bootstrap. The current version of Ray always uses the GCS to bootstrap, so this should simply return True.
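A minimal sketch of the resulting shim (assuming the function keeps its existing name and location):

```python
# Compatibility shim kept in ray._private for external integrations.
def use_gcs_for_bootstrap() -> bool:
    # Ray now always bootstraps from the GCS, so this is unconditionally True.
    return True
```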
Algolia search no longer overflows on mobile devices, making the nav scrollable again.
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
1. Support setting environment variables in the runtime env for a job, e.g.:
```yaml
ray : {
  job : {
    runtime-env: {
      // Environment variables to be set on worker processes in current job.
      "env-vars": {
        // key1: "value11"
        // key2: "value22"
      }
    }
  }
}
```
They can also be set via system properties before `Ray.init()`:
```java
System.setProperty("ray.job.runtime-env.env-vars.KEY1", "A");
System.setProperty("ray.job.runtime-env.env-vars.KEY2", "B");
Ray.init();
```
2. Environment variables set for an actor are merged with the job's environment variables, with the actor's values overwriting any duplicate keys.
```java
System.setProperty("ray.job.runtime-env.env-vars.KEY1", "A");
System.setProperty("ray.job.runtime-env.env-vars.KEY2", "B");
Ray.init();
RuntimeEnv runtimeEnv = new RuntimeEnv.Builder().addEnvVar("KEY1", "C").build();
/// actor1 has the env vars: {"KEY1" : "C", "KEY2" : "B"}
ActorHandle<A> actor1 = Ray.actor(A::new).setRuntimeEnv(runtimeEnv).remote();
/// actor2 has the env vars: {"KEY1" : "A", "KEY2" : "B"}
ActorHandle<A> actor2 = Ray.actor(A::new).remote();
```
This PR adds a ray-storage-based spilling backend, which can be enabled by setting the spill config to `{"type": "ray_storage", "buffer_size": N}`. This causes Ray to spill to the configured storage (a pyarrow FS).
In a future PR, I'll add documentation and deprecate the existing smart_open backend.
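A hedged sketch of enabling the backend (assuming the config is passed through `object_spilling_config` like the other spilling backends, and that the cluster storage is configured via `ray.init(storage=...)`; both are assumptions about wiring, not confirmed by this PR):

```python
import json

import ray

# Spill objects to the configured Ray storage (a pyarrow filesystem),
# buffering writes in chunks of `buffer_size` bytes.
ray.init(
    storage="/tmp/ray_storage",  # placeholder storage URI
    _system_config={
        "object_spilling_config": json.dumps(
            {"type": "ray_storage", "buffer_size": 1024 * 1024}
        )
    },
)
```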
Differentiate between a "resources not available" error and other types of errors.
This happened to me when I was trying out the fake cluster: I was using Ray Client incorrectly, but because we were doing a generic `except Exception`, the error was surfaced as "Timed out waiting for resources".