The PR https://github.com/ray-project/ray/pull/22820 introduced an API breakage for xlang usage: `ray.java_actor_class` has no longer been available since then.
This PR fixes it. We should remove these top-level APIs in 2.0 rather than in a minor version.
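For reference, a minimal sketch of the xlang usage this restores (the Java class name, its `increment` method, and the jar path are hypothetical and would need to be on the JVM classpath):
```python
import ray

# Cross-language calls require the Java code to be discoverable.
ray.init(job_config=ray.job_config.JobConfig(code_search_path=["/path/to/jars"]))

# `ray.java_actor_class` is the top-level xlang API restored by this PR.
counter_class = ray.java_actor_class("io.ray.demo.Counter")  # hypothetical Java class
counter = counter_class.remote()
obj_ref = counter.increment.remote()  # hypothetical Java method
print(ray.get(obj_ref))
```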
This PR adds an RLTrainer to Ray AIR. It works for both offline and online use cases. In offline training, it leverages the `datasets` key of the Trainer API to specify a dataset reader input, used e.g. in Behavioral Cloning (BC). In online training, it is a wrapper around the RLlib trainables, making use of the parameter layering enabled by the Trainer API.
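As a rough illustration, an online-training invocation might look like the following sketch (the module path and parameter names such as `algorithm` and `config` are assumptions, not the finalized API):
```python
from ray.train.rl import RLTrainer  # assumed module path

# Online training: wrap an RLlib trainable and train a PPO policy.
trainer = RLTrainer(
    algorithm="PPO",                 # assumed parameter name
    config={"env": "CartPole-v1"},   # forwarded to the RLlib trainable
)
result = trainer.fit()
print(result.metrics)

# Offline training (e.g. BC) would instead pass a dataset reader via the
# `datasets` key of the Trainer API, e.g. datasets={"train": ray.data.read_json(...)}.
```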
Using the local rank as the device id only works if there is exactly 1 GPU per worker. Instead, we should use `ray.get_gpu_ids()` to determine which GPU device to use for the worker.
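A hedged sketch of picking the device from the assigned GPU IDs rather than from the local rank:
```python
import ray
import torch

@ray.remote(num_gpus=1)
def train_step():
    # IDs of the GPUs Ray assigned to this worker (not necessarily the local rank).
    gpu_ids = ray.get_gpu_ids()
    # Assumption: CUDA_VISIBLE_DEVICES exposes the node's GPUs in order to this
    # worker, so the assigned ID can be used directly as the torch device index.
    device = torch.device(f"cuda:{gpu_ids[0]}")
    return str(device)

print(ray.get(train_step.remote()))
```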
Adding a large-scale nightly test for Datasets random_shuffle and sort. The test script generates random blocks and reports total run time and peak driver memory.
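A minimal sketch of what such a test does (the row count is illustrative, not the value used in the nightly test, and `ray.data.range` stands in for the test's random block generator):
```python
import time
import ray

ray.init()

start = time.time()
ds = ray.data.range(10_000_000).random_shuffle()
print("random_shuffle took", time.time() - start, "seconds")

start = time.time()
ds.sort()
print("sort took", time.time() - start, "seconds")
```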
Ray uses the GCS in-memory store by default instead of Redis, which causes the gloo group to not work out of the box.
In this PR, we use Ray's internal kv as the store for the gloo group, replacing the default RedisStore, so that the gloo group works correctly.
This PR depends on another PR in pygloo https://github.com/ray-project/pygloo/pull/10
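A rough sketch of the idea, backing the rendezvous store with Ray's internal kv (the `GlooKVStore` wrapper and its method names are hypothetical; the real store interface lives in pygloo):
```python
import ray
from ray.experimental.internal_kv import _internal_kv_get, _internal_kv_put

ray.init()  # internal kv is available once Ray is initialized

class GlooKVStore:
    """Hypothetical store backed by Ray internal kv instead of Redis."""

    def __init__(self, group_name: str):
        self._prefix = f"gloo:{group_name}:"

    def set(self, key: str, value: bytes) -> None:
        _internal_kv_put(self._prefix + key, value, overwrite=True)

    def get(self, key: str) -> bytes:
        return _internal_kv_get(self._prefix + key)
```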
We want to limit the maximum memory used for each subscribed entity, e.g. to avoid having GCS run out of memory if some workers publish a huge amount of logs and subscribers cannot keep up.
After this change, Ray publishers maintain one message buffer for all subscribers of an entity, and one message buffer for all subscribers of all entities in the channel.
The limit can be configured with `publisher_entity_buffer_max_bytes`. The default value is 10MiB.
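For example, a sketch of overriding the default on the head node via `_system_config` (assuming the setting is exposed as a system config key of the same name):
```python
import ray

# Cap each subscribed entity's buffered messages at 1 MiB instead of the 10 MiB default.
ray.init(_system_config={"publisher_entity_buffer_max_bytes": 1024 * 1024})
```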
As we (@scv119 @raulchen @Chong-Li @WangTaoTheTonic) discussed offline, and as @scv119 also mentioned in #23323 (comment), this PR refactors the `ISchedulingPolicy` interface so that it exposes only one batch interface, while still providing a `SingleSchedulePolicy` for compatibility with the single scheduler.
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
1. The Dataset pipeline is an advanced usage of Ray Dataset, which should not be jammed into the Getting Started page
2. We already have a separate, dedicated page called Pipelining Compute that covers the same content
What: If BUILDKITE_PULL_REQUEST_REPO is an empty string, default to DEFAULT_REPO
Why: BUILDKITE_PULL_REQUEST_REPO is set to an empty string by default, so we're currently not detecting the Buildkite repo correctly in branched builds.
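A minimal sketch of the fallback logic (the `DEFAULT_REPO` value here is illustrative):
```python
import os

DEFAULT_REPO = "https://github.com/ray-project/ray.git"

# An empty string is falsy, so `or` falls back to the default repo in branched builds.
repo = os.environ.get("BUILDKITE_PULL_REQUEST_REPO", "") or DEFAULT_REPO
```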
Replaces FLAML searchers with a dummy class that throws an informative error on init if FLAML is not installed, removes the ConfigSpace import in the BOHB example code, and adds a note to examples using external dependencies.
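A sketch of the pattern (the searcher names at the bottom are illustrative):
```python
try:
    import flaml  # noqa: F401
    FLAML_INSTALLED = True
except ImportError:
    FLAML_INSTALLED = False

class _DummyFLAMLSearcher:
    """Placeholder that raises an informative error when FLAML is missing."""

    def __init__(self, *args, **kwargs):
        raise ImportError(
            "FLAML must be installed to use this searcher. "
            "Install it with `pip install flaml`."
        )

if not FLAML_INSTALLED:
    BlendSearch = CFO = _DummyFLAMLSearcher  # illustrative searcher names
```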
The `Application` class is stored in `api.py`. The object is relatively standalone and is used as a dependency in other classes, so this change moves `Application` (and `ImmutableDeploymentDict`) to a new file, `application.py`.
When deployments fail to update, [Serve sets their status to UNHEALTHY and logs the error message](46465abd6d/python/ray/serve/deployment_state.py (L1507-L1511)). However, the message lacks a traceback, making it impossible to find what caused it. [For example](https://console.anyscale.com/o/anyscale-internal/projects/prj_2xR6uT6t7jJuu1aCwWMsle/clusters/ses_SfGPJq8WWJUhAvmHHsDgJWUe?command-history-section=command_history):
```
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/serve/api.py", line 328, in _wait_for_deployment_healthy
raise RuntimeError(f"Deployment {name} is UNHEALTHY: {status.message}")
RuntimeError: Deployment echo is UNHEALTHY: Failed to update deployment:
'>' not supported between instances of 'NoneType' and 'int'.
```
It's not clear where `'>' not supported between instances of 'NoneType' and 'int'.` is being triggered.
The change includes the full traceback for this type of update failure. The new status message is easier to debug:
```
File "/Users/shrekris/Desktop/ray/python/ray/serve/api.py", line 328, in _wait_for_deployment_healthy
raise RuntimeError(f"Deployment {name} is UNHEALTHY: {status.message}")
RuntimeError: Deployment A is UNHEALTHY: Failed to update deployment:
Traceback (most recent call last):
File "/Users/shrekris/Desktop/ray/python/ray/serve/deployment_state.py", line 1503, in update
running_replicas_changed |= self._check_and_update_replicas()
File "/Users/shrekris/Desktop/ray/python/ray/serve/deployment_state.py", line 1396, in _check_and_update_replicas
a = 1/0
ZeroDivisionError: division by zero
```
(I forced a divide-by-zero error to get this traceback).
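The pattern is simply to capture the traceback when the update fails and embed it in the status message; a hedged sketch (the helper name is hypothetical, not the actual Serve code):
```python
import traceback

def _build_status_message(update_fn) -> str:
    """Hypothetical helper: run an update and report failures with a full traceback."""
    try:
        update_fn()
        return "Deployment updated successfully."
    except Exception:
        # traceback.format_exc() includes the file, line, and exception type,
        # which is what makes the new status message debuggable.
        return "Failed to update deployment:\n" + traceback.format_exc()
```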
This change adds new unit tests and an error message to `_store_package_in_gcs()`. In particular, it tests the function's behavior when it fails to connect to the GCS.
Use spot instances for chaos tests.
We can also experiment with other tests that aren't supposed to have dead nodes, but let's do that once the nightly infra is stabilized.
Currently, the github issue templates are quite verbose, and as a result few developers end up using them. Clean them up so that they can be the standard workflow for all Ray contributors.
What: Long running tests should use sdk file manager
Why: The job submission server seems to crash under load; using the sdk file manager ensures we can still fetch results after a run.
What: Raise meaningful exceptions when invalid parameters are passed.
Why: We want to catch invalid parameters and guide users to use the API in the correct way.
Adds a dynamic property to easily obtain the `config` dict from a `Result`, and extends the `ResultGrid.get_best_config` method for parity with `ExperimentAnalysis.get_best_trial` (allowing a metric and mode different from the ones set in the Tuner).
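A sketch of the intended usage (assuming `result_grid` is the `ResultGrid` returned by `Tuner.fit()`, and that the keyword names mirror `ExperimentAnalysis.get_best_trial`):
```python
# Pick the best config by a metric/mode other than the one configured in the Tuner.
best_config = result_grid.get_best_config(metric="mean_loss", mode="min")
print(best_config)
```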
Currently, the Datasets primitives `repartition`, `groupby`, `sort`, and `random_shuffle` all use different internal shuffle implementations. This PR unifies them on a single internal `ShuffleOp` class. This class exposes static map and reduce methods that must be implemented by each specific higher-level primitive. The `ShuffleOp.execute` method then implements a simple pull-based shuffle by submitting one map task per input block and one reduce task per output block.
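A hedged, toy sketch of this pull-based pattern using plain Ray tasks (the function names are illustrative, not the actual `ShuffleOp` API):
```python
import ray

ray.init()

@ray.remote
def shuffle_map(block, num_reducers):
    # Partition one input block into one piece per reducer.
    return tuple(
        [x for x in block if x % num_reducers == r] for r in range(num_reducers)
    )

@ray.remote
def shuffle_reduce(*pieces):
    # Combine one piece from every map task into a single sorted output block.
    return sorted(x for piece in pieces for x in piece)

input_blocks = [list(range(i, 100, 4)) for i in range(4)]  # 4 input blocks
num_reducers = 2

# One map task per input block; each map task returns num_reducers object refs.
map_refs = [
    shuffle_map.options(num_returns=num_reducers).remote(block, num_reducers)
    for block in input_blocks
]
# One reduce task per output block, pulling its piece from every map task.
output_blocks = ray.get([
    shuffle_reduce.remote(*[map_refs[m][r] for m in range(len(input_blocks))])
    for r in range(num_reducers)
])
print(output_blocks)
```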
Closes #23593.
If rsync/ssh is not available (as in kubernetes setups), Tune previously had no fallback mechanism to synchronize trial directories to the driver. This PR introduces a `RemoteTaskSyncer` trial syncer that uses ray remote tasks to ship file contents between nodes. The implementation utilizes tarfile to compress files for transfer, and it only transfers files that differ between the source and target directory to minimize network bandwidth usage.
The trial syncer works as follows:
1. It collects information about existing files in the target directory. This directory could be remote (when syncing up) or local (when syncing down).
2. It then schedules a `pack` task on the source node. This will always be a remote task so that the future can be passed to the unpack task. The pack task only packs files that are missing or different in the target directory into a tarfile, which is returned as a byte string.
3. An `unpack` task is scheduled on the target node. This will always be a remote task so that the future can be awaited in a call to `wait()` (a minimal sketch of this pack/unpack flow is included below).
A test is added to ensure that only modified files are transferred on subsequent sync ups/downs.
Finally, minor changes are made to the `Syncer`/`NodeSyncer` classes to allow passing `(ip, path)` tuples rather than rsync-style remote paths.
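The sketch referenced above: a minimal, hypothetical illustration of the pack/unpack flow (the real `RemoteTaskSyncer` additionally diffs files against the target directory and pins the tasks to specific nodes; the paths below are illustrative):
```python
import io
import os
import tarfile
import ray

ray.init()

@ray.remote
def pack(source_dir: str) -> bytes:
    # Pack the source directory into an in-memory tarball and return its bytes.
    buffer = io.BytesIO()
    with tarfile.open(fileobj=buffer, mode="w:gz") as tar:
        tar.add(source_dir, arcname=".")
    return buffer.getvalue()

@ray.remote
def unpack(tar_bytes: bytes, target_dir: str) -> None:
    # Unpack the tarball into the target directory on the receiving node.
    os.makedirs(target_dir, exist_ok=True)
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        tar.extractall(target_dir)

# Usage: ship /tmp/trial_dir to another directory (or node) via the object store.
packed = pack.remote("/tmp/trial_dir")
ray.get(unpack.remote(packed, "/tmp/trial_dir_copy"))
```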