Especially when using datasets, we sometimes run into very long string representations. Tune should make sure to truncate these to a specified maximum length.
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
This PR includes the changes from #24172
This PR adds an end-to-end training and serving example for the RLTrainer/RLPredictor. It also adds an `RLServeEnv` that can be used as an external env for RLlib inference, querying the served policy from the RLPredictor.
This draft PR runs end to end, but I'd like to gather some initial feedback before promoting it to a full PR.
A user [complains](https://discuss.ray.io/t/which-attributes-can-be-used-in-checkpoint-score-attr-when-using-tune-run/5826) that the log message emitted when `checkpoint_score_attr` cannot be located in the results dict is not informative.
I propose that we log the actual keys of the results dict, as well as the extended stopping criteria; in my opinion we should not log the whole result dict, as it might contain tensors.
There may be other similar cases in the Tune library, but I don't know my way around it well enough to say.
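For illustration, the improved message could look roughly like this (a hypothetical sketch; the function and variable names are made up):

```
def validate_checkpoint_score_attr(checkpoint_score_attr: str, result: dict) -> None:
    # Hypothetical helper: surface the available keys instead of a generic error.
    if checkpoint_score_attr not in result:
        raise ValueError(
            f"Could not find `checkpoint_score_attr` ({checkpoint_score_attr!r}) "
            f"in the result dict. Available keys are: {sorted(result)}"
        )
```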
This PR fixes a typo in the KubeRay example config in Ray's docs.
Specifics:
Ray versions in the Ray repo's example KubeRay CR were recently updated from 1.11.0 to 1.12.0.
However, the worker group's Ray version was accidentally left at 1.11.0. This leads to alarming crash-looping when deploying the example in the docs.
This PR matches up the Ray images by setting the worker group to `rayproject/ray:1.12.0`.
* fix `init()` requiring a hardcoded storage path when connecting to an existing cluster (see the sketch after this list)
* update tests with the new `init(storage)` behavior
* update tests with the latest API behavior
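A minimal sketch of the intended behavior, assuming the experimental `storage` argument to `ray.init()` (the path below is made up for illustration):

```
import ray

# After the fix, connecting to an existing cluster should no longer require
# re-specifying (hardcoding) the cluster's storage path on the client side.
ray.init(address="auto")

# Starting a fresh local cluster still accepts an explicit storage path.
# ray.init(storage="/tmp/ray_storage")
```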
Make sure users can read CSV files with column types specified.
Users may want to do this because PyArrow's type inference sometimes doesn't work as intended, in which case they can step in and override the inferred types.
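For example, something along these lines should work (a sketch that assumes `read_csv` forwards extra keyword arguments to PyArrow's CSV reader; the file and column names are made up):

```
import pyarrow as pa
from pyarrow import csv
import ray

# Override PyArrow's inferred types for specific columns (made-up schema).
ds = ray.data.read_csv(
    "example.csv",
    convert_options=csv.ConvertOptions(
        column_types={"id": pa.string(), "value": pa.float64()},
    ),
)
```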
Jackson is a widely used utility. A user from Ant reports that the Jackson classes conflict between the Ray jar and the user's jar.
This PR shades Jackson in the Ray jar to avoid the conflict.
Co-authored-by: Kai Yang <kfstorm@outlook.com>
Adds a `from_huggingface` method to Datasets, which allows the conversion of a Hugging Face Dataset to a Ray Dataset. As a Hugging Face Dataset is backed by an Arrow table, the conversion is trivial.
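A minimal usage sketch, assuming the new method is exposed as `ray.data.from_huggingface` (the dataset name is only an example):

```
import ray
from datasets import load_dataset  # Hugging Face datasets

# Load a Hugging Face Dataset and convert it to a Ray Dataset.
hf_ds = load_dataset("imdb", split="train")
ray_ds = ray.data.from_huggingface(hf_ds)
```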
The documentation says that `@ray.remote` can take a fractional `num_gpus`, which is true, but the documentation lists it as an integer. I think this is strictly a problem in the docs.
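For reference, fractional values already work at runtime; a minimal sketch (assumes a machine with at least one GPU):

```
import os

import ray

ray.init(num_gpus=1)

# Two of these tasks can share a single GPU.
@ray.remote(num_gpus=0.5)
def use_half_a_gpu():
    return os.environ.get("CUDA_VISIBLE_DEVICES")

print(ray.get([use_half_a_gpu.remote() for _ in range(2)]))
```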
This PR:
- adds an example of how to run Ray Train and log results to Weights & Biases
- adds functionality to the W&B plugin to store checkpoints
- fixes a bug introduced in #24017
- Adds a CI utility script to set up credentials
- Adds a CI utility script to remove test state from external services (cc @simon-mo)
Improves Tune Jupyter notebook experience by modifying the `JupyterNotebookReporter` in two ways:
* Previously, the `overwrite` flag controlled whether the entire cell would be overwritten with the updated table. This caused all the other logs to be cleared. Now, we use IPython display handle functionality (sketched below) to create a table at the top of the cell and update only that, preserving the rest of the output. The `overwrite` flag now controls whether the cell output *prior* to the initialization of `JupyterNotebookReporter` is overwritten or not.
* The Ray Client detection was not working unless the user specifically passed a `JupyterNotebookReporter` as the `progress_reporter`. Now, the default value allows for correct detection of the environment while running Ray Client.
Furthermore, the progress reporter detection logic in `rllib/train.py` has been replaced to make use of the `detect_reporter` function for consistency with Tune (the sign in the overwrite condition was similarly flipped).
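For context, the underlying IPython mechanism looks roughly like this (a standalone sketch, not the actual Tune code):

```
from IPython.display import HTML, display

# Create a display handle once at the top of the cell output...
handle = display(HTML("<i>waiting for results...</i>"), display_id=True)

# ...then update only that handle later, leaving the rest of the cell output intact.
handle.update(HTML("<b>updated trial table</b>"))
```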
When this line tries to format the task into the string, it also attempts to format all of the serialized arguments passed to the task, which can be memory intensive, even if the debug log never gets displayed. Switch to only logging the task name, type and payload_id.
Repro script if you want to see how big a difference commenting out the debug log makes (takes up about 8GiB swap on my machine):
```
import ray
import numpy as np
import logging
ray.init("ray://localhost:10001")
@ray.remote
def run_ray_remote(np_array):
    return np_array.shape
a = np.random.random((1024, 1024, 128)) # approx 1GiB
b = run_ray_remote.remote(a)
c = ray.get(b)
print(c)
```
This PR adds a utility script to automatically fetch release test results from the Buildkite pipeline for a release branch. This was previously a manual process.
Found many log messages about "Not enough memory to create requested object ..." when running shuffle tests, even when object store memory is far from full.
It seems that when `ObjectBufferPool::AbortCreate()` is called, the Raylet logs "Not enough memory to create requested object ...". However, `ObjectBufferPool::AbortCreate()` is called from 3 different codepaths:
1. `ObjectManager::ReceiveObjectChunk()`
2. `PullManager::UpdatePullsBasedOnAvailableMemory()` -> `cancel_pull_request_`
3. `PullManager::CancelPull()` -> `cancel_pull_request_`
Only codepath (2) is due to not having enough object store memory. So the logging in `ObjectBufferPool::AbortCreate()` is moved to the callsites instead, which have more context about the situation and can log more accurate messages.
Also, change the logging in codepath (3) to DEBUG, because it is expected behavior and can be quite spammy when running shuffle / sort workloads.
Although there is enough quota, it is possible that AWS doesn't have enough capacity to start up new nodes. According to @allenyin55, the current wait-for-node timeout is too short. This PR increases the timeout from 600 seconds to 3000 seconds (50 minutes). Let's see if this resolves the issue. If it makes things worse, I will revert it quickly (I will closely monitor the infra failure rate).