Logs a warning when a user sets max_workers for the local node provider to less than the number of available IPs.
Also removes the defaults of 0 for min_workers and max_workers from the example configs, to help prevent users from inadvertently setting max_workers=0 again.
Currently, job drivers cannot use GPUs because `CUDA_VISIBLE_DEVICES` is set (there is no resource request for the job driver's supervisor actor). This is a regression from `ray submit`.
This is a temporary workaround -- in the future we should support a resource request for the job supervisor actor.
Especially when using datasets, we sometimes run into very long string representations. Tune should make sure to truncate these to a specified maximum length (see the sketch below).
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
This PR includes the changes from #24172
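A minimal sketch of the intended truncation, assuming a helper like the one below (the name and default length are illustrative, not Tune's actual API):
```
def truncate_repr(value, max_length: int = 200) -> str:
    """Cut an overly long string representation down to at most max_length characters."""
    text = repr(value)
    if len(text) <= max_length:
        return text
    return text[: max_length - 3] + "..."
```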
This PR adds an end-to-end training and serving example for the RLTrainer/RLPredictor. It also adds an `RLServeEnv` that can be used as an external env for RLlib inference, querying the served policy from the RLPredictor.
This draft PR runs end to end, but I'd like to gather some initial feedback before promoting it to a full PR.
[A user complains](https://discuss.ray.io/t/which-attributes-can-be-used-in-checkpoint-score-attr-when-using-tune-run/5826) that the logging on failure to locate `checkpoint_score_attr` in the results dict is not informative.
I propose that we log the actual keys of the results dict (and similarly for the stopping criteria); imho this should not log the whole result dict, as it might contain tensors.
There may be other similar cases in the Tune library, but I don't know my way around it well enough to say.
This PR fixes a typo in the KubeRay example config in Ray's docs.
Specifics:
Ray versions in the Ray repo's example KubeRay CR were recently updated from 1.11.0 to 1.12.0.
However, the worker group's Ray version was accidentally left at 1.11.0. This leads to alarming crash-looping when deploying the example in the docs.
This PR matches up the Ray images by setting the worker group's image to rayproject/ray:1.12.0.
* fix: init() required a hardcoded storage path when connecting to an existing cluster
* update tests to reflect the new init(storage) behavior
* update tests to reflect the latest API behavior
Make sure users can read CSV files with column types specified.
Users may want to do this because sometimes PyArrow's type inference doesn't work as intended, in which case they can step in and override the inferred types.
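As a rough usage sketch, assuming the column types are forwarded to PyArrow via `convert_options` (the path and column names are placeholders):
```
import pyarrow as pa
from pyarrow import csv
import ray

# Override PyArrow's type inference for specific columns.
convert_options = csv.ConvertOptions(
    column_types={"user_id": pa.string(), "score": pa.float64()}
)
ds = ray.data.read_csv("data.csv", convert_options=convert_options)
```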
Adds a from_huggingface method to Datasets, which allows the conversion of a Hugging Face Dataset to a Ray Dataset. As a Hugging Face Dataset is backed by an Arrow table, the conversion is trivial.
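A quick usage sketch (the dataset name and split are placeholders):
```
import ray
from datasets import load_dataset  # Hugging Face `datasets` library

# Since a Hugging Face Dataset is backed by an Arrow table, the conversion is trivial.
hf_ds = load_dataset("imdb", split="train")
ray_ds = ray.data.from_huggingface(hf_ds)
```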
The documentation says that @ray.remote can take fractional num_gpus, which is true, but the documentation lists it as an integer. I think this is strictly a problem in the docs.
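For reference, a fractional GPU request looks like this (0.25 is just an example value):
```
import ray

ray.init(num_gpus=1)

# Four such tasks can share a single GPU.
@ray.remote(num_gpus=0.25)
def infer(batch):
    return batch

print(ray.get([infer.remote(i) for i in range(4)]))
```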
This PR
- adds an example of how to run Ray Train and log results to Weights & Biases
- adds functionality to the W&B plugin to store checkpoints
- fixes a bug introduced in #24017
- Adds a CI utility script to set up credentials
- Adds a CI utility script to remove test state from external services cc @simon-mo
Improves Tune Jupyter notebook experience by modifying the `JupyterNotebookReporter` in two ways:
* Previously, the `overwrite` flag controlled whether the entire cell would be overwritten with the updated table. This caused all the other logs to be cleared. Now, we use IPython display handle functionality to create a table at the top of the cell and update only that, preserving the rest of the output (see the sketch below). The `overwrite` flag now controls whether the cell output *prior* to the initialization of `JupyterNotebookReporter` is overwritten or not.
* The Ray Client detection was not working unless the user specifically passed a `JupyterNotebookReporter` as the `progress_reporter`. Now, the default value allows for correct detection of the environment while running Ray Client.
Furthermore, the progress reporter detection logic in `rllib/train.py` has been replaced to make use of the `detect_reporter` function for consistency with Tune (the sign in the overwrite condition was similarly flipped).
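A minimal sketch of the display-handle pattern referenced in the first bullet above (not the actual `JupyterNotebookReporter` code; the HTML content is a placeholder):
```
from IPython.display import HTML, display

# Create a dedicated output area once; anything printed afterwards appears below it.
handle = display(HTML("<i>Waiting for first results...</i>"), display_id=True)

# Later, refresh only that output area instead of clearing the whole cell.
handle.update(HTML("<table><tr><td>trial_1</td><td>RUNNING</td></tr></table>"))
```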
When this line tries to format the task into the string, it also attempts to format all of the serialized arguments passed to the task, which can be memory intensive, even if the debug log never gets displayed. Switch to only logging the task name, type and payload_id.
Repro script, if you want to see how big a difference commenting out the debug log makes (it takes up about 8 GiB of swap on my machine):
```
import ray
import numpy as np
import logging

ray.init("ray://localhost:10001")


@ray.remote
def run_ray_remote(np_array):
    return np_array.shape


a = np.random.random((1024, 1024, 128))  # approx 1 GiB
b = run_ray_remote.remote(a)  # the large array is serialized as a task argument
c = ray.get(b)
print(c)
```
#24421 increased the default maximum gRPC message size to 250 MB, which broke a Tune test that catches too-large training functions.
This PR fixes the test by increasing the size of the experiment. However, please note that this leads to an inconsistency: for training functions between 100 MB and 250 MB, an error will be raised only at runtime, when trying to start the actor:
```
ValueError: The actor ImplicitFunc is too large (125 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
```
But it will successfully pass the registration stage `self._run_identifier = Experiment.register_if_needed(run)`.
cc @ericl: should we set the default limit back to 100 MB, or maybe set the FUNCTION_SIZE_ERROR_THRESHOLD to 250 MB (or whatever the gRPC limit is)?
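For context, the tip in that error message refers to the following pattern (a generic sketch, not the code of the affected test):
```
import numpy as np
import ray

ray.init()

large_array = np.zeros((1024, 1024, 32))

# Put the large object into the object store and pass the reference,
# instead of letting the remote function capture `large_array` implicitly.
large_ref = ray.put(large_array)

@ray.remote
def train(array):
    # Ray resolves the ObjectRef argument to the underlying array automatically.
    return float(array.sum())

print(ray.get(train.remote(large_ref)))
```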
The test is flaky because we schedule the g task without waiting for the f task to complete (because f_obj is embedded inside a list), so we may not have the locality information for f_obj from the owner during g task scheduling (see the sketch below).
Related issue number
Closes #23964
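A hypothetical sketch of the scenario (the bodies of `f` and `g` are placeholders, and the explicit `ray.wait` is just one way to make the test deterministic):
```
import ray

ray.init()

@ray.remote
def f():
    return 1

@ray.remote
def g(refs):
    # f_obj arrives nested in a list, so Ray does not wait for it before scheduling g.
    return ray.get(refs[0])

f_obj = f.remote()
# Waiting here ensures f has finished, so the owner knows where f_obj lives
# before g is scheduled.
ray.wait([f_obj])
print(ray.get(g.remote([f_obj])))
```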
test_usage_stats was very flaky due to a runtime env setup error.
The test defined the runtime env {pip: "ray[serve]"} simultaneously in ray.init() and also in ray.remote() for the actor. This is redundant but should be supported by runtime_env; however, it turns out it reveals a bug in runtime_env: the env appears to be installed twice concurrently in this situation, causing the flakiness.
I'll make a followup issue for the runtime env bug with more details and a simpler repro, and link it here. Until then, we should merge this PR to deflake CI. This PR only defines the runtime_env in ray.init(), and removes the redefinition in ray.remote(). The actor will still inherit the correct runtime environment.
I tested manually by inspecting dashboard_agent.log locally. The virtualenv install commands were duplicated about 75% of the time before this PR, indicating the concurrent install. But with this change, the commands were never duplicated in the 7-8 times that I ran it. So this PR should deflake the test.
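Roughly, the change amounts to the following (the actor name is a placeholder, and the pip spec is written as a list here):
```
import ray

# Define the runtime environment once at init time...
ray.init(runtime_env={"pip": ["ray[serve]"]})

# ...and do not repeat runtime_env on the actor; it inherits it from the job.
@ray.remote
class StatsActor:
    def ping(self):
        return "ok"

actor = StatsActor.remote()
print(ray.get(actor.ping.remote()))
```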
The link to the documentation for troubleshooting `TypeError: Could not serialize the function ...` is broken. This PR fixes the link by replacing it with the correct URL.
For better debugging, we should print the installed pip packages in the buildkite annotations. Additionally, shorten the summary message to make the output less cluttered.
All workflow tasks are executed as remote functions submitted from WorkflowManagementActor. WorkflowManagementActor is a detached, long-running actor whose owner is the first driver in the cluster that ran the very first workflow execution. Therefore, for new drivers that run workflows, the logs are not properly published back to the driver, because logs are saved and published based on job_id, and that job_id is always the first driver's, since the ownership chain is: first_driver -> WorkflowManagementActor -> workflow executions using remote functions.
To solve this, during workflow execution we pass the actual driver's job_id along with the execution and re-configure the logging files on each worker that runs the remote functions. Note that we need to do this in multiple places, as a workflow task is executed with more than one remote function, which run in different workers.
The previous implementation of the reporting logic in HuggingFaceTrainer had a few edge cases that caused the training iterations and measured epochs to diverge. This new implementation should ensure that reporting is consistent.
Currently, the ServeHead level talks to the Serve API and the controller to do deployment and cleanup. With this PR, the deployment cleanup logic is hidden inside serve.run() for code cleanliness and to make future refactoring easier.