When this line tries to format the task into the string, it also attempts to format all of the serialized arguments passed to the task, which can be memory-intensive even if the debug log never gets displayed. Switch to logging only the task name, type, and payload_id.
Repro script, if you want to see how big a difference commenting out the debug log makes (it takes up about 8 GiB of swap on my machine):
```python
import ray
import numpy as np

ray.init("ray://localhost:10001")

@ray.remote
def run_ray_remote(np_array):
    return np_array.shape

a = np.random.random((1024, 1024, 128))  # approx 1 GiB
b = run_ray_remote.remote(a)
c = ray.get(b)
print(c)
```
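A minimal sketch of the logging change, assuming the task object exposes the `name`, `type`, and `payload_id` fields mentioned above; the surrounding function and logger are illustrative, not the actual Ray client code:
```python
import logging

logger = logging.getLogger(__name__)

def schedule_task(task):
    # Before (expensive): interpolating the whole task also formats every
    # serialized argument, even if DEBUG logging is disabled.
    # logger.debug(f"Scheduling task {task}")

    # After (cheap): log only the identifying fields.
    logger.debug(
        "Scheduling task %s %s %s", task.name, task.type, task.payload_id
    )
```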
#24421 increased the default maximum gRPC limit to 250 MB, which broke a Tune test that catches too-large training functions.
This PR fixes the test by increasing the size of the experiment. However, note that this leads to an inconsistency: for training functions between 100 MB and 250 MB, an error will be raised only at runtime when trying to start the actor:
```
ValueError: The actor ImplicitFunc is too large (125 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
```
But it will successfully pass the registration stage `self._run_identifier = Experiment.register_if_needed(run)`.
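A hedged sketch of that inconsistency, assuming a function trainable that implicitly captures a ~125 MiB array (names and sizes are illustrative):
```python
import numpy as np
from ray import tune

# ~125 MiB of float64 data captured by the trainable's closure.
big_array = np.random.random((125 * 1024 * 1024 // 8,))

def trainable(config):
    tune.report(mean=float(big_array.mean()))

# Registration (Experiment.register_if_needed) succeeds because the function
# is under the 250 MB gRPC limit, but actor creation later fails with the
# FUNCTION_SIZE_ERROR_THRESHOLD ValueError shown above.
tune.run(trainable)
```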
cc @ericl should we set the default limit back to 100 MB, or maybe set FUNCTION_SIZE_ERROR_THRESHOLD to 250 MB (or whatever the gRPC limit is)?
The test is flaky because we schedule the g task without waiting for the f task to complete (because f_obj is embedded inside a list), so we may not have the locality information for f_obj from its owner when scheduling the g task.
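A hypothetical sketch of the pattern (the function bodies are illustrative): because `f_obj` is nested inside a list rather than passed as a direct argument, `g` can be scheduled before `f` finishes and before the owner has reported locality information for `f_obj`:
```python
import ray

@ray.remote
def f():
    return 1

@ray.remote
def g(refs):
    # The nested ObjectRef is resolved manually inside the task.
    return ray.get(refs[0])

f_obj = f.remote()
# f_obj is embedded in a list, so it is not treated as a direct dependency
# and g may be scheduled without waiting for f to complete.
result = ray.get(g.remote([f_obj]))
```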
Related issue number
Closes #23964
test_usage_stats was very flaky due to a runtime env setup error.
The test defined the runtime env `{"pip": "ray[serve]"}` simultaneously in ray.init() and in ray.remote() for the actor. This is redundant but should be supported by runtime_env; however, it turns out to reveal a bug in runtime_env: the env appears to be installed twice concurrently in this situation, causing flakiness.
I'll make a followup issue for the runtime env bug with more details and a simpler repro, and link it here. Until then, we should merge this PR to deflake CI. This PR only defines the runtime_env in ray.init(), and removes the redefinition in ray.remote(). The actor will still inherit the correct runtime environment.
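A sketch of the change, with illustrative names rather than the actual test code: the runtime_env is defined once in ray.init() and inherited by the actor.
```python
import ray

runtime_env = {"pip": ["ray[serve]"]}
ray.init(runtime_env=runtime_env)

# Before: passing runtime_env here as well triggered a second, concurrent
# install of the same environment.
# @ray.remote(runtime_env=runtime_env)
@ray.remote
class StatsActor:
    def ping(self):
        return "ok"

actor = StatsActor.remote()  # inherits the job-level runtime_env
```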
I tested manually by inspecting dashboard_agent.log locally. The virtualenv install commands were duplicated about 75% of the time before this PR, indicating the concurrent install. But with this change, the commands were never duplicated in the 7-8 times that I ran it. So this PR should deflake the test.
The link to the documentation to troubleshoot TypeError: Could not serialize the function ... is broken. This PR fixes the link by replacing it with the correct URL.
For better debugging, we should print the installed pip packages in the buildkite annotations. Additionally, shorten the summary message to make the output less cluttered.
All workflow tasks are executed as remote functions submitted from WorkflowManagementActor. WorkflowManagementActor is a detached, long-running actor whose owner is the first driver in the cluster that runs the very first workflow execution. Therefore, for new drivers that run workflows, logs won't be properly published back to the driver, because logs are saved and published based on job_id, and the job_id is always the first driver's job_id, since ownership goes first_driver -> WorkflowManagementActor -> workflow executions using remote functions.
To solve this, during workflow execution we pass the actual driver's job_id along with the execution and re-configure the logging files on each worker that runs the remote functions. Note that we need to do this in multiple places, as a workflow task is executed with more than one remote function, running in different workers.
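A purely illustrative sketch of the idea, not the actual workflow internals (the helper and log path are hypothetical):
```python
import logging
import ray

def configure_worker_logging(job_id: str) -> None:
    # Hypothetical helper: redirect this worker's logs so they are saved and
    # published under the submitting driver's job_id.
    handler = logging.FileHandler(f"worker-{job_id}.log")
    logging.getLogger().addHandler(handler)

@ray.remote
def run_workflow_step(step_fn, job_id: str):
    # The actual driver's job_id travels with the execution so logs reach the
    # right driver instead of the first driver that created the
    # WorkflowManagementActor.
    configure_worker_logging(job_id)
    return step_fn()
```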
The previous implementation of the reporting logic in HuggingFaceTrainer had a few edge cases that caused the training iterations and measured epochs to diverge. This new implementation should ensure that reporting is consistent.
At the ServeHead level, it currently talks to the Serve API and the controller to do deployment and cleanup. With this PR, the deployment cleanup logic is hidden inside server.run() for code cleanliness and easier refactoring in the future.
This PR modifies the KubeRay e2e autoscaling test so that one of its scaling commands is sent via the Ray Job Submission API.
This validates that the Ray Job Submission API works with KubeRay and, in particular, that the Ray Dashboard is correctly exposed.
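A hedged sketch of the pattern being exercised (the dashboard address and scaling command are illustrative):
```python
from ray.job_submission import JobSubmissionClient

# Submit a scale request through the Ray Job Submission API exposed by the
# Ray Dashboard that KubeRay makes available.
client = JobSubmissionClient("http://127.0.0.1:8265")
client.submit_job(
    entrypoint=(
        "python -c 'import ray; ray.init(); "
        "ray.autoscaler.sdk.request_resources(num_cpus=4)'"
    )
)
```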
Updating the W&B Ray Tune integration to new standards. Adds support for the wandb service, which will soon be the default way of handling multiprocessing + wandb run logging.
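A hedged usage sketch of the Tune + W&B integration (the project name is illustrative; the wandb service mode is handled internally by the callback):
```python
from ray import tune
from ray.tune.integration.wandb import WandbLoggerCallback

def objective(config):
    tune.report(loss=config["x"] ** 2)

tune.run(
    objective,
    config={"x": tune.grid_search([1, 2, 3])},
    callbacks=[WandbLoggerCallback(project="my_project")],
)
```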
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
This PR allows Ray to fully disable metrics collection. This was already possible with RAY_enable_metrics_collection, but it didn't fully disable collection because metrics collection happening in the agent wasn't properly disabled. This PR also adds tests.
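A hedged sketch of how a user would turn collection off with that flag (the env var must be set before the cluster starts):
```python
import os
import ray

# The system-config env var must be set before the cluster starts
# (i.e., before ray.init() here).
os.environ["RAY_enable_metrics_collection"] = "0"
ray.init()
```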
Users get error messages from client/server on actor failures, even if they already try-except'd the error. For example:
```python
import ray

ray.init("ray://localhost:10001")

try:
    ray.get_actor("doesnotexist")
except ValueError:
    pass
```
This will still generate the logs `Caught schedule exception` and `Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.`. Reduce the level of these logs to debug by default.
Fixes checkpoints not being recorded in Tune's checkpoint manager if the first checkpoint has a None value. This also deflakes `test_checkpoint_manager.py::CheckpointManagerTest`.
**TL;DR:** Don't clear for eager, clear all but non-lazy input blocks if lazy, clear everything if pipelining.
This PR provides more efficient and intuitive block clearing semantics for eager mode, lazy mode, and pipelining, while still supporting multiple operations applied to the same base dataset, i.e. fan-out. For example, two different map operations are applied to the same base `ds` in this example:
```python
ds = ray.data.range(10).map(lambda x: x+1)
ds1 = ds.map(lambda x: 2*x)
ds2 = ds.map(lambda x: 3*x)
```
If we naively clear the blocks when executing the map that produces `ds1`, the map producing `ds2` will fail.
### Desired Semantics
- **Eager mode** - don’t clear input blocks, thereby supporting fan-out from cached data at any point in the stage chain without triggering unexpected recomputation.
- **Lazy mode** - if the datasource is lazy, clear the input blocks for every stage, relying on recomputation via stage lineage if fan-out occurs; if the datasource is non-lazy, do not clear the source blocks of the execution plan when executing the first stage, but do clear input blocks for every subsequent stage (see the sketch after this list).
- **Pipelines** - Same as lazy mode, although the only fan-out that can occur is from the pipeline source blocks when repeating a dataset/pipeline, so unintended intermediate recomputation will never happen.
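A sketch of the lazy-mode bullet above, assuming an experimental `ds.lazy()` toggle for enabling lazy execution (the exact API may differ):
```python
import ray

ds = ray.data.range(10).lazy().map(lambda x: x + 1)

# Input blocks are cleared after each stage executes; fan-out from ds relies
# on recomputation via the stage lineage rather than on cached blocks.
ds1 = ds.map(lambda x: 2 * x).take()
ds2 = ds.map(lambda x: 3 * x).take()
```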
#17581 introduced a warning about excess queuing for actors. Unfortunately, since Ray 1.10.0 the metric used became wrong for async actors, resulting in bogus warnings when they are called more than 5000 times, even though there are not 5000 pending tasks.
The difference between 1.9.2 and 1.10.0 is that async actor tasks skip the queue in CoreWorkerClient::PushActorTask. However, CoreWorkerClient::ClientProcessedUpToSeqno uses max_finished_seq_no_, which is never updated when the queue is skipped.
I think a better metric for the number of tasks that are pending submission is the size of the internal queue CoreWorkerDirectActorTaskSubmitter::inflight_task_callbacks.
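A hypothetical repro of the bogus warning (the actor and method names are illustrative): an async actor called more than 5000 times trips the warning even though its tasks complete promptly.
```python
import asyncio
import ray

@ray.remote
class AsyncWorker:
    async def noop(self):
        await asyncio.sleep(0)

ray.init()
worker = AsyncWorker.remote()
# max_finished_seq_no_ is never updated for async actors, so this can trigger
# the excess-queuing warning even though there is no real backlog.
refs = [worker.noop.remote() for _ in range(6000)]
ray.get(refs)
```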
For debugging client environments, it is helpful to print the installed pip packages.
Additionally, a fix for the environment of the ml_user_tune_rllib_connect_test is added, and anyscale import errors are reported verbosely to help debug missing packages.
The prefetch_blocks implementation doesn't work as expected. Because ray.wait() doesn't give us fine-grained control, today we block waiting for any of the blocks to return. As I read the code, it may or may not actually fetch all of the blocks.
A better way to ensure prefetching doesn't block is to use a Ray remote function call, which is non-blocking and ensures that the blocks are eventually fetched.
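A minimal sketch of the non-blocking approach, with illustrative helper names rather than the actual Datasets internals:
```python
import ray

@ray.remote(num_cpus=0)
def _prefetch(*blocks):
    # Receiving the blocks as task arguments makes Ray fetch them to the node
    # running this task; the task itself does nothing with them.
    return None

def prefetch_blocks(block_refs):
    # Fire-and-forget: the caller never blocks on ray.wait()/ray.get().
    # A real implementation would also pin this task to the consumer's node
    # (e.g., with a node-affinity scheduling strategy).
    _prefetch.remote(*block_refs)
```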
Adds a fast file metadata provider that trades comprehensive file metadata collection for speed of metadata collection, and which also disables directory path expansion, which can be very slow on some cloud storage service providers. This PR also refactors the Parquet datasource to take advantage of both of these changes and the content-type agnostic partitioning support from #23624.
This is the second PR of a series originally proposed in #23179.
Adds a Categorizer preprocessor to automatically set the Categorical dtype on a dataset. This is useful for e.g. LightGBM, which has built-in support for features with that dtype.
Depends on #24144.
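A hedged usage sketch of the Categorizer (the import path and exact constructor signature may differ by Ray version):
```python
import pandas as pd
import ray
from ray.data.preprocessors import Categorizer

ds = ray.data.from_pandas(
    pd.DataFrame({"color": ["red", "green", "red"], "value": [1, 2, 3]})
)
prep = Categorizer(columns=["color"])
# The "color" column now has the pandas Categorical dtype, which LightGBM
# can consume directly.
ds_transformed = prep.fit_transform(ds)
```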
After https://github.com/ray-project/ray/pull/24066, some release tests are running into:
```
ModuleNotFoundError: No module named 'ray.train.impl'
```
This PR simply adds a `__init__.py` file to resolve this.
We also add a 5-second delay for client runners in release tests to give clusters a bit of slack to come up (and avoid Ray Client connection errors).
#24311 added the `test_update_num_replicas_anonymous_namespace` unit test to check for replica leaks in anonymous namespaces. This change adds this test to the master branch.