We leak memory when we create a DatasetPipeline from a "collapsed" DatasetPipeline (i.e. one that executes multiple stages and produces a stream of output Datasets, which serve as the input base Datasets for the new DatasetPipeline).
DatasetPipeline splitting is such a collapsing operation, so the child pipelines have zero stages (no matter how many stages the parent pipeline had), which means we can no longer tell whether it's safe to clear the output blocks of the child pipelines.
This PR fixes this by preserving whether the base Datasets can be cleared when we create a new DatasetPipeline from the old one.
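A minimal sketch of the idea (class internals and attribute names here are hypothetical, not the actual implementation):

```
# Hypothetical sketch: carry the clearability flag forward instead of
# deriving it from the stage count, which is zero for collapsed pipelines.
class DatasetPipeline:
    def __init__(self, base_iterable, stages=None, datasets_can_be_cleared=False):
        self._base_iterable = base_iterable
        self._stages = stages or []
        self._datasets_can_be_cleared = datasets_can_be_cleared

    def _derived_can_clear(self):
        # The old logic: "we created these datasets, so we may clear them"
        # only held while the pipeline still had stages to execute.
        return len(self._stages) > 0

    def split(self, n):
        # Child pipelines have zero stages, so propagate the parent's
        # clearability explicitly when constructing them.
        can_clear = self._datasets_can_be_cleared or self._derived_can_clear()
        return [
            DatasetPipeline([], stages=[], datasets_can_be_cleared=can_clear)
            for _ in range(n)
        ]
```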
Experiment, Trial, and config parsing moves into an `experiment` package.
Notably, the new public-facing APIs will be:
```
from ray.tune.experiment import Experiment
from ray.tune.experiment import Trial
```
Users often have UDFs that require arguments only known at Dataset creation time, and sometimes these arguments may be large objects better suited for shared zero-copy access via the object store. One example is batch inference on a large on-CPU model using the actor pool strategy: without putting the large model into the object store and sharing across actors on the same node, each actor worker will need its own copy of the model.
Unfortunately, we can't rely on closing over these object refs, since the object ref will then be serialized in the exported function/class definition, causing the object to be indefinitely pinned and therefore leaked. It's much cleaner to instead link these object refs in as actor creation and task arguments. This PR adds support for threading such object refs through as actor creation and task arguments and supplying the concrete values to the UDFs.
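A minimal usage sketch, under the assumption that the refs are threaded through `map_batches` via a `fn_constructor_args`-style hook (the model and data below are stand-ins):

```
import ray
from ray.data import ActorPoolStrategy

# Stand-in for a large model that we only want in the object store once.
class Model:
    def predict(self, batch):
        return batch

model_ref = ray.put(Model())  # one shared copy, instead of one per actor worker

class BatchInferer:
    def __init__(self, model):
        # Ray resolves the ObjectRef before the actor constructor runs,
        # so `model` here is the concrete Model, not an ObjectRef.
        self.model = model

    def __call__(self, batch):
        return self.model.predict(batch)

ds = ray.data.range(1000)
ds.map_batches(
    BatchInferer,
    compute=ActorPoolStrategy(min_size=2, max_size=4),
    fn_constructor_args=(model_ref,),
)
```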
In [PR](https://github.com/ray-project/ray/pull/24764) we moved reconnection into GcsRPCClient: in case of a GCS failure, we queue the requests and resend them once GCS is back.
This breaks requests with a timeout, because such a request will be queued and never get a response. This PR fixes that.
Every request is now stored together with its deadline. When GCS is down, we check the queued requests and, if one has passed its deadline, we reply immediately with a Timeout error message.
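An illustrative Python sketch of the mechanism (the real implementation lives in the C++ GCS client; all names below are invented):

```
import time
from collections import deque

class PendingGcsRequests:
    """Invented illustration: queued requests carry a deadline, and while
    GCS is down we periodically reply Timeout to the expired ones."""

    def __init__(self):
        self._pending = deque()  # (deadline, request, reply_callback)

    def enqueue(self, request, timeout_s, reply_callback):
        self._pending.append((time.monotonic() + timeout_s, request, reply_callback))

    def check_timeouts(self):
        # Called periodically while GCS is unavailable.
        now = time.monotonic()
        alive = deque()
        for deadline, request, reply in self._pending:
            if now >= deadline:
                reply("Timed out while GCS was unavailable")  # Timeout error reply
            else:
                alive.append((deadline, request, reply))
        self._pending = alive

    def resend_all(self, send):
        # Called once GCS is reachable again.
        while self._pending:
            _, request, reply = self._pending.popleft()
            send(request, reply)
```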
- New doc section on autoscaling (introduces Serve autoscaling and its config parameters)
- Remove the version requirement note from the doc
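For context, a minimal sketch of the kind of autoscaling config the new section covers (the parameter values here are illustrative only):

```
from ray import serve

@serve.deployment(
    autoscaling_config={
        "min_replicas": 1,
        "max_replicas": 5,
        "target_num_ongoing_requests_per_replica": 10,
    },
)
def autoscaled_app(request):
    return "ok"
```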
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script that has a large amount of object spilling, if the disk runs out of space then Kubernetes will silently terminate the node. Autoscaling will kick in and replace the dead node. There is no indication that there was a failure due to disk space.
Instead, we should fail tasks with a good error message when the disk is full.
We monitor disk usage; when a node's disk usage grows beyond a predefined capacity threshold (e.g., 90%), we fail new tasks, actor creations, and object puts that would allocate new objects.
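A simplified sketch of the check (the real monitoring lives in the raylet; the threshold and path below are made up):

```
import shutil

DISK_USAGE_THRESHOLD = 0.90  # hypothetical predefined capacity

def local_disk_is_full(path="/tmp/ray"):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= DISK_USAGE_THRESHOLD

def allocate_object(create_object):
    # Fail fast with an actionable error instead of letting the node die.
    if local_disk_is_full():
        raise RuntimeError(
            "Local disk usage is over the capacity threshold; failing the "
            "allocation instead of silently losing the node."
        )
    return create_object()
```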
Task/actor/object summary
Tasks: Grouped by function name. In the future, we will also allow grouping by `task_group`.
Actors: Grouped by actor class name. In the future, we will also allow grouping by `actor_group`.
Objects: Grouped by callsite. In the future, we will allow grouping by reference type or task state.
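A hypothetical usage sketch (the module path and function names are assumptions based on the description above, not confirmed by this PR):

```
from ray.experimental.state.api import (
    summarize_actors,
    summarize_objects,
    summarize_tasks,
)

print(summarize_tasks())    # tasks grouped by function name
print(summarize_actors())   # actors grouped by actor class name
print(summarize_objects())  # objects grouped by callsite
```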
Enable checking of the Ray core module (excluding serve, workflows, and tune) in `./ci/lint/check_api_annotations.py`. This required moving many files to `ray._private`, along with associated fixes.
Uses the async KV API for downloading in the runtime env agent. This avoids the complexity of running the runtime env creation functions in a separate thread.
Some functions are still sync, including the `working_dir`/`py_modules` upload, installing wheels, and possibly others.
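An invented before/after sketch of what the change buys (function names are placeholders):

```
# Before: the sync KV get had to be pushed onto a thread pool to avoid
# blocking the agent's event loop.
async def download_before(loop, sync_kv_get, uri):
    return await loop.run_in_executor(None, sync_kv_get, uri)

# After: with an async KV API, the agent simply awaits the download.
async def download_after(async_kv_get, uri):
    return await async_kv_get(uri)
```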
This PR adds a format-based file extension path filter for file-based datasources, and sets it as the default path filter. This will allow users to point the `read_{format}()` API at directories containing a mixture of files, and ensure that only files of the appropriate type are read. This default filter can still be disabled via `ray.data.read_csv(..., partition_filter=None)`.
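For example (the bucket path is a placeholder):

```
import ray

# Only files ending in ".csv" are read from the (hypothetical) mixed directory.
ds = ray.data.read_csv("s3://bucket/mixed-dir")

# Opt out of the default filter to read every file, as before.
ds_all = ray.data.read_csv("s3://bucket/mixed-dir", partition_filter=None)
```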
The content of the two docs was switched.
The Unnecessary Ray Get images were correctly located in `unnecessary-ray-get.rst`, which made the switch noticeable beyond just the URL.
This allows correct logging of tuple entries in configs, e.g. `PolicySpec` (which is a namedtuple) from the `multiagent.policies` key. Without this, the whole `PolicySpec` is serialized as a string, which makes it impossible to filter runs by a specific field of the tuple.
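A sketch of the intended behavior (the helper below is illustrative, not the actual RLlib code):

```
from collections import namedtuple

PolicySpec = namedtuple("PolicySpec", ["policy_class", "observation_space", "action_space"])

def log_friendly(value):
    # Expand namedtuples into their fields so each one can be filtered on,
    # instead of serializing the whole tuple as one opaque string.
    if isinstance(value, tuple) and hasattr(value, "_asdict"):
        return {k: log_friendly(v) for k, v in value._asdict().items()}
    return value

spec = PolicySpec("PPO", "Box(4,)", "Discrete(2)")
print(log_friendly(spec))
# {'policy_class': 'PPO', 'observation_space': 'Box(4,)', 'action_space': 'Discrete(2)'}
```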
I’d like to propose a small change to the API. Currently, the list APIs return a dict of ID -> value mappings, but I’m proposing to change this to a list, because sorting becomes ineffective if we return a dictionary. A list preserves the order, which is important for determinism.
Also, for some APIs the entries don’t have a unique ID. For example, listing objects can yield duplicate object IDs across entries, which doesn’t work with a dict return type (e.g., there can be more than one entry for an object ID if the object is locally referenced and also borrowed by a task or pinned in memory).
Finally, users can easily build a dict index on their own if they need one.
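A sketch of the proposed shape (field names are hypothetical):

```
from collections import defaultdict

# Proposed: a list of entries, so order is preserved and duplicate IDs are fine.
entries = [
    {"object_id": "abc", "reference_type": "LOCAL_REFERENCE"},
    {"object_id": "abc", "reference_type": "PINNED_IN_MEMORY"},
]

# Users who want a dict can build the index themselves.
index = defaultdict(list)
for entry in entries:
    index[entry["object_id"]].append(entry)
```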
It is often a bit challenging to get the full documentation to build (external packages can make this difficult). This changes the instructions to treat warnings as warnings rather than errors, which should improve the workflow.
`make develop` is the same as `make html` except it doesn't treat warnings as errors.