The KerasCallback saves the model checkpoint as a file. However, for the saved checkpoint to work with TensorflowPredictor, the model weights need to be saved under MODEL_KEY in a dict format.
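A minimal sketch of that layout, assuming `MODEL_KEY` is importable from `ray.train.constants` (that location is an assumption here):

```
import tensorflow as tf

# Assumption: MODEL_KEY lives in ray.train.constants.
from ray.train.constants import MODEL_KEY

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 4))

# The checkpoint stores the Keras weights under MODEL_KEY in a dict:
checkpoint = {MODEL_KEY: model.get_weights()}
```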
If a Ray cluster does not have enough resources for a Serve deployment, the deployment gets stuck in the `updating` status. This change sets the `message` field when allocation/initialization of actors has been pending for too long.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
## Why are these changes needed?
This PR fixes the issue where `--follow` loses its connection when used for more than 30 seconds, because the gRPC timeout is configured to 30 seconds and we don't reset it when `--follow` is set.
This PR fixes the issue by setting `timeout=None` when `keepalive==True`.
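A minimal sketch of the idea (identifier names here are illustrative, not the actual client code):

```
from typing import Optional

DEFAULT_RPC_TIMEOUT_S = 30  # the previous hard-coded gRPC timeout

def stream_timeout(keepalive: bool) -> Optional[float]:
    # gRPC treats timeout=None as "no deadline", so --follow can stream
    # indefinitely instead of dropping after 30 seconds.
    return None if keepalive else DEFAULT_RPC_TIMEOUT_S
```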
## Related issue number
Closes https://github.com/ray-project/ray/issues/25721
## Why are these changes needed?
This PR implements `!=` predicate for filtering. As a result of this PR, two APIs are changed.
```
--filter key value -> --filter "key=val" or --filter "key!=val"
list_actors(filters=[(key, val), (key2, val2)]) -> list_actors(filters=[(key, "=", val), (key2, "=", val2)])
```
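For example, filtering out dead actors with the new predicate might look like this (the `state`/`DEAD` values are illustrative):

```
from ray.experimental.state.api import list_actors

# Equality and the new inequality predicate share the same tuple form:
alive_actors = list_actors(filters=[("state", "!=", "DEAD")])
```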
## Why are these changes needed?
This is a first implementation of GET APIs for:
- nodes
- actors
- placement groups
- workers
- tasks
- objects
E.g.,

```
# CLI
(dev) ➜ ray git:(ricky/obs-get) ray get nodes cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
---
node_id: cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
node_ip: 172.31.47.143
node_name: 172.31.47.143
resources_total:
  CPU: 8.0
  memory: 16700517582.0
  node:172.31.47.143: 1.0
  object_store_memory: 8350258790.0
state: ALIVE
```
```
# Python
from ray.experimental.state.api import get_node
from ray.experimental.state.common import NodeState

node: NodeState = get_node(<id>)
print(node)
```
We currently do not support getting specific resources by id for `jobs` and `runtime-envs`:
- jobs: the API does not yet expose an id that can be queried easily.
- runtime envs: there is no id associated with them.
TODO:
- For now, GET uses the list endpoints + filtering; future iterations will implement GET-specific endpoints and interactions with raylet/GCS via point-query APIs.
- Add unit tests for state_manager GET endpoints once they are implemented.
- Support getting jobs by id.
## Why are these changes needed?
This is to address false alarms about subprocesses exiting when they are killed by `ray stop` with SIGTERM.
## What has been changed?
Added SIGTERM handlers for some of the subprocesses (a sketch of the pattern follows below):
- dashboard (head)
- log monitor
- ray client server

Changed the `--block` semantics and prompt messages.
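A minimal sketch of the handler pattern, assuming the goal is simply a clean exit on SIGTERM so `ray stop` is not reported as a crash:

```
import signal
import sys

def sigterm_handler(signum, frame):
    # Exit cleanly instead of dying with an unhandled-signal status.
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)
```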
## Related issue number
Closes #25518
Closes #25283.
The dashboard shows inaccurate memory and CPU data when run inside a Docker container, in particular when using cgroups v2. This PR fixes that.
Uses a Monitor attribute in the shutdown handler instead of an args attribute. This is necessary because some integrations (including KubeRay) instantiate the Monitor directly rather than running `python Monitor.py` with arguments.
Adds HTTP retries to the Ray CR fetch. This is necessary for robustness because Ray CR fetch exceptions are not currently handled during autoscaler initialization.
This PR renames the `suggest` package to `search` and alters the layout slightly.
In the new package, the higher-level abstractions are on the top level and the search algorithms have their own subdirectories.
In a future refactor, we can turn algorithms such as PBT into actual `SearchAlgorithm` classes and move them into the `search` package.
The main reason to keep algorithms and searchers in the same directory is to avoid user confusion - for a user, `Bayesopt` is as much a search algorithm as e.g. `PBT`, so it doesn't make sense to split them up.
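For example, a searcher import would move from the old package to the new one (exact module paths are assumptions based on the described layout):

```
# Before the rename:
# from ray.tune.suggest.bayesopt import BayesOptSearch

# After, searchers live in their own subdirectories under `search`:
from ray.tune.search.bayesopt import BayesOptSearch
```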
Remove the base dir 'python/ray/*.py' from the isort blacklist. This is needed so isort will run on subdirectories under python/ray, allowing us to start enabling isort for those subdirectories.
This PR records historical Ray native library usage to the home temp folder. Note that library usage only includes Ray native libraries (rllib, tune, dataset, workflow, and train). NOTE: Library usage is always recorded to /tmp/ray, but it is only recorded while a cluster with usage stats enabled is running. Note that this can generate a fair number of false positives (e.g., if I import rllib once and then start a cluster for local development, those clusters will all be considered rllib clusters).
We leak memory when we create a DatasetPipeline from a "collapsed" DatasetPipeline (one that executes multiple stages and produces a stream of output Datasets that serve as the input base Datasets for the new DatasetPipeline).
DatasetPipeline splitting is such a collapsing operation, so the child pipelines have zero stages (no matter how many stages the parent pipeline had), which means we can no longer tell whether it's safe to clear the output blocks of the child pipelines.
This PR fixes the leak by preserving whether the base Datasets can be cleared when we create a new DatasetPipeline from the old one.
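For reference, a sketch of the triggering pattern, assuming the DatasetPipeline APIs described here (splitting produces the collapsed child pipelines):

```
import ray

# A pipeline with stages; splitting it "collapses" those stages away.
pipe = ray.data.range(1000).repeat(10).map_batches(lambda batch: batch)
left, right = pipe.split(2)  # child pipelines report zero stages
```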
Experiment, Trial, and config parsing moves into an `experiment` package.
Notably, the new public facing APIs will be
```
from ray.tune.experiment import Experiment
from ray.tune.experiment import Trial
```
Users often have UDFs that require arguments only known at Dataset creation time, and sometimes these arguments may be large objects better suited for shared zero-copy access via the object store. One example is batch inference on a large on-CPU model using the actor pool strategy: without putting the large model into the object store and sharing across actors on the same node, each actor worker will need its own copy of the model.
Unfortunately, we can't rely on closing over these object refs, since the object ref will then be serialized in the exported function/class definition, causing the object to be indefinitely pinned and therefore leaked. It's much cleaner to instead link these object refs in as actor creation and task arguments. This PR adds support for threading such object refs through as actor creation and task arguments and supplying the concrete values to the UDFs.
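A sketch of the intended usage under those assumptions; the `fn_constructor_args` parameter name and the actor-pool `compute` value are assumptions about the map_batches API, and the lambda is a stand-in for a large model:

```
import ray

class ModelPredictor:
    def __init__(self, model):
        # Ray resolves the object ref to the concrete model here, so the
        # ref is never baked into the serialized class definition.
        self.model = model

    def __call__(self, batch):
        return [self.model(x) for x in batch]

model_ref = ray.put(lambda x: x * 2)  # stand-in for a large on-CPU model

ds = ray.data.range(100).map_batches(
    ModelPredictor,
    compute="actors",
    fn_constructor_args=(model_ref,),
)
```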
In [PR](https://github.com/ray-project/ray/pull/24764) we moved the reconnection logic into GcsRPCClient. In case of a GCS failure, we queue the requests and resend them once GCS is back.
This breaks requests with a timeout because such a request will be queued and never get a response. This PR fixes that.
Each request is stored along with the time at which it is supposed to time out. When GCS is down, we check the queued requests, and if one has timed out, we reply immediately with a Timeout error message.
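A rough illustration of the bookkeeping (names and structure are illustrative, not the actual GcsRPCClient internals):

```
import time
from collections import deque

pending = deque()  # (deadline, request, reply_cb)

def enqueue(request, timeout_s, reply_cb):
    # Remember when each queued request is supposed to time out.
    pending.append((time.monotonic() + timeout_s, request, reply_cb))

def check_timeouts():
    # Run periodically while GCS is down: any request past its deadline
    # gets an immediate Timeout reply instead of waiting forever.
    now = time.monotonic()
    for item in list(pending):
        deadline, request, reply_cb = item
        if deadline <= now:
            pending.remove(item)
            reply_cb(TimeoutError("GCS unavailable: request timed out"))
```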
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script that does a large amount of object spilling, if the disk runs out of space then Kubernetes will silently terminate the node. Autoscaling kicks in and replaces the dead node, with no indication that there was a failure due to disk space.
Instead, we should fail tasks with a good error message when the disk is full.
We monitor disk usage; when a node's disk usage grows over a predefined capacity threshold (e.g., 90%), we fail new tasks/actors/object puts that allocate new objects.
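A minimal sketch of such a check (the 90% figure comes from the description; the path, function name, and threshold default are illustrative):

```
import shutil

def local_disk_full(path="/tmp/ray", threshold=0.9):
    # Reject new object allocations once usage crosses the threshold.
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= threshold
```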
Task/actor/object summary:
- Tasks: grouped by function name. In the future, we will also allow grouping by task_group.
- Actors: grouped by actor class name. In the future, we will also allow grouping by actor_group.
- Objects: grouped by callsite. In the future, we will allow grouping by reference type or task state.
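Assuming these summaries land alongside the existing state APIs, usage might look like this (the module path and function names are assumptions):

```
from ray.experimental.state.api import (
    summarize_actors,
    summarize_objects,
    summarize_tasks,
)

print(summarize_tasks())    # grouped by function name
print(summarize_actors())   # grouped by actor class name
print(summarize_objects())  # grouped by callsite
```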
Enable checking of the Ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private, along with associated fixes.
Uses the async KV API for downloading in the runtime env agent. This avoids the complexity of running the runtime env creation functions in a separate thread.
Some functions are still sync, including the working_dir/py_modules upload, installing wheels, and possibly others.
This PR adds a format-based file-extension path filter for file-based datasources and sets it as the default path filter. This allows users to point the `read_{format}()` APIs at directories containing a mixture of files while ensuring that only files of the appropriate type are read. The default filter can still be disabled via `ray.data.read_csv(..., partition_filter=None)`.
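For example (the bucket path is hypothetical):

```
import ray

# Only *.csv files in the directory are read by default:
ds = ray.data.read_csv("s3://bucket/mixed-files/")

# Disable the default extension filter to read every file:
ds_all = ray.data.read_csv("s3://bucket/mixed-files/", partition_filter=None)
```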