We explicitly disallow scheduling tasks based on object store memory, so we should state that in the docs.
cc @scottsun94
```
>>> import ray
>>> @ray.remote(object_store_memory=100)
... def foo():
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/alex/anyscale/ray/python/ray/worker.py", line 2479, in _make_remote
    ray_option_utils.validate_task_options(options, in_options=False)
  File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 191, in validate_task_options
    task_options[k].validate(k, v)
  File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 33, in validate
    raise ValueError(self.error_message_for_value_constraint)
ValueError: Setting 'object_store_memory' is not implemented for tasks
```
Co-authored-by: Alex Wu <alex@anyscale.com>
## Why are these changes needed?
This PR adds data truncation when there are more than N entries. The policy is as follows:
- By default, we return at most 100 entries. Users can adjust this value, but we won't allow it to be raised above 10K.
- By default, all internal RPCs truncate data if it exceeds 10K entries.
- For distributed sources, we query each source with a 10K limit and apply the limit again at the end.
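As a hedged illustration of the user-facing limit (the `limit` parameter and module path below are assumptions for the sketch, not something this PR guarantees):
```python
# Illustrative sketch only: cap the number of returned entries per call.
# The default cap is 100; requests above 10K are rejected per the policy above.
from ray.experimental.state.api import list_actors

actors = list_actors(limit=1000)
```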
## Related issue number
Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
As the integration logging callbacks are commonly used with AIR Trainers, they should be moved from the `tune` package to the `air` package. The old imports will still work but will raise a deprecation warning.
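A minimal before/after sketch of the import move; the module paths shown are assumptions for illustration and may not match the actual layout:
```python
# Old location (assumed path) -- still works but emits a deprecation warning:
# from ray.tune.integration.wandb import WandbLoggerCallback

# New location under the air package (assumed path):
from ray.air.callbacks.wandb import WandbLoggerCallback
```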
The KerasCallback saves the model checkpoint as a file. However, for the saved checkpoint to work with TensorflowPredictor, the model weights need to be saved under MODEL_KEY in a dict format.
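A hedged sketch of the dict-format checkpoint TensorflowPredictor expects; `MODEL_KEY` is taken from this description, while the import paths and `Checkpoint` usage are assumptions:
```python
import tensorflow as tf
from ray.air.checkpoint import Checkpoint  # assumed import path
from ray.train.constants import MODEL_KEY  # assumed import path

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Store the Keras weights under MODEL_KEY so TensorflowPredictor can
# reconstruct the model from the dict-format checkpoint.
checkpoint = Checkpoint.from_dict({MODEL_KEY: model.get_weights()})
```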
If a Ray cluster does not have enough resources for a Serve deployment, the deployment gets stuck in the `updating` status. This change sets the `message` field when allocation/initialization of actors has been pending for too long.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
This adds "environments" to the release package that can be used to configure some environment variables. These variables will be loaded either by an `--env` argument or a `env` definition in the test definition and can be used to e.g. run release tests on staging.
## Why are these changes needed?
This PR fixes the issue where `--follow` lost the connection when used for more than 30 seconds, because the gRPC timeout is configured to be 30 seconds and we don't reset it when `--follow` is set.
This fixes the issue by setting `timeout=None` when `keepalive==True`.
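A minimal sketch of that logic, using hypothetical names (`DEFAULT_RPC_TIMEOUT_S`, `rpc_timeout`) rather than the actual client code:
```python
DEFAULT_RPC_TIMEOUT_S = 30  # the gRPC deadline that previously cut off --follow

def rpc_timeout(keepalive: bool):
    # With --follow (keepalive), disable the deadline so the stream stays open.
    return None if keepalive else DEFAULT_RPC_TIMEOUT_S
```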
## Related issue number
Closes https://github.com/ray-project/ray/issues/25721
## Why are these changes needed?
This PR implements the `!=` predicate for filtering. As a result, two APIs are changed.
```
--filter key value -> --filter "key=val" or --filter "key!=val"
list_actors(filters=[(key, val), (key2, val2)]) -> list_actors(filters=[(key, "=", val), (key2, "=", val2)])
```
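For illustration, a hedged usage example of the new filter tuples (the predicate element comes from this PR; the specific key/value used here is just an example):
```python
from ray.experimental.state.api import list_actors

# Each filter tuple now carries an explicit predicate: "=" or "!=".
alive = list_actors(filters=[("state", "=", "ALIVE")])
not_alive = list_actors(filters=[("state", "!=", "ALIVE")])
```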
There is mysterious memory usage growth in Ray clusters that disappears when running with jemalloc. Before we are able to figure out the root cause, using jemalloc by default seems like a good workaround. Because of its efficiency, using jemalloc by default can also be beneficial, but we need to run more benchmarks to verify.
Allow starting actors in a different namespace instead of the driver's namespace.
Usage is simple:
```java
Ray.init(namespace="a");
/// The named actor will start in namespace `b`
ActorHandle<A> a = Ray.actor(A::new).setName("myActor", "b").remote();
```
Co-authored-by: Hao Chen <chenh1024@gmail.com>
## Why are these changes needed?
This is a first implementation of GET APIs for:
- nodes
- actors
- placement groups
- workers
- tasks
- objects
E.g.
```
# CLI
(dev) ➜ ray git:(ricky/obs-get) ray get nodes cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
---
node_id: cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
node_ip: 172.31.47.143
node_name: 172.31.47.143
resources_total:
  CPU: 8.0
  memory: 16700517582.0
  node:172.31.47.143: 1.0
  object_store_memory: 8350258790.0
state: ALIVE
```
```python
# Python
from ray.experimental.state.api import get_node
from ray.experimental.state.common import NodeState

node: NodeState = get_node(<id>)
print(node)
```
We currently do not support getting specific resources by id for 'jobs' and 'runtime-envs':
- jobs: it does not yet expose an id that can be queried easily.
- runtime envs: they don't have an id associated.

TODO:
- It uses list endpoints + filtering for now; future iterations will implement GET-specific endpoints and interaction with raylet/GCS via point-query APIs.
- Unit testing for state_manager GET endpoints when implemented.
- Getting jobs by id.
This PR:
- Adds a warning about a known issue to the KubeRay section of the Ray docs.
- Updates the description of the feature state of the KubeRay integration.
- Adds some links to the KubeRay docs.
Currently an unqualified `conda install` installs grpcio 1.44.0, whereas `ray` requires 1.43.0 via `pip install`. The instructions therefore cancel each other out, and you end up with an unusable installation due to missing symbols for `grpcio` on ARM.
Co-authored-by: Simon Mo <simon.mo@hey.com>
## Why are these changes needed?
This is to address false alarms about subprocesses exiting when they are killed by `ray stop` with SIGTERM.
## What has been changed?
Added signal handlers for some of the subprocesses (a minimal handler sketch follows below):
- dashboard (head)
- log monitor
- ray client server

Changed the `--block` semantics and prompt messages.
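A minimal sketch of such a handler, assuming a plain Python entry point (not the actual Ray subprocess code):
```python
import signal
import sys

def sigterm_handler(signum, frame):
    # Exit cleanly so a SIGTERM from `ray stop` is not reported as a crash.
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)
```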
## Related issue number
Closes #25518
Now we can run custom Java tests by:
0. `cp testng_custom_template.xml testng_custom.xml`
1. Specify test class/method in `testng_custom.xml`
2. `bazel test //java:custom_test --test_output=streamed`
A detached Java actor does not work (the actor dies after the driver exits) when creating a Java actor with the `ActorLifetime.DETACHED` option.
Co-authored-by: sunkunjian1 <sunkunjian1@jd.com>
Closes #25283.
The dashboard shows inaccurate memory and CPU data when run inside a Docker container, in particular when using cgroups v2. This PR fixes that.
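As a rough illustration of the difference (an assumed sketch, not the dashboard's actual code), the container memory limit lives at different paths under cgroups v1 and v2:
```python
from pathlib import Path

def container_memory_limit_bytes():
    # cgroups v2 exposes the limit at memory.max; v1 at memory.limit_in_bytes.
    for path in (Path("/sys/fs/cgroup/memory.max"),
                 Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")):
        if path.exists():
            value = path.read_text().strip()
            if value != "max":  # "max" means unlimited under cgroups v2
                return int(value)
    return None  # no container limit found; fall back to host memory
```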
Uses a Monitor attribute in the shutdown handler instead of an args attribute. Necessary because some integrations (including KubeRay) instantiate the Monitor directly rather than running python Monitor.py with arguments.
Adds HTTP retries to Ray CR fetch. Necessary for robustness because Ray CR fetch exceptions are not currently handled during autoscaler initialization.
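A hedged sketch of what retrying the CR fetch could look like; the function name, arguments, and retry policy are assumptions for illustration only:
```python
import time
import requests

def fetch_ray_cr(url, headers=None, max_retries=5, base_backoff_s=1.0):
    """Fetch the Ray custom resource, retrying transient HTTP failures."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, headers=headers, timeout=10)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_backoff_s * 2 ** attempt)  # exponential backoff
```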
This PR renames the `suggest` package to `search` and alters the layout slightly.
In the new package, the higher-level abstractions are on the top level and the search algorithms have their own subdirectories.
In a future refactor, we can turn algorithms such as PBT into actual `SearchAlgorithm` classes and move them into the `search` package.
The main reason to keep algorithms and searchers in the same directory is to avoid user confusion - for a user, `Bayesopt` is as much a search algorithm as e.g. `PBT`, so it doesn't make sense to split them up.
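As a hedged illustration of what the rename means for imports (the exact module paths are assumptions and may differ from the final layout):
```python
# Old layout (assumed path):
# from ray.tune.suggest.bayesopt import BayesOptSearch

# New layout, with each search algorithm in its own subdirectory (assumed path):
from ray.tune.search.bayesopt import BayesOptSearch
```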
Remove the base dir 'python/ray/*.py' from the isort blacklist. This is needed so isort will run on subdirectories under python/ray, allowing us to start enabling isort for those subdirectories.