This adds "environments" to the release package that can be used to configure environment variables. These variables are loaded either via an `--env` argument or an `env` definition in the test definition and can be used, e.g., to run release tests against staging.
## Why are these changes needed?
This PR fixes the issue where `--follow` loses its connection when used for more than 30 seconds: the gRPC timeout is configured to be 30 seconds, and we don't reset it when `--follow` is set.
This fixes the issue by setting `timeout=None` when `keepalive=True`.
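A minimal sketch of the idea (helper and stub names here are illustrative, not Ray's actual code): when the caller keeps the stream open with `--follow`, drop the gRPC deadline instead of using the 30-second default.
```python
# Illustrative only: pick the gRPC deadline based on whether the stream
# should stay open indefinitely (--follow / keepalive).
DEFAULT_RPC_TIMEOUT_S = 30

def stream_logs(stub, request, keepalive: bool):
    # timeout=None means "no deadline", so a long-lived --follow stream
    # is not cut off after 30 seconds.
    timeout = None if keepalive else DEFAULT_RPC_TIMEOUT_S
    for chunk in stub.StreamLogs(request, timeout=timeout):
        yield chunk
```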
## Related issue number
Closes https://github.com/ray-project/ray/issues/25721
## Why are these changes needed?
This PR implements the `!=` predicate for filtering. As a result of this PR, two APIs change:
```
--filter key value -> --filter "key=val" or --filter "key!=val"
list_actors(filters=[(key, val), (key2, val2)]) -> list_actors(filters=[(key, "=", val), (key2, "=", val2)])
```
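For example (a hedged sketch; the specific filter keys `state` and `class_name` are assumptions), the new tuple form lets you mix `=` and `!=` predicates:
```python
from ray.experimental.state.api import list_actors

# Keep actors that are ALIVE and whose class name is not "Foo".
actors = list_actors(filters=[("state", "=", "ALIVE"), ("class_name", "!=", "Foo")])
print(actors)
```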
There is mysterious memory usage growth in Ray clusters that disappears when running with jemalloc. Before we are able to figure out the root cause, using jemalloc by default seems like a good workaround. Because of its efficiency, using jemalloc by default could be beneficial anyway, but we need to run more benchmarks to verify.
Allows you to start actors in a different namespace than the driver's namespace.
Usage is simple:
```java
// In Java, the driver's namespace is set via a system property.
System.setProperty("ray.job.namespace", "a");
Ray.init();
// The named actor starts in namespace "b" instead of the driver's namespace "a".
ActorHandle<A> a = Ray.actor(A::new).setName("myActor", "b").remote();
```
Co-authored-by: Hao Chen <chenh1024@gmail.com>
## Why are these changes needed?
This is a first implementation of GET APIs for:
- nodes
- actors
- placement groups
- workers
- tasks
- objects
E.g.
```
# CLI
(dev) ➜ ray git:(ricky/obs-get) ray get nodes cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
---
node_id: cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
node_ip: 172.31.47.143
node_name: 172.31.47.143
resources_total:
  CPU: 8.0
  memory: 16700517582.0
  node:172.31.47.143: 1.0
  object_store_memory: 8350258790.0
state: ALIVE
```
```python
# Python
from ray.experimental.state.api import get_node
from ray.experimental.state.common import NodeState

node: NodeState = get_node(<id>)
print(node)
```
We currently do not support getting specific resources by id for `jobs` and `runtime-envs`:
- jobs: the job id is not exposed for easy querying yet
- runtime envs: they don't have an id associated with them
TODO:
- It uses list endpoints + filtering for now; future iterations will implement GET-specific endpoints and interaction with raylet/GCS via point query APIs.
- Unit testing for state_manager for GET endpoints when implemented.
- Getting jobs by id.
This PR:
- Adds a warning about a known issue to the KubeRay section of the Ray docs.
- Updates the description of the feature state of the KubeRay integration.
- Adds some links to the KubeRay docs.
Currently an unqualified `conda install` installs grpcio 1.44.0 whereas `ray` requires 1.43.0 via `pip install`, so the instructions cancel each other out and you end up with an unusable installation due to missing symbols for `grpcio` on ARM.
Co-authored-by: Simon Mo <simon.mo@hey.com>
## Why are these changes needed?
This is to address false alarms about subprocesses exiting when they are killed by `ray stop` with SIGTERM.
## What has been changed?
Added signal handlers for some of the subprocesses (a minimal sketch follows below):
- dashboard (head)
- log monitor
- ray client server

Changed the `--block` semantics and prompt messages.
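A minimal sketch of the signal-handler idea, not Ray's exact code: each subprocess installs a SIGTERM handler and exits cleanly instead of appearing to die unexpectedly when `ray stop` sends SIGTERM.
```python
import signal
import sys

def sigterm_handler(signum, frame):
    # Treat SIGTERM (e.g. from `ray stop`) as a graceful shutdown rather than
    # an unexpected crash.
    sys.exit(signum)

signal.signal(signal.SIGTERM, sigterm_handler)
```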
## Related issue number
Closes #25518
Now we can run custom Java tests by:
0. `cp testng_custom_template.xml testng_custom.xml`
1. Specify test class/method in `testng_custom.xml`
2. `bazel test //java:custom_test --test_output=streamed`
A detached Java actor does not work (the actor dies after the driver exits) when creating a Java actor with the `ActorLifetime.DETACHED` option.
Co-authored-by: sunkunjian1 <sunkunjian1@jd.com>
Closes #25283.
The dashboard shows inaccurate memory and CPU data when run inside a Docker container, in particular when using cgroups v2. This PR fixes that.
Uses a Monitor attribute in the shutdown handler instead of an args attribute. This is necessary because some integrations (including KubeRay) instantiate the Monitor directly rather than running `Monitor.py` with arguments.
Adds HTTP retries to the Ray CR fetch. This is necessary for robustness because Ray CR fetch exceptions are not currently handled during autoscaler initialization.
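A hedged sketch of the retry idea (the function name, retry count, and backoff are assumptions, not the autoscaler's actual code):
```python
import time
import requests

def fetch_ray_cr(url: str, headers=None, num_retries: int = 5, backoff_s: float = 1.0):
    """Fetch the Ray custom resource, retrying transient HTTP failures."""
    for attempt in range(num_retries):
        try:
            resp = requests.get(url, headers=headers, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == num_retries - 1:
                raise
            time.sleep(backoff_s * (attempt + 1))
```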
This PR renames the `suggest` package to `search` and alters the layout slightly.
In the new package, the higher-level abstractions are on the top level and the search algorithms have their own subdirectories.
In a future refactor, we can turn algorithms such as PBT into actual `SearchAlgorithm` classes and move them into the `search` package.
The main reason to keep algorithms and searchers in the same directory is to avoid user confusion - for a user, `Bayesopt` is as much a search algorithm as e.g. `PBT`, so it doesn't make sense to split them up.
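A hedged example of what the new layout looks like from the user side (the exact module paths are assumptions based on the described structure):
```python
# Higher-level abstractions live at the top level of the new `search` package,
# while individual search algorithms live in their own subpackages.
from ray.tune.search import ConcurrencyLimiter
from ray.tune.search.bayesopt import BayesOptSearch

searcher = ConcurrencyLimiter(BayesOptSearch(), max_concurrent=4)
```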
Remove the base dir 'python/ray/*.py' from the isort blacklist. This is needed so isort will run on subdirectories under python/ray, allowing us to start enabling isort for those subdirectories.
## Why are these changes needed?
Fixes the following check failure:
```
2022-06-21 19:14:10,718 WARNING worker.py:1737 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff7cc1d49b6d4812ea954ca19a01000000 Worker ID: 9fb0f63d84689c6a9e5257309a6346170c827aa7f970c0ee45e79a8b Node ID: 2d493b4f39f0c382a5dc28137ba73af78b0327696117e9981bd2425c Worker IP address: 172.18.0.3 Worker port: 35883 Worker PID: 31945 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
(HTTPProxyActor pid=31945) [2022-06-21 19:14:10,710 C 31945 31971] pb_util.h:202: Check failed: death_cause.context_case() == ContextCase::kActorDiedErrorContext
(HTTPProxyActor pid=31945) *** StackTrace Information ***
(HTTPProxyActor pid=31945) ray::SpdLogMessage::Flush()
(HTTPProxyActor pid=31945) ray::RayLog::~RayLog()
(HTTPProxyActor pid=31945) ray::core::CoreWorker::HandleKillActor()
(HTTPProxyActor pid=31945) std::_Function_handler<>::_M_invoke()
(HTTPProxyActor pid=31945) EventTracker::RecordExecution()
(HTTPProxyActor pid=31945) std::_Function_handler<>::_M_invoke()
(HTTPProxyActor pid=31945) boost::asio::detail::completion_handler<>::do_complete()
(HTTPProxyActor pid=31945) boost::asio::detail::scheduler::do_run_one()
(HTTPProxyActor pid=31945) boost::asio::detail::scheduler::run()
(HTTPProxyActor pid=31945) boost::asio::io_context::run()
(HTTPProxyActor pid=31945) ray::core::CoreWorker::RunIOService()
(HTTPProxyActor pid=31945) execute_native_thread_routine
(HTTPProxyActor pid=31945)
(HTTPProxyActor pid=31982) INFO: Started server process [31982]
```
NOTE: This is a temporary fix. The root cause is that there's a path that doesn't properly report the death cause (when this RPC is triggered by gcs_actor_scheduler). This should be addressed separately to improve exit observability.
Since this is intended to be cherry-picked for 1.13.1, I only added the minimal fix.
This PR records the historical Ray native library usage to the home temp folder. Note that library usage only includes Ray native libraries (rllib, tune, dataset, workflow, and train). NOTE: The library usage is always recorded to /tmp/ray, but it is only recorded when a cluster with usage stats enabled is running. Note that this can generate quite a large number of false positives (e.g., if I import rllib once and then start a cluster for local development, that cluster will be considered an rllib cluster).
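A hedged illustration of the recording idea (the file path, format, and function name are hypothetical, not the actual implementation): each imported Ray native library is appended to a usage file under the temp folder.
```python
import json
import os

USAGE_FILE = "/tmp/ray/usage_stats_libraries.json"  # hypothetical path

def record_library_usage(library: str):
    # Append the library name to the usage file if it isn't already recorded.
    os.makedirs(os.path.dirname(USAGE_FILE), exist_ok=True)
    usages = []
    if os.path.exists(USAGE_FILE):
        with open(USAGE_FILE) as f:
            usages = json.load(f)
    if library not in usages:
        usages.append(library)
        with open(USAGE_FILE, "w") as f:
            json.dump(usages, f)
```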