Why are these changes needed?
Fixes the check failure:
| 2022-06-21 19:14:10,718 WARNING worker.py:1737 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff7cc1d49b6d4812ea954ca19a01000000 Worker ID: 9fb0f63d84689c6a9e5257309a6346170c827aa7f970c0ee45e79a8b Node ID: 2d493b4f39f0c382a5dc28137ba73af78b0327696117e9981bd2425c Worker IP address: 172.18.0.3 Worker port: 35883 Worker PID: 31945 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
| (HTTPProxyActor pid=31945) [2022-06-21 19:14:10,710 C 31945 31971] pb_util.h:202: Check failed: death_cause.context_case() == ContextCase::kActorDiedErrorContext
| (HTTPProxyActor pid=31945) *** StackTrace Information ***
| (HTTPProxyActor pid=31945) ray::SpdLogMessage::Flush()
| (HTTPProxyActor pid=31945) ray::RayLog::~RayLog()
| (HTTPProxyActor pid=31945) ray::core::CoreWorker::HandleKillActor()
| (HTTPProxyActor pid=31945) std::_Function_handler<>::_M_invoke()
| (HTTPProxyActor pid=31945) EventTracker::RecordExecution()
| (HTTPProxyActor pid=31945) std::_Function_handler<>::_M_invoke()
| (HTTPProxyActor pid=31945) boost::asio::detail::completion_handler<>::do_complete()
| (HTTPProxyActor pid=31945) boost::asio::detail::scheduler::do_run_one()
| (HTTPProxyActor pid=31945) boost::asio::detail::scheduler::run()
| (HTTPProxyActor pid=31945) boost::asio::io_context::run()
| (HTTPProxyActor pid=31945) ray::core::CoreWorker::RunIOService()
| (HTTPProxyActor pid=31945) execute_native_thread_routine
| (HTTPProxyActor pid=31945)
| (HTTPProxyActor pid=31982) INFO: Started server process [31982]
NOTE: This is a temporary fix. The root cause is that there's a path that doesn't properly report the death cause (when this RPC is triggered by gcs_actor_scheduler). This should be addressed separately to improve exit observability.
Since this is intended to be cherry-picked into 1.13.1, I only added the minimal fix.
This PR records historical Ray native library usage to the home temp folder. Note that library usage only covers Ray native libraries (rllib, tune, dataset, workflow, and train). NOTE: library usage is always recorded to /tmp/ray, but it is only reported when a cluster with usage stats enabled is running. Note that this can generate a fair number of false positives (e.g., if I import rllib once and then start a cluster for local development, that cluster will be counted as an rllib cluster).
We leak memory when we create a DatasetPipeline from a "collapsed" DatasetPipeline (which executes multiple stages and produces a stream of output Datasets as the input base Datasets for the new DatasetPipeline).
The DatasetPipeline splitting is such a collapsing operation, so the child pipelines will have zero stages (no matter how many stages the parent pipeline had), which means we can no longer tell whether it's safe to clear the output blocks of the child pipelines.
This PR fixes the leak by preserving, when we create a new DatasetPipeline from an old one, whether the base Datasets can be cleared.
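A minimal sketch of the collapsing operation using the public Datasets API (the numbers here are arbitrary):
```
import ray

# Build a windowed pipeline with one stage, then split it. split() is a
# collapsing operation: it executes the parent's stages and feeds the
# resulting output Datasets into child pipelines that have zero stages.
pipe = ray.data.range(1000).window(blocks_per_window=10).map(lambda x: x * 2)
left, right = pipe.split(2)
# Before this fix, the children lost the information that their base
# Datasets were safe to clear, so their output blocks were never released.
```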
Experiment, Trial, and config parsing moves into an `experiment` package.
Notably, the new public-facing APIs will be
```
from ray.tune.experiment import Experiment
from ray.tune.experiment import Trial
```
Users often have UDFs that require arguments only known at Dataset creation time, and sometimes these arguments may be large objects better suited for shared zero-copy access via the object store. One example is batch inference on a large on-CPU model using the actor pool strategy: without putting the large model into the object store and sharing it across actors on the same node, each actor worker needs its own copy of the model.
Unfortunately, we can't rely on closing over these object refs, since the object ref will then be serialized in the exported function/class definition, causing the object to be indefinitely pinned and therefore leaked. It's much cleaner to instead link these object refs in as actor creation and task arguments. This PR adds support for threading such object refs through as actor creation and task arguments and supplying the concrete values to the UDFs.
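A minimal sketch of the intended usage, assuming Datasets' `map_batches` parameters (`fn_constructor_args`, `compute="actors"`); `load_model` and the `predict` call are placeholders:
```
import ray

# Put the large model into the object store once so actors on the same
# node can share it zero-copy.
model_ref = ray.put(load_model())  # load_model is hypothetical

class BatchInferencer:
    def __init__(self, model):
        # The concrete model is supplied at actor creation time; the
        # serialized class definition never captures the ObjectRef.
        self.model = model

    def __call__(self, batch):
        return self.model.predict(batch)

ds = ray.data.range(10000)
ds = ds.map_batches(
    BatchInferencer,
    compute="actors",                  # actor pool strategy
    fn_constructor_args=(model_ref,),  # ref resolved before __init__ runs
)
```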
In [PR](https://github.com/ray-project/ray/pull/24764) we moved the reconnection logic into GcsRPCClient. In case of a GCS failure, we queue the requests and resend them once GCS is back.
This breaks requests with timeouts, because such a request can now be queued and never receive a response. This PR fixes that.
Every request is now stored together with its deadline. While GCS is down, we check the queued requests, and if one has passed its deadline, we reply immediately with a Timeout error message.
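A generic Python sketch of the mechanism (the real fix lives in the C++ GcsRPCClient; all names here are illustrative):
```
import time
from collections import deque

class PendingRequestQueue:
    """Queue requests with their deadlines while GCS is unreachable."""

    def __init__(self):
        self._pending = deque()  # (absolute deadline, reply callback)

    def enqueue(self, timeout_s, reply):
        self._pending.append((time.monotonic() + timeout_s, reply))

    def sweep_expired(self):
        # Called periodically while GCS is down: reply with a Timeout
        # error to every request past its deadline instead of letting
        # it wait forever.
        still_pending = deque()
        for deadline, reply in self._pending:
            if time.monotonic() >= deadline:
                reply(TimeoutError("GCS is unavailable"))
            else:
                still_pending.append((deadline, reply))
        self._pending = still_pending
```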
- New doc section for autoscaling (introduces Serve autoscaling and its config parameters)
- Removes the version requirement note from the doc
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script that has a large amount of object spilling, if the disk runs out of space then Kubernetes will silently terminate the node. Autoscaling will kick in and replace the dead node. There is no indication that there was a failure due to disk space.
Instead, we should fail tasks with a good error message when the disk is full.
We monitor disk usage: when a node's disk usage grows past a predefined capacity threshold (e.g., 90%), we fail any new task/actor/object put that would allocate new objects.
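An illustrative sketch of the check (the threshold and path are assumptions, not Ray's actual configuration):
```
import shutil

DISK_CAPACITY_THRESHOLD = 0.90  # e.g., fail allocations above 90% usage

def disk_is_full(path="/tmp/ray", threshold=DISK_CAPACITY_THRESHOLD):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total > threshold

if disk_is_full():
    # Fail the new task/actor/object put with an actionable message
    # instead of letting the node die silently.
    raise RuntimeError("Out of disk: node disk usage exceeds capacity.")
```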
Task/actor/object summary
Tasks: Group by the func name. In the future, we will also allow grouping by task_group.
Actors: Group by actor class name. In the future, we will also allow grouping by actor_group.
Objects: Group by callsite. In the future, we will allow grouping by reference type or task state.
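A minimal illustration of the grouping (the task records here are made up, not the actual state API schema):
```
from collections import Counter

tasks = [
    {"func_name": "train_step"},
    {"func_name": "train_step"},
    {"func_name": "preprocess"},
]
# Summarize tasks by the function name they execute.
summary = Counter(t["func_name"] for t in tasks)
print(summary)  # Counter({'train_step': 2, 'preprocess': 1})
```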
Enables API annotation checking for the Ray core module (excluding serve, workflows, and tune) in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private, plus associated fixes.
Uses the async KV API for downloading in the runtime env agent. This avoids the complexity of running the runtime env creation functions in a separate thread.
Some functions are still sync, including the working_dir/py_modules upload, installing wheels, and possibly others.
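A hypothetical sketch of the pattern; `kv_client` and its coroutine method `get` are placeholders, not the agent's real interface:
```
async def download_runtime_env_uri(kv_client, uri: str) -> bytes:
    # With an async KV API, the agent awaits the fetch directly on its
    # event loop, keeping everything single-threaded.
    return await kv_client.get(uri)

# With a synchronous client, the same fetch had to be shipped to a
# worker thread to avoid blocking the event loop, e.g.:
#     data = await asyncio.get_event_loop().run_in_executor(None, sync_get, uri)
```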