For full context, see https://github.com/ray-project/ray/issues/21791.
pytest works for "some" environments for this test and on CI master, but this decorator is still unnecessary and was introduced by mistake. So we just remove it and see what happens with the original issue.
Support specifying a default lifetime for actors whose lifetime is not specified at creation time. This is a job-level configuration item.
#### API Change
The Python API looks like:
```python
ray.init(job_config=JobConfig(default_actor_lifetime="detached"))
```
The Java API looks like:
```java
System.setProperty("ray.job.default-actor-lifetime", defaultActorLifetime.name());
Ray.init();
```
One example usage is:
```python
ray.init(job_config=JobConfig(default_actor_lifetime="detached"))
a1 = A.options(lifetime="non_detached").remote() # a1 is a non-detached actor.
a2 = A.remote() # a2 is a detached actor (job-level default).
```
Co-authored-by: Kai Yang <kfstorm@outlook.com>
Co-authored-by: Qing Wang <jovany.wq@antgroup.com>
By default, sync only the checkpoint directory (~/ray_results/exp_name/trial_name/checkpoint_name) instead of the whole trial directory (~/ray_results/exp_name/trial_name/).
Files like progress.csv, result.json, params.pkl, params.json, and events.out come from the driver process.
This could also enable us to decouple sync-up and delete: they don't have to wait for each other to finish.
Currently, the GCS KV client only has a blocking API. Calling it from the dashboard event loop can block other operations for many seconds, leading to failures such as job submission taking too long (> 2 min) and nightly tests failing (#21699). This PR offloads the blocking work to a separate thread. Implementing an async GCS KV API will be done in the future.
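The general pattern is roughly the following minimal sketch, where `blocking_kv_get` and `kv_get` are hypothetical stand-ins for the real dashboard calls:
```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the real blocking GCS KV call.
def blocking_kv_get(key: bytes) -> bytes:
    ...

# A dedicated worker thread keeps blocking KV calls off the event loop.
_kv_executor = ThreadPoolExecutor(max_workers=1, thread_name_prefix="gcs_kv")

async def kv_get(key: bytes) -> bytes:
    # The event loop stays responsive while the KV call runs on the
    # executor thread.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_kv_executor, blocking_kv_get, key)
```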
This PR consolidates both #21667 and #21759 (see those PRs for the feature descriptions), but improves on them in the following ways:
- [x] We reverted the renaming of the existing projects `tune`, `rllib`, `train`, `cluster`, `serve`, `raysgd` and `data` so that links won't break. I think my consolidation efforts with the `ray-` prefix were a little overeager in that regard; it's better like this. Only the creation of `ray-core` was a necessity, and some files moved into the `rllib` folder, so that should be relatively benign.
- [x] Additionally, we added Algolia `docsearch` (screenshot below). This is _much_ better than our current search. Caveat: a Sphinx dependency (`sphinx-tabs`) needs to be replaced by a newer one (`sphinx-panels`), as the former prevents loading of the `algolia.js` library. I will follow up in the next PR (hoping this one doesn't get re-re-re-re-reverted).
Currently, the `unzip_package` function relies on `extract_file_and_remove_top_level_dir` to unzip and remove the top-level directory from archive working directories. However, `extract_file_and_remove_top_level_dir` uses `os.rename()` to remove the tld by manually unzipping each file from a zip file and moving it to the tld's parent. When the tld contains directories or files with the same name as the tld, `os.rename()` fails to move these files to the tld's parent because of the name collision between the file and the tld.
This change replaces `extract_file_and_remove_top_level_dir` with `remove_dir_from_filepaths`. Now, `unzip_package` unzips the entire zip file before `remove_dir_from_filepaths` moves all the tld's children to the tld's parent using `os.rename()`.
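A minimal sketch of the new approach (the names and the temporary-rename step are illustrative, not the exact implementation):
```python
import os
import uuid

def remove_top_level_dir(base_dir: str, tld: str) -> None:
    # Rename the tld to a unique temporary name first, so that a child
    # sharing the tld's name can be moved up without a collision.
    tmp_path = os.path.join(base_dir, str(uuid.uuid4()))
    os.rename(os.path.join(base_dir, tld), tmp_path)
    for child in os.listdir(tmp_path):
        os.rename(os.path.join(tmp_path, child), os.path.join(base_dir, child))
    os.rmdir(tmp_path)
```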
This edge case is tested in the new unit test `test_unzip_with_matching_subdirectory_names`. Additionally, `extract_file_and_remove_top_level_dir`'s unit test is replaced with `TestRemoveDirFromFilepaths`, which tests the new `remove_dir_from_filepaths` function.
`test_traceback.py` was recently taking ~55s to finish, and since today it has started to time out at 60s more frequently. All test cases do succeed, so increase its test timeout for now. We will look into whether there is any performance regression separately.
This is the second-to-last PR to improve the `ActorDiedError` exception.
It propagates the actor death cause metadata to the Ray error object, so that we can raise a more informative actor died exception.
After this PR is merged, I will add more metadata to each error message and write documentation that explains when each error happens.
TODO
- [x] Fix test failures
- [x] Add unit tests
- [x] Fix Java/C++ cases
Follow up PRs
- Disallow `nullptr` for the `RayErrorInfo` input.
This fixes the event stats segfault. I made a reliable repro script that produces 3-5 segfaults per `placement_group_mini_integration_test` run, and verified this fixes the issue by running it more than 5 times.
Note that when GCS HA is on, the same repro uncovered a different shutdown bug in pubsub (resubscription seems to happen, and idempotency is currently not supported). I will file a separate issue so that @mwtian can handle it.
## Root cause
The problem was that we were calling `gcs_client_->Disconnect()` from the task execution event loop. Note that `gcs_client_` uses `io_service` as its main event loop, meaning `Disconnect` was being called from a different thread.
If you look at the `Disconnect` method, it resets lots of pointers. Resetting pointers to attributes that other threads can still be using leads to a segfault.
This PR fixes the issue by not resetting the pointers there. Instead, we do this after joining the io_service thread, because at that point we can guarantee the GCS client won't be used by other threads (resetting `gcs_client_` is probably not necessary, but I added it as a safety check).
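The ordering, sketched in Python purely for illustration (the real code is C++, and the names here are made up):
```python
import threading

class CoreWorkerLike:
    """Illustrative sketch: join the io thread before resetting shared state."""

    def __init__(self):
        self.gcs_client = object()  # stands in for gcs_client_
        self.io_thread = threading.Thread(target=self._io_loop)
        self.io_thread.start()

    def _io_loop(self):
        pass  # event loop that may dereference self.gcs_client

    def shutdown(self):
        # 1. Join the io_service thread: afterwards no other thread can
        #    still be using the GCS client.
        self.io_thread.join()
        # 2. Only now is it safe to reset the shared state.
        self.gcs_client = None
```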
Note that the GCS client is currently written in a very complicated way, and we need some refactoring to make the thread safety and code structure cleaner. I will take on that task once I have bandwidth.
To let other systems or internal projects reuse Ray's bazel dependency functions, we need to change this local access style to global access with the ray-project namespace.
Co-authored-by: 林濯 <lingxuzn.zlx@antgroup.com>
The C++ API needs to be able to call Python and Java workers; this PR supports calling Python workers. Calling a Python worker is similar to calling a C++ worker: you pass `PyFunction`, `PyActorClass`, and `PyActorMethod`.
## Call a Python normal task
```python
# test_cross_language_invocation.py
import ray

@ray.remote
def py_return_input(v):
    return v
```
Calling the Python function from the C++ API:
```c++
auto py_obj1 =
    ray::Task(ray::PyFunction</*ReturnType*/int>{
                  /*module_name=*/"test_cross_language_invocation",
                  /*function_name=*/"py_return_input"})
        .Remote(42);
EXPECT_EQ(42, *py_obj1.Get());
```
The user needs to fill in the Python module name and function name, then pass the arguments into `Remote()`.
The user also needs to specify the return type and argument types of the Python function; these are used for compile-time type checking and for retrieving the result.
## Call a Python actor task
```python
# test_cross_language_invocation.py
import ray

@ray.remote
class Counter(object):
    def __init__(self, value):
        self.value = int(value)

    def increase(self, delta):
        self.value += int(delta)
        return str(self.value)
```
Calling the Python actor from the C++ API:
```c++
// Create a Python actor.
auto py_actor_handle =
    ray::Actor(ray::PyActorClass{/*module_name=*/"test_cross_language_invocation",
                                 /*class_name=*/"Counter"})
        .Remote(1);
EXPECT_TRUE(!py_actor_handle.ID().empty());
// Call a Python actor task.
auto py_actor_ret =
    py_actor_handle
        .Task(ray::PyActorMethod</*ReturnType*/std::string>{
            /*actor_function_name=*/"increase"})
        .Remote(1);
EXPECT_EQ("2", *py_actor_ret.Get());
```
The user needs to fill in the Python module name and class name when creating the Python actor.
`PyActorMethod` only needs the function name.
This is similar to calling a C++ actor task, and likewise has compile-time type checking.
GCS pubsub uses long polling, so the subscriber waits instead of returning None from polling when there is no buffered log. It needs a different heuristic to decide whether the driver is not keeping up with logs from the worker.
This PR adds four tests for many-task workloads:
- many short tasks sent from a single node
- many short tasks sent from multiple nodes
- many long tasks sent from multiple nodes
- many long tasks sent from a single node
TODO: migrate the many-nodes actor tests to this one.
The scheduling envelope should contain:
- (tasks): scheduling_test_many_xx_tasks_yy_nodes
- (actors): many_nodes_actor_test (to be combined with this one)
- (shuffle): pipelined_ingestion_1500_gb_15_windows
- (shuffle): dask_on_ray_1tb_sort
Making some minor fixes.
1. Update the input `batch_size` to be the global batch size, and introduce `worker_batch_size` so that each iteration trains the same global batch size (see the sketch after this list).
2. Update the dataset `size` calculation to only count the fraction of the data trained on each worker. This makes calculations (e.g. training progress, accuracy) correct.
3. Add `model.train()` for generality.
4. Remove the `smoke-test` flag since it's not really being used.
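A minimal sketch of the batch size split from item 1 (assuming `num_workers` evenly divides the global `batch_size`; the names are illustrative):
```python
batch_size = 256  # global batch size requested by the user
num_workers = 4

# Each worker trains an equal shard, so one iteration across all
# workers still covers exactly `batch_size` samples.
worker_batch_size = batch_size // num_workers  # 64 per worker
assert worker_batch_size * num_workers == batch_size
```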
- Tolerate gRPC deadline-exceeded errors and transient failures in Python GCS AIO subscribers, consistent with Python GCS synchronous subscribers (see the sketch below).
- Tolerate any exception in the dashboard when subscribing to logs and error info, consistent with how the dashboard handles gRPC errors when obtaining node stats.
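A minimal sketch of the first point (`poll_once` and `handle_message` are hypothetical callables; the point is that retryable gRPC errors no longer kill the subscriber):
```python
import asyncio
import grpc

RETRYABLE = (grpc.StatusCode.DEADLINE_EXCEEDED, grpc.StatusCode.UNAVAILABLE)

async def poll_forever(poll_once, handle_message):
    while True:
        try:
            handle_message(await poll_once())
        except grpc.RpcError as e:
            if e.code() not in RETRYABLE:
                raise
            await asyncio.sleep(1)  # back off briefly, then keep polling
```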
This PR is pre-work before actually fixing a thread-safety bug in shutdown.
It does the following:
- Add better logging upon core worker shutdown.
- Improve document around core worker shutdown.
- Remove unnecessary pointer usage from periodical runner for clean destruction order.
- Remove the unnecessary `WaitForShutdown` API and fold it into a single `Shutdown` API.
On machines without GPUs, the GPU usage check can run subprocesses that spew to stderr. Then, with `log_to_driver=True`, we get log spew from every single raylet. To avoid this, disable the GPU usage check on certain errors.
Resolves #14305
Co-authored-by: hauntsaninja <>
When a node dies, the reference table should remove the locations of the objects on that node. Otherwise, locality-aware scheduling will schedule tasks onto the dead node.
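A minimal sketch of the pruning, assuming a simple map from object ID to the node IDs holding a copy (the names are illustrative):
```python
from collections import defaultdict

# object_locations: object ID -> set of node IDs holding a copy.
object_locations = defaultdict(set)

def on_node_dead(dead_node_id: str) -> None:
    # Drop the dead node everywhere so locality-aware scheduling can
    # never pick it as a target again.
    for nodes in object_locations.values():
        nodes.discard(dead_node_id)
```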
This is a minimum viable product for Ray Autoscaler integration with KubeRay. It is not ready for prime time or general use, but should be enough for interested parties to get started (see the documentation in kuberay.md).
I updated this version compatibility table on the release branch but didn't update it on master. This was my mistake; the process is to make a PR to master and then cherry-pick that commit to the release branch.
When GCS runs as an individual component, its crashes can cause other components to fail.
Here are the two main cases covered in this patch:
1. monitor.py raises an exception when disconnected from GCS.
2. When GCS becomes available later than other components, the missing KV entry for the GCS address can cause those components to fail to start.
In this patch, we fix these two issues and also increase the timeout for the Redis connection, which was too small.
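For case 2, the fix amounts to retrying instead of failing fast; a minimal sketch, assuming a hypothetical `get_gcs_address` KV lookup that returns `None` while GCS is still coming up:
```python
import time

def wait_for_gcs_address(get_gcs_address, timeout_s: float = 60.0) -> str:
    # Retry until GCS publishes its address to KV, rather than failing
    # immediately when GCS starts later than this component.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        address = get_gcs_address()
        if address is not None:
            return address
        time.sleep(1.0)
    raise TimeoutError("GCS address not available before timeout")
```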
Co-authored-by: Mingwei Tian <mwtian@anyscale.com>