This is a follow-up PR to https://github.com/ray-project/ray/pull/24813 and https://github.com/ray-project/ray/pull/24628
Unlike the change in the C++ layer, where GCS broadcasts a request to the raylet/core_worker and the client side performs the resubscription, in the Python layer we detect the failure on the client side.
In case of a failure, the protocol is (a sketch follows the list):
1. call subscribe
2. if the resubscription times out, throw an exception, which will crash the process. This is OK because if GCS has been down longer than expected, we expect the whole Ray cluster to be down.
3. continue to poll once subscribe succeeds.
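A minimal sketch of this client-side protocol in Python; `subscriber.poll()`, `subscriber.subscribe()`, and the timeout constant are illustrative stand-ins, not the actual `ray._private.gcs_pubsub` API:
```python
import time

RESUBSCRIBE_TIMEOUT_S = 60  # illustrative; the real timeout comes from Ray's config

def poll_forever(subscriber, handle_message):
    # `subscriber` is assumed to expose poll() and subscribe(); these names
    # are stand-ins, not the actual Ray API.
    while True:
        try:
            handle_message(subscriber.poll())  # long-poll GCS for the next message
        except ConnectionError:
            # Failure detected on the client side: resubscribe (step 1).
            deadline = time.monotonic() + RESUBSCRIBE_TIMEOUT_S
            while True:
                try:
                    subscriber.subscribe()
                    break  # step 3: resume polling once subscribe succeeds
                except ConnectionError:
                    if time.monotonic() > deadline:
                        # Step 2: GCS has been down longer than expected;
                        # raising here intentionally brings the process down.
                        raise
                    time.sleep(1)
```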
However, there is an extreme case where things might break: the client might fail to detect the failure.
This could happen if a long-poll request has returned and the Python layer is doing its own work; before it sends the next long-poll request, GCS restarts and recovers.
Here we are not going to take care of this case because:
1. usually GCS takes several seconds to come back up, and the Python layer's work is simply pushing data into a queue (sync version). The async version is only used in the Dashboard, which is not a critical component.
2. pubsub in the Python layer is not doing critical work: it handles logs/errors for Ray jobs;
3. for the Dashboard, a restart fixes the issue.
A known issue here is that we might miss logs in case of GCS failure, for the following reasons:
- the Python pubsub layer only does best-effort publishing; if publishing fails too many times, it skips the message (messages are lost on the producer side)
- if a message is pushed to GCS before the worker has resubscribed, the pushed message is lost (messages are lost on the consumer side)
We think this is reasonable and valid behavior, given that logs are not defined to be a critical component and we'd like to keep the design of pubsub in GCS simple.
Another thing is `run_functions_on_all_workers`. We plan to stop using it within Ray core and deprecate it in the longer term. It won't cause a problem for the current cases because:
1. It's only set in driver and we don't support creating a new driver when GCS is down.
2. When GCS is down, we don't support starting new ray workers.
And `run_functions_on_all_workers` is only used when we initialize driver/workers.
Packages are uploaded to the GCS for `runtime_env`. These packages are garbage collected when their refcount becomes zero.
The problem is the reference doesn't get incremented until the job starts, which happens after the package is uploaded. It's possible for the package's refcount to go to zero in between the upload and when the job starts, causing the package to be deleted before it's needed by the job. It's likely the cause of https://github.com/ray-project/ray/issues/23423.
We can't just increment the refcount at the time of upload, because if the script is killed before the job is started (e.g. via Ctrl-C) then the reference will never be decremented and the package will never be deleted.
The solution in this PR is to increment the refcount at the time of upload, but automatically decrement after a configurable timeout (default 30s). This should be enough time for the job to start. When the job starts, it increments the refcount as usual and decrements it when the job finishes or is killed.
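A minimal sketch of the upload-time pin with a timed decrement, assuming a simple in-process refcount table; the class and method names are hypothetical, not Ray's actual internals:
```python
import threading

UPLOAD_PIN_EXPIRATION_S = 30  # the configurable timeout described above (default 30s)

class PackageRefTable:
    # Hypothetical sketch; class/method names are not Ray's actual internals.
    def __init__(self, delete_package):
        self._lock = threading.Lock()
        self._refcounts = {}
        self._delete_package = delete_package  # callback that removes the blob from GCS

    def increment(self, uri):
        with self._lock:
            self._refcounts[uri] = self._refcounts.get(uri, 0) + 1

    def decrement(self, uri):
        with self._lock:
            self._refcounts[uri] -= 1
            if self._refcounts[uri] == 0:
                del self._refcounts[uri]
                self._delete_package(uri)  # refcount hit zero: garbage collect

    def pin_on_upload(self, uri):
        # Take a temporary reference at upload time so the package cannot be
        # collected before the job starts, then drop it automatically after
        # the timeout. The job takes its own reference when it starts.
        self.increment(uri)
        timer = threading.Timer(UPLOAD_PIN_EXPIRATION_S, self.decrement, args=(uri,))
        timer.daemon = True
        timer.start()
```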
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Looking at past failures of dataset_shuffle_push_based_random_shuffle_1tb, and when running it on my own, I noticed that raylets were killed because GCS was not able to respond to them in time. At the beginning of the run there is a huge CPU spike that starves GCS of CPU. In the same spirit as giving workers higher OOM scores, we can give workers a higher niceness so they yield CPU to GCS, raylet, and other user processes.
I ran dataset_shuffle_push_based_random_shuffle_1tb a few times and no longer saw raylet deaths caused by GCS CPU starvation. There are other issues making the test fail, which I will continue to investigate.
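A minimal sketch of one way to launch a worker at a higher niceness (POSIX only); the niceness value and helper name are illustrative, not the exact change made here:
```python
import os
import subprocess
import sys

WORKER_NICENESS = 15  # illustrative value; higher niceness = lower CPU priority

def start_worker(cmd):
    # preexec_fn runs in the child before exec, so only the worker process is
    # reniced; GCS, raylet, and user processes keep their priority.
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(WORKER_NICENESS))

if __name__ == "__main__":
    proc = start_worker(
        [sys.executable, "-c", "import os; print('worker niceness:', os.nice(0))"])
    proc.wait()
```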
Builds are currently failing because `mirror.bazel.build`'s SSL certificate expired. This PR adds another bazel mirror to avoid this problem.
Builds are still failing because https://github.com/jupp0r/prometheus-cpp explicitly lists `mirror.bazel.build`.
commit 40774ac219
Author: Qing Wang <kingchin1218@gmail.com>
Date: Tue May 17 11:33:59 2022 +0800
Minor changes for Java runtime env. (#24840)
The commit above introduced an extra log message that spams stdout when running with Tune. This PR moves the log line to debug level and adds an e2e test check.
When assigning an owner to an object that is different from the current worker, such as:
```python
ray.put(value, _owner=ACTORHANDLE)
```
the Object Manager holds the wrong owner's address and updates location info on the wrong worker, making `ray.get` slow. The current master hits a timeout in this new test case.
This fixes the mysterious error where every cluster env build fails when `pip uninstall` / `pip install` are written on two lines. The root cause will be fixed later.
Add API stability annotations for datasource classes, and add a linter to check that all Datasets classes have appropriate annotations.
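For illustration, this is the kind of annotation the linter looks for, using Ray's `PublicAPI`/`DeveloperAPI` decorators; the class names below are made up:
```python
from ray.util.annotations import DeveloperAPI, PublicAPI

# The datasource names below are made up; each real class in the PR gets the
# annotation matching its intended stability level.
@PublicAPI(stability="beta")
class MyDatasource:
    ...

@DeveloperAPI
class MyDatasourceReader:
    ...
```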
Compared with a push-based model, long-polling is slower because to send a message you have to wait until a polling request arrives. This PR improves on this by keeping multiple polling requests outstanding, so that up to 10 requests can be in flight at a time, which improves performance (a sketch of the pattern follows the numbers below). Tested with `many_nodes_actor_tests` with no regression:
```
Actor launch time: 1.6540390349998688 (5000 actors)
Actor ready time: 13.953653221999957 (5000 actors)
Total time: 15.607692256999826 (5000 actors)
```
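A sketch of the in-flight request pattern using asyncio; `client.long_poll()` is a hypothetical coroutine standing in for the actual GCS polling RPC:
```python
import asyncio

MAX_INFLIGHT_POLLS = 10  # matches the "at most 10 requests in flight" above

async def poll_loop(client, handle_message):
    # `client.long_poll()` is a hypothetical coroutine standing in for the
    # actual GCS polling RPC.
    pending = {asyncio.create_task(client.long_poll())
               for _ in range(MAX_INFLIGHT_POLLS)}
    while True:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
        for fut in done:
            handle_message(fut.result())
            # Replace each finished request right away so the number of
            # in-flight polls stays constant.
            pending.add(asyncio.create_task(client.long_poll()))
```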
1. [X] Deprecate old code path for publish node resource change
2. [X] Move poller and broadcaster into RaySyncer
3. [X] Deprecate the old pg reporting code path
4. [X] Remove syncer from gcs cluster resource manager and encapsulate everything in syncer module.
5. [X] Versioning the report
6. [ ] Introduce the Reporter/Receiver API in the prototype and adapt raylet and gcs to it.
7. [ ] Experiment with protocol & communication layer change.
This test is not running well in Redis mode. Given that the other tests are OK, I'd like to disable only this one instead of reverting the whole commit, to make sure the other tests don't regress.
`linux://python/ray/tests:test_runtime_env_plugin::test_plugin_timeout`
Currently the Actor Manager subscribes to actor status eagerly: when worker A passes a List<ActorHandle> as an input parameter to another worker B, worker B immediately subscribes to the status of all actors in this list at construction time, even if it has not used these actors yet.
Assuming a graph job has 1000 actors, and each actor holds a List of the graph's actors, the job has nearly 1,000,000 subscription relationships. When the job goes offline, the 1000 actor processes are killed and the redis-server instantly receives disconnect events from all 1000 of them; each event triggers 1000 unsubscribexxx operations in freeClient, causing the redis-server to get stuck.
We suggest changing this eager mode to lazy mode, only initiating the subscription in `SubmitActorTask`, which eliminates many unnecessary subscription relationships (see the sketch below).
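A Python-flavored sketch of the proposed lazy mode; `subscribe_actor_status` and `send_task` are hypothetical stand-ins for the core-worker internals:
```python
class LazyActorManager:
    # Sketch only; names are stand-ins for the core-worker internals touched
    # by this change.
    def __init__(self, actor_handles):
        # Eager mode would subscribe to every handle here, at construction time.
        self._handles = list(actor_handles)
        self._subscribed = set()

    def submit_actor_task(self, actor_id, task_spec):
        # Lazy mode: subscribe only when the first task is submitted to the
        # actor, so handles that are merely passed around create no
        # subscription relationships.
        if actor_id not in self._subscribed:
            subscribe_actor_status(actor_id)
            self._subscribed.add(actor_id)
        return send_task(actor_id, task_spec)

def subscribe_actor_status(actor_id):
    pass  # placeholder for the GCS actor-status subscription

def send_task(actor_id, task_spec):
    pass  # placeholder for the actual submission path
```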
The microbenchmark (Left is this PR, Right is master branch)

Python 3.10 fixed a bug where an interactively defined class was tagged with the wrong type during inspection; inspecting such a class now throws `OSError` (see python/cpython#27171 for details).
We need to handle this case properly, otherwise Ray actor definitions will throw in interactive mode. Please refer to #25026 for a repro.
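A minimal sketch of tolerating the new behavior; the fallback shown is illustrative, not necessarily what Ray does internally:
```python
import inspect

def try_get_source(cls):
    # On Python 3.10+, inspect.getsource raises OSError for classes defined
    # interactively (REPL/notebook) instead of returning the wrong source.
    try:
        return inspect.getsource(cls)
    except (OSError, TypeError):
        return None  # caller falls back to serializing the class by value
```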
This PR fixes several features of the GCS actor scheduler:
1. Report pending actor info in GCS so that `HandleGetAllResourceUsage` is able to export the whole picture (including worker nodes and GCS). Otherwise, external components such as the autoscaler cannot work properly.
2. In `ClusterResourceScheduler`, actors for which the GCS scheduler cannot find available nodes should stay in the pending queue of `ClusterTaskManager`.
3. If using the GCS scheduler, the PG's wildcard resources have to be updated **incrementally** when committing bundles.
In the next PR, we will fix all remaining trivial issues and enable the GCS scheduler to pass the entire testing pipeline.
Co-authored-by: Chong-Li <lc300133@antgroup.com>
Adds file locking to Tune/AIR syncing functions to prevent parallel file system operations.
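A minimal sketch of the locking pattern using the third-party `filelock` package; the actual Tune/AIR helper and lock path may differ:
```python
import shutil
from filelock import FileLock  # third-party `filelock` package

def locked_sync_dir(source: str, target: str) -> None:
    # The lock file lives next to the target so every syncer touching the
    # same directory contends on the same lock.
    with FileLock(target.rstrip("/") + ".lock", timeout=60):
        shutil.copytree(source, target, dirs_exist_ok=True)
```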
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Weights & Biases creates a `wandb` directory to collect intermediate logs and artifacts before uploading them. This directory should live in the respective trial directories. This also means we can re-enable auto-resuming.
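A minimal sketch of pointing wandb at the trial directory via `wandb.init(dir=...)`; the helper name is hypothetical:
```python
import wandb

def setup_wandb_for_trial(trial_logdir: str, **wandb_kwargs):
    # Keep wandb's intermediate logs/artifacts inside the trial directory so
    # they are synced (and resumed) together with the trial's other outputs.
    return wandb.init(dir=trial_logdir, **wandb_kwargs)
```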
When `create_scheduler("pb2", ....)` is run, a `TuneError` exception is raised. See the referenced issue below for details.
In addition to the fix, introduced a new test (`ray/tune/tests/test_api.py::ShimCreationTest.testCreateAllSchedulers`) to confirm that `tune.create_scheduler()` will work with all documented schedulers.
Note: `testCreateAllSchedulers` is a superset of what is covered in `testCreateScheduler`. It may be reasonable to retire the latter test.
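The usage pattern exercised by the new test looks roughly like this; the PB2 arguments are illustrative:
```python
from ray import tune

# Construct the PB2 scheduler through the shim; the bounds/interval below are
# illustrative values, not part of the fix itself.
pb2 = tune.create_scheduler(
    "pb2",
    perturbation_interval=10,
    hyperparam_bounds={"lr": [1e-4, 1e-1]},
)
```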