This allows us to attach protobuf-defined metadata to the error object, so that we can propagate meaningful metadata (e.g., the function name for ObjectLostError, the IP address for an ObjectLostError raised within the raylet, or various useful fields for ActorDiedError).
### Impl
We will allow the error object to include a "payload". The payload is a serialized protobuf message that carries the metadata.
```
# Prev
ACTOR_DIED (metadata) | (empty)
# New
ACTOR_DIED (metadata) | Serialized protobuf message (body)
```
Note that the body is currently a serialized MessagePack envelope that contains the serialized protobuf. This needs to be cleaned up in the future.
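As a rough sketch of what consuming such a payload could look like (the proto module path and message name are assumptions, and the MessagePack unwrapping mirrors the note above):
```
import msgpack
from ray.core.generated.common_pb2 import RayErrorInfo  # module path assumed

def unpack_error_payload(body: bytes) -> RayErrorInfo:
    # Today the body is a MessagePack envelope around the serialized
    # protobuf, so unwrap it first (this is what should be cleaned up).
    serialized_pb = msgpack.unpackb(body)
    info = RayErrorInfo()
    info.ParseFromString(serialized_pb)  # decode the protobuf metadata
    return info
```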
Adds a working failure test for streaming and non-streaming shuffle, without lineage reconstruction. This does a few things.
Test improvements:
- modifies AutoscalingCluster to allow passing an idle node timeout (the default is very low)
- some small improvements to the NodeKiller actor to hopefully reduce flakiness.
Shuffle fixes:
- modifies the shuffle tracker to wait on task futures instead of having tasks signal completion. During failures, a task may die before it ever signals the tracker, so we can't rely on signals to track progress (see the sketch below).
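A minimal sketch of the futures-based tracking pattern (illustrative only, not the actual tracker code):
```
import ray

def wait_for_shuffle_tasks(refs):
    # Wait on the tasks' ObjectRefs directly: unlike task-side signaling,
    # ray.wait() surfaces both completion and failure.
    remaining = list(refs)
    while remaining:
        done, remaining = ray.wait(remaining, num_returns=1)
        ray.get(done)  # raises if the task failed, so failures aren't missed
```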
Core fixes:
- the raylet now exits immediately if it receives the Shutdown RPC with graceful=False. There was a bug here: the raylet was supposed to exit after replying to the client, but the gRPC server went down for an unknown reason and the client reply was never sent.
- On reference deletion, the owner now publishes an additional message to subscribers that the object has been deleted. Previously, streaming shuffle could hang because raylets pulling an object subscribed after the object was already deleted, so they never received the error signal.
In `sample_boundaries`, naive concatenation with `np.concatenate()` doesn't work when the single-column sample blocks have varying lengths (e.g., when the original dataset had non-uniform blocks). This PR fixes this by delegating concatenation and NumPy array conversion to the block builder and block accessor, respectively (sketched below).
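Roughly, the fixed pattern looks like the following sketch (the internal module paths are assumptions and have moved between Ray versions):
```
from ray.data.block import BlockAccessor
from ray.data.impl.delegating_block_builder import DelegatingBlockBuilder

def concat_samples_to_numpy(sample_blocks):
    # Let the block builder concatenate sample blocks of varying lengths,
    # then let the block accessor handle the NumPy conversion.
    builder = DelegatingBlockBuilder()
    for block in sample_blocks:
        builder.add_block(block)
    merged = builder.build()
    return BlockAccessor.for_block(merged).to_numpy()
```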
Uses Ray pubsub to publish and subscribe to logs via GCS, from the Python worker, log importer, dashboard, and unit tests.
This change is guarded behind the RAY_gcs_grpc_based_pubsub feature flag.
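For example (a sketch, assuming the flag is consumed as a standard `RAY_`-prefixed system-config environment variable), the flag can be enabled before starting Ray:
```
import os

# Must be set before ray.init() / `ray start` so all Ray processes see it.
os.environ["RAY_gcs_grpc_based_pubsub"] = "1"

import ray
ray.init()
```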
Datasets docs for last-mile preprocessing, particularly geared towards ML ingest. This adds groupby, aggregation, and random shuffling examples to the overview page (not present previously), adds some concreteness to our last-mile preprocessing positioning, and provides preprocessing recipes for a few common transformations.
This PR does two things:
- merges the latest groupby-based filtering into CUJ2
- adds a debug mode so we only run the dummy trainer, to measure data processing performance.
This is a minor update to our release sanity check script so that it runs out of the box on M1. Since M1s only support Python 3.8 and 3.9, we shouldn't try to install Python 3.6 or 3.7 (see the sketch below).
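The guard is along these lines (a sketch with a hypothetical helper, not the actual script):
```
import platform

M1_SUPPORTED_VERSIONS = {"3.8", "3.9"}

def python_versions_to_check(all_versions):
    # Apple Silicon reports "arm64"; skip versions with no M1 support.
    if platform.machine() == "arm64":
        return [v for v in all_versions if v in M1_SUPPORTED_VERSIONS]
    return list(all_versions)
```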
This PR adds support for publishing and subscribing to logs in Python via GCS pubsub. It also refactors the Python threaded subscriber to support subscribing and calling `close()` from multiple threads (see the sketch below).
We could also move the tests and logging support to another PR, but that would make the purpose of the refactoring seem less obvious.
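The thread-safety pattern is roughly the following (a sketch, not the actual subscriber class): shutdown state is guarded by a lock and `close()` is idempotent, so it is safe to call from any thread:
```
import threading

class ThreadedSubscriber:
    def __init__(self):
        self._lock = threading.Lock()
        self._closed = False

    def closed(self) -> bool:
        with self._lock:
            return self._closed

    def close(self) -> None:
        with self._lock:
            if self._closed:
                return        # idempotent: later calls are no-ops
            self._closed = True
        # ... cancel in-flight polls and join worker threads here ...
```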
Arbitrary API access is rampant at the moment, and it is hard to correct in one go. This is a necessary incremental step towards a cleaner API.
## Why are these changes needed?
Replace the existing temp file, to avoid the issue where a previous worker dies and leaves its temp file behind, preventing subsequent workers from writing a new temp file because one already exists.
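Illustratively (a Python sketch of the overwrite-on-rename pattern, not the actual change): write to a unique file and then atomically replace the destination, so a stale file left by a dead worker no longer blocks new workers:
```
import os
import tempfile

def write_over_stale_file(path: str, data: bytes) -> None:
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.replace() overwrites an existing file atomically, so a stale
        # file from a dead worker does not block this write.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```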
- Moves debug_state.txt to the log directory. This will help us find debug_state.txt from the dashboard.
- Adds debug_state_gcs.txt, which displays GCS' debug state. GCS also dumps its debug state to this file every 10 seconds.
- Changes the periodic printing of debug state to every 1 minute, because every 10 seconds is usually very spammy.
## Why are these changes needed?
ThreadPoolManager and FiberStateManager have the same functionality and logic. This PR removes the duplicated implementations.
It adds a ConcurrencyGroupExecutor class that implements the shared logic: `ConcurrencyGroupExecutor<FiberState>` replaces FiberStateManager, and `ConcurrencyGroupExecutor<BoundedExecutor>` replaces ThreadPoolManager (a language-shifted sketch follows).
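A language-shifted sketch of the consolidation (the real class is a C++ template; the names below mirror it loosely):
```
class ConcurrencyGroupExecutor:
    """One implementation, parameterized by how each group runs tasks."""

    def __init__(self, group_specs, make_executor):
        # make_executor plays the role of the template parameter: a factory
        # for FiberState-like or BoundedExecutor-like per-group executors.
        self._executors = {
            name: make_executor(max_concurrency)
            for name, max_concurrency in group_specs.items()
        }

    def submit(self, group_name, fn, *args):
        return self._executors[group_name].submit(fn, *args)
```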
This reverts commit f13c2a5350, re-landing the removal of the PG caching logic.
As a result, the PBT scheduler can no longer stop and start a trial within itself for weight transfer and perturbation. This PR therefore makes the following changes to the PBT scheduler:
1. The trial being perturbed is always left in a PAUSED state upon exiting on_trial_result. Instead of maintaining two separate paths for replacing a trial, we consolidate to always "stop" and "restore", relying on reuse_actor as an optimization when available (see 2).
2. Consolidates PBT's trial replacement with reuse_actor.
3. Introduces a NOOP scheduler decision to indicate that the (PBT) scheduler has finished its interaction with the executor, so no further decision is needed in the Tune loop (see the sketch after this list).
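A minimal sketch of the decision constants (the existing ones live on Tune's TrialScheduler; NOOP is the addition from this PR):
```
class TrialScheduler:
    CONTINUE = "CONTINUE"  # keep running the trial
    PAUSE = "PAUSE"        # checkpoint and pause the trial
    STOP = "STOP"          # terminate the trial
    NOOP = "NOOP"          # the scheduler already handled everything via
                           # the executor; the Tune loop takes no action
```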
Long term, we should tighten the interface between the scheduler and the executor. For example, passing the whole runner into on_trial_result is more API exposure than we want.
- Removes the scale_to logic from the object store. We don't need to scale during tests, which disambiguates infra failures from app failures.
- Runs the microbenchmark in core nightly, meaning it will run even more often.
- Runs the weekly scalability tests daily instead (they are not too expensive).
- Runs some core daily tests separately to avoid infra failures.
## Why are these changes needed?
When the Java multi-worker feature is on and workers respond to `Exit` requests from the worker pool with delays (even longer than the interval of `TryKillingIdleWorkers`), the worker pool may send additional `Exit` requests to workers before receiving replies to previous ones. This leads to a `RAY_CHECK` failure here:
60df705b4e/src/ray/raylet/worker_pool.cc (L984)
due to executing two reply callbacks in a row.
This PR fixes the bug by ensuring the worker pool only sends a new `Exit` request to a worker when there are no in-flight `Exit` requests to any worker of that worker process (see the sketch below).
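The guard is conceptually the following (a Python sketch with hypothetical names; the actual fix is in worker_pool.cc):
```
def try_killing_idle_worker(worker_process):
    # Skip if an Exit request is already in flight for this worker process;
    # a new request is only sent once the previous reply has arrived.
    if worker_process.pending_exit:
        return
    worker_process.pending_exit = True

    def on_exit_reply(success):
        # Clear the flag so a later TryKillingIdleWorkers pass may retry.
        worker_process.pending_exit = False

    send_exit_rpc(worker_process, callback=on_exit_reply)
```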
This PR adds the precise reason why an actor died to the `ActorTable`. The `death cause` stored in the table will be propagated to the core worker through pubsub, so that the core worker can eventually raise a good error message with metadata.
## Why are these changes needed?
If max concurrency is 1 in the default group, a blocking task executing in the default group will block subsequent tasks in other groups. See the reproduction script in #20475.
The issue is that tasks in the default concurrency group run on the main task execution thread, so tasks in other concurrency groups are blocked whenever the main task execution thread is blocked.
This PR changes concurrent actor behavior only so that the default group no longer blocks other groups (example below).
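For illustration, a small example in the spirit of the repro in #20475 (the behavior described is the post-fix behavior):
```
import time
import ray

ray.init()

@ray.remote(concurrency_groups={"io": 1})
class Worker:
    def blocking_call(self):
        # Runs in the default group (max concurrency 1) and blocks it.
        time.sleep(3600)

    @ray.method(concurrency_group="io")
    def f(self):
        return "still responsive"

w = Worker.remote()
w.blocking_call.remote()
# After this PR, f() runs in the "io" group and is not blocked by the
# default group; previously this ray.get() would hang.
print(ray.get(w.f.remote()))
```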
## Related issue number
Fixes #20475
## Why are these changes needed?
In this PR, instead of passing the specific "creation_task_exception", we pass a RayErrorInfo. This allows us to pass any type of error metadata to MarkTaskReturnObjectFailed.
This PR is essentially a refactoring.
## Related issue number
https://github.com/ray-project/ray/issues/20534
This PR mostly implements a "fixture" for the nightly tests. Note that the current fixture implementation is not great; we can probably improve it in the future after refactoring e2e.py.
* [job submission] Use specific redis_address and redis_password instead of "auto" (#20687)
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>