We shouldn't ray.get() all the blocks at once during the to_pandas call; it's better to fetch them one by one. That's a little slower, but to_pandas() isn't expected to be fast anyway.
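A minimal sketch of the one-by-one approach, assuming `blocks` is a list of ObjectRefs that each resolve to a pandas DataFrame (the names here are illustrative):

```python
import pandas as pd
import ray

def to_pandas(blocks):
    # ray.get(blocks) would trigger fetching every block at once; calling
    # ray.get per block fetches them sequentially instead.
    dfs = [ray.get(block) for block in blocks]
    return pd.concat(dfs, ignore_index=True)
```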
This PR addresses two issues with the Session.get_next docstring:
It's unclear whether the summary should be written in the imperative or non-imperative mood. The Ray documentation states that we use Google style (which is non-imperative), but we format using PEP8 (which is imperative). Moreover, we use both imperative and non-imperative summaries across the Ray Train code. I've stuck with a non-imperative summary for consistency with the rest of the Session class.
The docstring doesn't describe the conditions under which the function returns None.
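For illustration, a non-imperative, Google-style docstring addressing both points might look roughly like this (the wording and the exact None condition are placeholders, not the PR's final text):

```python
def get_next(self):
    """Gets the next result from the session's result queue.

    Note the non-imperative summary ("Gets", not "Get"), matching the
    rest of the Session class.

    Returns:
        The next intermediate result reported by the training function,
        or None if the session has finished and no results remain.
    """
```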
When measuring autoscaler performance, it's useful to know how long the autoscaler takes per update iteration.
This PR adds a Prometheus metric for that.
The histogram's bucket boundaries are arranged in powers of ten:
[.001, .01, .1, 1, 10, 100, 1000] (seconds). Depending on the situation, we've seen the update time range from .1 seconds to a few minutes.
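For reference, a minimal sketch with prometheus_client (Ray's internal metrics plumbing differs, and the metric name here is illustrative):

```python
from prometheus_client import Histogram

AUTOSCALER_UPDATE_TIME = Histogram(
    "autoscaler_update_time_seconds",  # illustrative name
    "Wall-clock duration of one autoscaler update iteration.",
    buckets=[0.001, 0.01, 0.1, 1, 10, 100, 1000],
)

def run_update_iteration():
    ...  # placeholder for the autoscaler's actual update logic

with AUTOSCALER_UPDATE_TIME.time():  # observes the elapsed time on exit
    run_update_iteration()
```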
Separate the CoreWorkerProcess static functions from CoreWorkerProcess state. Currently the static and non-static state are mixed together, and, more importantly, the static state is not thread safe. By separating them and creating a helper class, CoreWorkerProcessImpl, for the non-static state, we can make it thread safe.
In a follow-up PR we will make the CoreWorkerProcess state thread safe.
This PR depends on #19677. The follow-up PR is #19679.
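As a rough, Python-flavored illustration of the pattern (the actual change is in C++, and these class shapes are simplified stand-ins):

```python
import threading

class CoreWorkerProcessImpl:
    """Owns all the formerly non-static state."""
    def __init__(self):
        self.workers = {}

class CoreWorkerProcess:
    """Keeps only the static entry points, guarded by a lock."""
    _lock = threading.Lock()
    _impl = None  # the only piece of static state left

    @classmethod
    def initialize(cls):
        with cls._lock:
            if cls._impl is None:
                cls._impl = CoreWorkerProcessImpl()

    @classmethod
    def shutdown(cls):
        with cls._lock:
            cls._impl = None
```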
We fixed the groupby issue in cuj2; this syncs the change into the nightly test. The test doesn't need to use a GPU at all, and it returns soon after data ingestion finishes.
Quotation marks were needed in Anyscale app configs to avoid install errors when `#` characters were used, e.g. in URLs.
Since this has been fixed on the Anyscale side, we can get rid of these.
I have an AMD machine with many cores and 32GB of memory. When I do `pip install -e .`, my machine crashes: bazel tries to use all the cores but quickly runs out of memory. There seems to be no native way to tell bazel via environment variables to limit its resource consumption, but there is a `--local_cpu_resources` command-line option.
This PR exposes that option to `pip install` via an environment variable. I also went through setup.py and documented all the environment variables I could find.
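A hedged sketch of the mechanism (the environment variable name and bazel target below are illustrative, not necessarily what the PR uses):

```python
import os
import subprocess

bazel_flags = []
cpu_limit = os.environ.get("BAZEL_LIMIT_CPUS")  # illustrative variable name
if cpu_limit is not None:
    # Cap the number of CPUs bazel may use for local actions.
    bazel_flags.append(f"--local_cpu_resources={int(cpu_limit)}")

subprocess.check_call(["bazel", "build", *bazel_flags, "//:ray_pkg"])  # illustrative target
```

With something like this in setup.py, `BAZEL_LIMIT_CPUS=8 pip install -e .` would cap the build at 8 cores.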
Object metadata are fully managed by workers now, so the related protos and logic in GCS are obsolete. Most of the logic has been removed in https://github.com/ray-project/ray/pull/19963. This PR removes some remaining obsolete protos.
Adds a set_max_concurrency method to the Searcher API. This method allows the ConcurrencyLimiter to override the max_concurrency value on searchers with custom internal logic for limiting concurrency (at the moment, SigOpt and HEBO). This PR also changes the initialisation of the SigOpt and HEBO optimisers to happen in set_search_properties instead of in the constructor, so that the new max_concurrency is respected.
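A minimal sketch of a custom Searcher honoring the new method (the import path and internals are illustrative and may differ across Ray versions):

```python
from ray.tune.suggest import Searcher  # path may vary by Ray version

class MySearcher(Searcher):
    """Toy searcher with its own internal concurrency limit."""

    def __init__(self, max_concurrent: int = 8):
        super().__init__()
        self._max_concurrent = max_concurrent
        self._live_trials = set()

    def set_max_concurrency(self, max_concurrent: int) -> bool:
        # Called by ConcurrencyLimiter; returning True tells it the limit
        # is enforced internally, so it doesn't need to wrap this searcher.
        self._max_concurrent = max_concurrent
        return True

    def suggest(self, trial_id):
        if len(self._live_trials) >= self._max_concurrent:
            return None  # defer until a running trial finishes
        self._live_trials.add(trial_id)
        return {"x": 0.0}  # placeholder hyperparameter config

    def on_trial_complete(self, trial_id, result=None, error=False):
        self._live_trials.discard(trial_id)
```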
Furthermore, this PR breaks up test_tune_restore.py into test_tune_restore_warm_start.py and test_tune_restore.py to deal with a timeout, and ensures that the automatic application of ConcurrencyLimiter in tune.run doesn't override a user-defined ConcurrencyLimiter.
This will allow us to pass protobuf-defined metadata to the error object, letting us propagate meaningful metadata (e.g., function names for ObjectLostError, the IP address for ObjectLostError raised within the raylet, or various useful metadata for ActorDiedError).
### Impl
We will allow the error object to include a "payload". The payload will be the protobuf message that includes the metadata.
```
# Prev
ACTOR_DIED (metadata) | (empty)
# New
ACTOR_DIED (metadata) | Serialized protobuf message (body)
```
Note that currently the body is a serialized MessagePack payload that contains the serialized protobuf; this needs to be cleaned up in the future.
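To illustrate the payload scheme (this is not Ray's actual serialization code; the message type is just a stand-in):

```python
from google.protobuf.any_pb2 import Any
from google.protobuf.wrappers_pb2 import StringValue

# Writer side: pack metadata into a protobuf message and serialize it
# as the error object's body.
payload = Any()
payload.Pack(StringValue(value="actor died: metadata would go here"))
body = payload.SerializeToString()

# Reader side: parse the body back into the protobuf message.
parsed = Any()
parsed.ParseFromString(body)
msg = StringValue()
parsed.Unpack(msg)
print(msg.value)
```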
Adds a working failure test for streaming and non-streaming shuffle, without lineage reconstruction. This PR does a few things.
Test improvements:
- modifies AutoscalingCluster to allow passing an idle node timeout (the default is very low)
- some small improvements to the NodeKiller actor to hopefully reduce flakiness.
Shuffle fixes:
- modifies the shuffle tracker to wait on futures instead of having tasks signal it; during failures, tasks may never signal the tracker, so we can't rely on signals to track progress (see the sketch after this list).
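A rough sketch of the waiting-on-futures idea (the task body and counts below are placeholders, not the actual shuffle code):

```python
import ray

@ray.remote
def shuffle_map(partition_id):
    return partition_id  # stand-in for real shuffle work

# Instead of each task signaling a tracker actor (a signal that never arrives
# if the task dies), progress is tracked by waiting on the task futures.
remaining = [shuffle_map.remote(i) for i in range(100)]
finished = 0
while remaining:
    done, remaining = ray.wait(remaining, num_returns=1)
    ray.get(done)  # a failed task surfaces here as an exception
    finished += 1
```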
Core fixes:
- the raylet now exits immediately if it receives the Shutdown RPC with graceful=False. There was a bug here: the raylet is supposed to exit after replying to the client, but the gRPC server goes down for an unknown reason and the client reply is never sent.
- On reference deletion, the owner now publishes an additional message to subscribers indicating that the object has been deleted. Previously this caused a hang in streaming shuffle: raylets pulling an object subscribed after the object was already deleted, so they never received the error signal.