This PR centralizes all existing metrics to `metric_defs.h`.
Previously, each file relied on an implicit include of `metric_defs.h` via the stats module. After this PR, each file explicitly includes `metric_defs.h` where it is needed.
This PR adds an ability to set a random seed/numpy random generator in BasicVariantGenerator, allowing for reproducibility across separate runs. All the changes are fully backwards compatible.
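A sketch of how the seeding might look; `trainable` is a stand-in, and the argument name `random_state` is an assumption about the new API:

```python
from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

# Seeding the generator makes the sampled variants reproducible
# across separate runs.
tune.run(
    trainable,
    config={"lr": tune.uniform(1e-4, 1e-1)},
    search_alg=BasicVariantGenerator(random_state=42),
)
```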
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
This PR adds a per-task/actor scheduling strategy; currently the only strategies are `PlacementGroupSchedulingStrategy` and `DefaultSchedulingStrategy`.
Going forward, people should use `scheduling_strategy=PlacementGroupSchedulingStrategy` to specify the placement group for an actor/task. The old way will be deprecated.
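A sketch of the new style (the placement group spec and actor here are illustrative):

```python
import ray
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

pg = ray.util.placement_group([{"CPU": 1}])

@ray.remote(num_cpus=1)
class Worker:
    def ping(self):
        return "pong"

# New style: pass the placement group through scheduling_strategy
# instead of the old placement_group= option.
worker = Worker.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=pg)
).remote()
```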
There is a race condition between `DestroyActor` (triggered when the actor handle goes out of scope) and `CreateActor`. If `DestroyActor` arrives first, `CreateActor` fails, complaining that it cannot find the registered actor.
As described in #18884, reconfiguration will mutate replica state mid-query. This PR addresses the problem by adding a read/write lock to each replica.
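A minimal sketch of the idea, using the third-party `aiorwlock` package (the class and method names here are illustrative): queries share the read lock, while reconfiguration takes the write lock.

```python
from aiorwlock import RWLock  # third-party async reader/writer lock

class Replica:
    def __init__(self):
        self._rwlock = RWLock()

    async def handle_request(self, request):
        # Many queries may hold the read lock concurrently.
        async with self._rwlock.reader:
            return await self._run_user_code(request)  # illustrative

    async def reconfigure(self, new_config):
        # The write lock waits for in-flight queries to drain and
        # blocks new ones until reconfiguration completes.
        async with self._rwlock.writer:
            self._user_config = new_config
```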
Co-authored-by: yuzihao.2001 <yuzihao.2001@bytedance.com>
We can pickle task options instead of JSON-serializing them, so we don't need to write custom `to_dict` and `from_dict` methods for complex Python option objects (e.g., `PlacementGroup`).
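The gist, with a hypothetical options dict standing in for the real task options:

```python
import pickle

# Hypothetical options dict; in the real code this includes complex
# objects like PlacementGroup, which pickle handles directly.
options = {"num_cpus": 1, "max_retries": 3}

# pickle round-trips arbitrary Python objects with no per-type
# to_dict/from_dict conversion code.
blob = pickle.dumps(options)
restored = pickle.loads(blob)
assert restored == options
```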
Partly addresses #20774 by registering node launcher failures in driver logs, via the event summarizer.
This way, users can tell that the launch failed from the driver logs.
Also pushes the node creation exception to driver logs, but only once per 60 minutes.
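A sketch of the once-per-hour throttle (the function and helper names are hypothetical):

```python
import time

EXC_PUSH_INTERVAL_S = 60 * 60  # at most once per 60 minutes
_last_push = -float("inf")

def maybe_push_exception(exc_str: str) -> None:
    """Forward a node creation exception to driver logs, rate-limited."""
    global _last_push
    now = time.monotonic()
    if now - _last_push >= EXC_PUSH_INTERVAL_S:
        _last_push = now
        push_to_driver_logs(exc_str)  # assumed helper
```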
Right now in Ray, a lot of edge cases related to gRPC are not tested. This PR is a simple attempt to give developers a way to delay gRPC requests. It can be used for manual testing and also for e2e tests, since it supports delays for specific gRPC methods.
To use this feature, simply set the OS environment variable `RAY_TESTING_ASIO_DELAY_US="method1=10:20,method2=20:30,*=200:200"`.
This means that `method1` will be delayed by 10-20us, `method2` by 20-30us, and all remaining methods by 200us.
Arrow byte-packs boolean arrays, with 8 entries per byte, and both Arrow and NumPy utilize bit-based offsets for indexing into data buffers. This PR adds support for properly indexing into boolean tensor columns by using bit-based offsets for such columns.
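For illustration, here is how a bit-packed Arrow boolean buffer can be unpacked with a bit-based offset (a sketch, not the code path added by this PR):

```python
import numpy as np
import pyarrow as pa

arr = pa.array([True, False, True, True, False]).slice(1)  # offset = 1
buf = arr.buffers()[1]  # buffers()[0] is the validity bitmap

# Element i lives at bit (arr.offset + i) of the data buffer, so the
# offset must be applied in bits, not bytes. Arrow packs bits LSB-first,
# hence bitorder="little".
bits = np.unpackbits(np.frombuffer(buf, dtype=np.uint8), bitorder="little")
values = bits[arr.offset : arr.offset + len(arr)].astype(bool)
print(values)  # [False  True  True False]
```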
We shouldn't `ray.get()` all the blocks at once during the `to_pandas()` call; it's better to fetch them one by one. That's a little slower, but `to_pandas()` isn't expected to be fast anyway.
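A sketch of the one-at-a-time approach (`block_refs` is a stand-in for the dataset's block references):

```python
import pandas as pd
import ray

def to_pandas(block_refs):
    # Fetch blocks one at a time rather than a single bulk
    # ray.get(block_refs), which would pull every block into the
    # driver at once.
    dfs = [ray.get(ref) for ref in block_refs]
    return pd.concat(dfs, ignore_index=True)
```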
This PR addresses two issues with the `Session.get_next` docstring:
- It's unclear whether the summary tense should be imperative or non-imperative. The Ray documentation states that we use Google style (which is non-imperative), but we format using PEP 8 (which is imperative). Moreover, we use both imperative and non-imperative summaries across the Ray Train code. I've stuck with a non-imperative summary for consistency with the rest of the Session class.
- The docstring doesn't describe the conditions under which the function returns None.
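For illustration, a non-imperative Google-style docstring with the None condition spelled out (the exact wording of the condition is hypothetical):

```python
def get_next(self):
    """Gets the next result from the session's result queue.

    Returns:
        The next result reported by the training function, or None
        if the session has finished and no results remain.
    """
```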
It's useful when measuring autoscaler performance to know how long the autoscaler takes for update iterations.
This PR adds a prometheus metric for that.
The bucket ticks for the histogram are arranged in powers of ten:
[.001, .01, .1, 1, 10, 100, 1000]. Depending on the situation, we've seen the update time range from .1 second to a few minutes.
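A sketch with the `prometheus_client` library (the metric name and the `update()` call are stand-ins):

```python
from prometheus_client import Histogram

UPDATE_TIME = Histogram(
    "autoscaler_update_time_seconds",
    "Time taken by one autoscaler update iteration.",
    buckets=[0.001, 0.01, 0.1, 1, 10, 100, 1000],
)

with UPDATE_TIME.time():
    autoscaler.update()  # one update iteration
```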
So I have an AMD machine with many cores and 32GB of memory. When I do `pip install -e .`, my machine crashes because bazel tries to use all the cores and quickly runs out of memory. There seems to be no native way to set environment variables telling bazel to limit its resource consumption, but there is a `--local_cpu_resources` command-line option.
This PR exposes that option to `pip install` via an environment variable. I also went through setup.py and documented all the environment variables I could find.
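Hypothetical wiring in setup.py (the environment variable name here is illustrative, not necessarily the one the PR uses):

```python
import os

bazel_flags = []
limit = os.getenv("BAZEL_LIMIT_CPUS")  # illustrative variable name
if limit:
    # Caps how many cores bazel may use, avoiding memory exhaustion
    # from too many parallel compile jobs.
    bazel_flags.append(f"--local_cpu_resources={int(limit)}")
```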
Object metadata are fully managed by workers now, so the related protos and logic in GCS are obsolete. Most of the logic has been removed in https://github.com/ray-project/ray/pull/19963. This PR removes some remaining obsolete protos.
Adds a `set_max_concurrency` method to the Searcher API. This method allows the ConcurrencyLimiter to override the `max_concurrency` value on searchers with custom internal logic for limiting concurrency (at the moment, SigOpt and HEBO). This PR also changes the initialisation of the SigOpt and HEBO optimisers to happen in `set_search_properties` instead of in the constructor, so that the new `max_concurrency` is respected.
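A minimal sketch of how a custom searcher might implement the new hook (the return-value contract shown here is an assumption):

```python
from ray.tune.suggest import Searcher

class MySearcher(Searcher):
    """Illustrative searcher that enforces its own concurrency cap."""

    def __init__(self):
        super().__init__()
        self._max_concurrency = 0  # 0 = unlimited

    def set_max_concurrency(self, max_concurrency: int) -> bool:
        # Called by ConcurrencyLimiter; returning True signals that
        # this searcher will enforce the cap itself.
        self._max_concurrency = max_concurrency
        return True
```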
Furthermore, this PR breaks up test_tune_restore.py into test_tune_restore_warm_start.py and test_tune_restore.py to deal with a timeout, and ensures that the automatic application of ConcurrencyLimiter in tune.run doesn't override a user-defined ConcurrencyLimiter.
Adds a working failure test for streaming and non-streaming shuffle, without lineage reconstruction. This does a few things.
Test improvements:
- modifies AutoscalingCluster to allow passing an idle node timeout (the default is very low)
- some small improvements to the NodeKiller actor to hopefully reduce flakiness.
Shuffle fixes:
- modifies the shuffle tracker to wait on futures instead of having tasks signal it (see the sketch below). During failures, tasks may never signal the tracker, so we can't rely on signals to track progress.
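A sketch of the futures-based approach (`shuffle_map` and `num_partitions` are stand-ins):

```python
import ray

refs = [shuffle_map.remote(i) for i in range(num_partitions)]
while refs:
    # Waiting on the futures themselves means a crashed task surfaces
    # as an exception here, rather than leaving the tracker hanging
    # for a signal that never arrives.
    done, refs = ray.wait(refs, num_returns=1)
    ray.get(done)
```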
Core fixes:
- the raylet will now exit immediately if it receives the Shutdown RPC with graceful=False. Previously it was supposed to exit after replying to the client, but there was a bug where the gRPC server went down for an unknown reason and the client reply was never sent.
- On reference deletion, the owner now publishes an additional message to subscribers that the object has been deleted. Previously, this was causing a hang in streaming shuffle because the raylets pulling an object subscribed after the object was already deleted, so they never received the error signal.
In `sample_boundaries`, naive concatenation with `np.concatenate()` doesn't work when the single-column sample blocks have varying lengths (e.g., when the original dataset had non-uniform blocks). This PR fixes this by delegating concatenation and NumPy array conversion to the block builder and block accessor, respectively.
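A sketch of the delegation (the builder import path and names follow Ray Data's block APIs but are assumptions, and `sample_blocks` is a stand-in):

```python
from ray.data.block import BlockAccessor
from ray.data.impl.delegating_block_builder import (
    DelegatingBlockBuilder,  # assumed internal builder
)

# Merge the possibly non-uniform sample blocks via the builder
# instead of np.concatenate over raw blocks.
builder = DelegatingBlockBuilder()
for block in sample_blocks:
    builder.add_block(block)
merged = builder.build()
# The accessor handles the NumPy conversion regardless of how the
# individual sample blocks were sized.
samples = BlockAccessor.for_block(merged).to_numpy()
```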
Using Ray pubsub for publishing and subscribing to logs via GCS, from the Python worker, log importer, dashboard, and unit tests.
This change is guarded behind the RAY_gcs_grpc_based_pubsub feature flag.
This PR adds support for publishing and subscribing to logs in Python via GCS pubsub. It also refactors the Python threaded subscriber to support subscribing and calling `close()` from multiple threads.
We could also move the tests and logging support to another PR, but that would make the purpose of the refactoring seem less obvious.
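For illustration, the shape of a subscriber whose `close()` is safe to call from any thread (a sketch; the real class and helper methods differ):

```python
import threading

class LogSubscriber:
    def __init__(self):
        self._lock = threading.Lock()
        self._closed = False

    def poll(self):
        with self._lock:
            if self._closed:
                return None
        # The blocking long-poll to GCS happens outside the lock so
        # close() is never blocked behind it.
        return self._long_poll()  # assumed helper

    def close(self):
        with self._lock:
            if self._closed:
                return
            self._closed = True
        self._cancel_inflight_poll()  # assumed helper: wakes poll()
```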