This PR adds four staging nightly tests for GCS:
- many_actors
- many_tasks
- many_pgs
- many_nodes
These are benchmark tests closely related to GCS HA.
To make it easier to add tests, this PR also changes e2e.py slightly to include testing flags in the app config.
This is part of the Redis removal. In this PR, if `RAY_gcs_storage=memory`, the GCS will use an in-memory table instead of the Redis table.
The config setup has to be moved into GcsServer because, with the memory table, it's transient.
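As a rough sketch (assuming the variable is read by the GCS process at startup, so it must be set before the cluster is launched):

```python
import os

# Assumption: the GCS reads RAY_gcs_storage when it starts, so set it before
# starting Ray locally (or export it before `ray start` on a real cluster).
os.environ["RAY_gcs_storage"] = "memory"

import ray

ray.init()  # the locally started GCS now uses the in-memory table instead of Redis
```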
Signed-off-by: Jules S. Damji <jules@anyscale.com>
Why are these changes needed?
The code snippet referenced a Python function that was not defined, so the snippet as written won't run. All complete or self-contained code in our docs should run.
The changes add the missing function, iterate over a list of large random arrays to show the difference in execution time between local and distributed sort, and print the results.
Closes #20960
Bazel has a rule for enforcing the version so we can just reuse that.
This redundant Bazel version check logic in setup.py also causes issues when building the conda package, because conda ships its own version of Bazel, which doesn't support `--version`.
For the actor channel, GCS clients subscribe to a single actor, while the dashboard subscribes to all actors. This change makes it possible to support both.
Most of the added code is in `integration_test.cc`, which tests the publisher and subscriber together.
Also adds basic support for dashboard reporter pubsub.
Share the same code for RecordMetrics & DebugString for cluster task manager.
Both require an almost identical (and expensive) operation. This PR makes them share the same `UpdateState` code, which stores the stats in a struct.
Note that we don't update the state when metrics are recorded, because the debug string is called consistently anyway and keeps the state up to date.
Ideally, we should dynamically update the stats.
This PR adds RecordMetrics and DebugString to all raylet components.
Some of the methods are empty for now; they will be implemented in the next PR.
Before this PR, when the raylet noticed something wrong with a starting worker, it would start a new worker but not kill the old one. If the old one hung, this led to wasted resources.
This PR kills the failed worker if it's still alive and also prints useful logs.
Recently I have been benchmarking worker registration when running workers in containers. Currently, Ray core has the `process_startup_time_ms` metric, which only covers process fork time.
This PR adds a metric for the duration of worker registration.
After moving internal KV to gRPC, there was a regression in actor launching performance. This PR moves the work from the main thread to a dedicated internal KV thread to mitigate it.
Co-authored-by: Eric Liang <ekhliang@gmail.com>
This adds an initial Dataset.stats() framework for debugging dataset performance. At a high level, execution stats for tasks (e.g., CPU time) are attached to block metadata objects. Datasets have stats objects that hold references to these stats and parent dataset stats (this avoids stats holding references to parent datasets, allowing them to be gc'ed). Similarly, DatasetPipelines hold stats from recently computed datasets.
Currently only basic ops like map / map_batches are instrumented. TODO placeholders are left for future PRs.
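A minimal usage sketch (the exact fields in the printed report may differ):

```python
import ray

ds = ray.data.range(10000).map(lambda x: x * 2).map_batches(lambda batch: batch)
ds.take(5)         # force execution so the per-block stats get populated
print(ds.stats())  # per-stage execution stats, e.g. wall/CPU time per block
```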
This PR centralizes all existing metrics to `metric_defs.h`.
Previously, each file relied on an implicit include of `metric_defs.h` within the stats module. After this PR, each file precisely includes `metric_defs.h`.
This PR adds an ability to set a random seed/numpy random generator in BasicVariantGenerator, allowing for reproducibility across separate runs. All the changes are fully backwards compatible.
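A sketch of how this might look (the argument name `random_state` and the import path are assumptions and may differ from the final API):

```python
import numpy as np
from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

def trainable(config):
    tune.report(score=config["lr"])

# Assumed argument name `random_state`; it may accept an int seed or a numpy Generator.
search_alg = BasicVariantGenerator(random_state=np.random.default_rng(42))
tune.run(trainable, config={"lr": tune.uniform(1e-4, 1e-1)}, search_alg=search_alg)
```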
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
This is part of the Redis removal work. In this PR we introduce a new mode for internal KV: memory mode.
There are two ways to address this:
- Update store client and use store client in internal kv
- Add memory table into internal kv directly.
The former is actually the better choice since it puts everything related to storage at a lower level. But it's pretty hard to do now: internal KV uses hset/hget while the Redis store client uses set/get, so the data would not be compatible and it would be a breaking change.
So the easier way is the second option, which is what this PR does.
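To make the incompatibility concrete, here is a rough sketch (key names are made up) of how the two layouts differ in Redis:

```python
import redis

r = redis.Redis()

# Internal KV today: entries are fields of a single Redis hash (hset/hget).
r.hset("internal_kv", "job:123", b"metadata")
r.hget("internal_kv", "job:123")

# The Redis store client: entries are plain top-level keys (set/get).
r.set("job:123", b"metadata")
r.get("job:123")

# Data written one way cannot be read the other way, hence the breaking change.
```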
Next: use the flag for store client
This PR adds per-task/actor scheduling strategies; currently the only strategies are PlacementGroupSchedulingStrategy and DefaultSchedulingStrategy.
Going forward, people should use `scheduling_strategy=PlacementGroupSchedulingStrategy` to specify the placement group for an actor/task. The old way will be deprecated.
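A sketch of the intended usage (the import path `ray.util.scheduling_strategies` is assumed):

```python
import ray
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = ray.util.placement_group([{"CPU": 1}])
ray.get(pg.ready())

@ray.remote(
    scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=pg)
)
def task():
    return "scheduled into the placement group"

print(ray.get(task.remote()))
```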
Here we hit a crash at the RAY_CHECK on line 446:
d26c9e67e8/src/ray/object_manager/ownership_based_object_directory.cc (L441-L450)
We found that it's because we didn't set the node_id for dead nodes. If there are dead nodes and we try to `LookupRemoteConnectionInfo` on one of them, this crash happens.
This PR fixes this crash.
There is a race condition between `DestroyActor` (triggered when the handle goes out of scope) and `CreateActor`. If `DestroyActor` happens first, then `CreateActor` will fail, complaining that it can't find the registered actor.
As described in #18884, reconfiguration will mutate state mid-query. I address this problem by adding a read/write lock to each replica.
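A generic sketch of the idea (not Serve's actual implementation): queries take the lock in read mode so they can run concurrently, while reconfiguration takes it in write mode for exclusive access.

```python
import threading

class ReadWriteLock:
    """Many concurrent readers (queries) or one exclusive writer (reconfigure)."""

    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()  # protects _readers
        self._writer_lock = threading.Lock()   # held while any reader or a writer is active

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()

    def acquire_write(self):
        self._writer_lock.acquire()

    def release_write(self):
        self._writer_lock.release()
```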
Co-authored-by: yuzihao.2001 <yuzihao.2001@bytedance.com>
We can just pickle task options instead of JSON-serializing them, so that we don't need to write custom `to_dict` and `from_dict` methods for complex Python option objects (e.g. PlacementGroup).
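Roughly, the idea is the following (the `PlacementGroup` class here is only a stand-in for the real Ray object):

```python
import pickle

class PlacementGroup:  # stand-in for ray.util.placement_group.PlacementGroup
    def __init__(self, bundles):
        self.bundles = bundles

options = {"num_cpus": 1, "placement_group": PlacementGroup([{"CPU": 1}])}

blob = pickle.dumps(options)   # no hand-written to_dict needed
restored = pickle.loads(blob)  # no hand-written from_dict needed
assert restored["placement_group"].bundles == [{"CPU": 1}]
```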
Partly addresses #20774 by registering node launcher failures in driver logs, via the event summarizer.
This way, users can tell that the launch failed from the driver logs.
Also pushes the node creation exception to driver logs, but only once per 60 minutes.
Right now in Ray, many edge cases related to gRPC are not tested. This PR is a simple attempt to give developers a way to delay gRPC requests. It can be used for manual testing and also for e2e tests, since it supports delaying specific gRPC methods.
To use this feature, simply set the OS environment variable `RAY_TESTING_ASIO_DELAY_US="method1=10:20,method2=20:30,*=200:200"`.
This means: for `method1`, requests are delayed 10-20us; for `method2`, 20-30us; for everything else, 200us.
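For example (the method names are placeholders), the variable just needs to be in the environment before the Ray processes start:

```python
import os

# Placeholder method names; substitute the actual gRPC methods you want to slow down.
os.environ["RAY_TESTING_ASIO_DELAY_US"] = "method1=10:20,method2=20:30,*=200:200"

import ray

ray.init()  # assumption: locally started Ray processes inherit the delay settings
```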
Arrow bit-packs boolean arrays, with 8 entries per byte, and both Arrow and NumPy utilize bit-based offsets for indexing into data buffers. This PR adds support for properly indexing into boolean tensor columns by using bit-based offsets for such columns.
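For illustration, this is how a single value is recovered from a bit-packed Arrow boolean buffer (assuming a zero array offset; Arrow packs bits least-significant-bit first):

```python
import pyarrow as pa

arr = pa.array([True, False, True, True, False, True, False, False])
data_buf = arr.buffers()[1]     # buffers()[0] is the validity bitmap
packed = data_buf.to_pybytes()  # 8 boolean entries packed into each byte

i = 2
byte_idx, bit_idx = divmod(i, 8)                 # bit-based offset into the buffer
value = bool((packed[byte_idx] >> bit_idx) & 1)  # LSB-first bit order
assert value == arr[i].as_py()
```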
Currently, the Java LongPollClient is not a singleton, and a new polling thread is created within a new LongPollClient for each RayServeHandle. This degrades the performance of the replica actor, so we change the LongPollClient's polling thread to be a singleton.
We shouldn't ray.get() all the blocks immediately during the to_pandas call; it's better to fetch them one by one. That's a little slower, but to_pandas() isn't expected to be fast anyway.
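Schematically, the change is in this direction (a sketch of the idea, not the exact Dataset internals):

```python
import ray
import pandas as pd

def blocks_to_pandas(block_refs):
    # Fetch and convert blocks one at a time instead of issuing a single
    # ray.get() over the whole list of block references.
    dfs = [ray.get(ref) for ref in block_refs]
    return pd.concat(dfs, ignore_index=True)
```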
This PR addresses two issues with the Session.get_next docstring:
It's unclear whether the tense should be imperative or non-imperative. The Ray documentation states that we use Google style (which is non-imperative), but we are formatting using PEP8 (which is imperative). Moreover, we use both imperative and non-imperative summaries across the Ray Train code. I've stuck with a non-imperative summary for consistency with the rest of the Session class.
The docstring doesn't describe the conditions under which the function returns None.
It's useful when measuring autoscaler performance to know how long the autoscaler takes for update iterations.
This PR adds a prometheus metric for that.
The bucket ticks for the histogram are arranged in powers of ten:
[.001, .01, .1, 1, 10, 100, 1000]. Depending on the situation, we've seen the update time range from .1 second to a few minutes.
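A rough sketch of such a metric using `prometheus_client` (the metric name and the update loop below are hypothetical; the real definition lives in the autoscaler's metrics code):

```python
import time
from prometheus_client import Histogram

AUTOSCALER_UPDATE_TIME = Histogram(
    "autoscaler_update_time_seconds",  # hypothetical metric name
    "Time taken by a single autoscaler update iteration.",
    buckets=[0.001, 0.01, 0.1, 1, 10, 100, 1000],
)

def update_iteration():
    time.sleep(0.05)  # placeholder for real autoscaler update work

with AUTOSCALER_UPDATE_TIME.time():  # observes the elapsed time into the histogram
    update_iteration()
```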
Separate the CoreWorkerProcess static functions from the CoreWorkerProcess state. Currently the static and non-static state are mixed together, and, more importantly, the static state is not thread safe. By separating them and creating a helper class, CoreWorkerProcessImpl, for the non-static state, we can make it thread safe.
In a follow-up PR we will make the CoreWorkerProcess state thread safe.
This PR depends on #19677; the follow-up PR is #19679.
We fixed the groupby issue in CUJ2; this syncs the change into the nightly test. The test doesn't need to use a GPU at all; it returns soon after data ingestion finishes.