PR #19014 introduced the idea of a StartupToken to uniquely identify a worker via a counter. This PR:
- Returns both the Process and the StartupToken from StartWorkerProcess (previously only the Process was returned).
- Changes the starting_workers_to_tasks map to be indexed by the StartupToken, which appears to fix the Windows failures.
- Unskips the Windows tests in test_basic_2.py.
Once a fix for PR #18167 lands, the starting_workers_to_tasks map will be removed, which should also remove the need for the StartWorkerProcess changes made in this PR.
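To illustrate the indexing change, here is a minimal Python sketch (the actual implementation is in C++; the names below just mirror the description above):

```python
# Sketch only: key the starting-workers map by a monotonically increasing
# StartupToken instead of by the Process handle, whose identity can be
# unreliable (e.g., PID reuse on Windows).
from typing import Dict, Tuple

next_startup_token = 0
starting_workers_to_tasks: Dict[int, str] = {}

def start_worker_process(task_id: str) -> Tuple[object, int]:
    """Now returns (process, startup_token) instead of just the process."""
    global next_startup_token
    token = next_startup_token
    next_startup_token += 1
    process = object()  # stand-in for the spawned worker process
    starting_workers_to_tasks[token] = task_id
    return process, token
```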
Fix Remote code injection in Log4j
Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.
For reference, see [CVE-2021-44228](https://github.com/advisories/GHSA-jfh8-c2jp-5v3q).
Currently, if a local lease request fails due to raylet death, direct_task_transport.cc will retry forever on the driver.
With this PR, we treat grpc unavailable as a non-retryable error (the assumption is that local grpc is always reliable and a grpc unavailable error indicates the server is dead) and simply fail the task.
Note: this PR doesn't try to address a bigger problem: not crashing the driver when the local raylet dies. We have multiple places in the code that assume the local raylet never fails and have CHECK_STATUS_OK for that. All of these places need to be changed so we can properly propagate failures to the user.
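A minimal sketch of the retry policy in Python terms (the helper name is hypothetical; the actual change lives in direct_task_transport.cc):

```python
import grpc

def is_retryable_lease_error(status_code: grpc.StatusCode) -> bool:
    # Assumption from this PR: the local raylet's grpc endpoint is always
    # reliable, so UNAVAILABLE means the server is dead. Fail the task
    # instead of retrying forever.
    return status_code != grpc.StatusCode.UNAVAILABLE
```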
`//python/ray/tests:test_client_reconnect` seems to flake only under the GCS HA build. The client server starts to shut down under injected failures, unlike the behavior without GCS KV or pubsub.
`//python/ray/tests:test_multi_node_3` seems to flake more often under the GCS HA build, although it is still flaky without the GCS HA feature flags. It appears raylet termination does not notify other processes properly.
Disable these two tests until they are fixed.
We erase elements from object_id_refs_ in `RemoveLocalReferenceInternal()`, which may cause an iterator invalidation issue.
Note that a flat map normally does not invalidate iterators unless it triggers a `rehash()`. But in this case we may remove elements other than the one the current iterator points to, so the risk remains.
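The same hazard, illustrated in Python (the actual fix is in the C++ reference counter; this is only an analogy):

```python
refs = {"obj_a": 0, "obj_b": 2, "obj_c": 0}

# Unsafe: deleting entries while iterating the mapping directly can
# invalidate the iteration, especially when entries *other* than the
# current one get removed. Safe pattern: iterate over a snapshot.
for object_id in list(refs):
    if object_id in refs and refs[object_id] == 0:
        del refs[object_id]
```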
A worker can crash right after putting its return values into the object store. Then, the owner will receive the worker crashed error, but the return objects will still be in the remote object store. Later, if the task is retried, the worker will crash on [this line](https://github.com/ray-project/ray/blob/master/src/ray/core_worker/transport/direct_actor_transport.cc#L105) because the object already exists.
Another way this can happen is if a task has multiple return values, and one of those return values is transferred to another node. If the task is later re-executed on that node, the task will fail because of the same error.
This PR fixes the crash so that:
1. If an object already exists, we try to pin that copy. Ideally, we should destroy the old copy and create the new one to make sure that metadata like the owner address is in sync, but this is pretty complicated to do right now.
2. If the pinning fails, we store an OBJECT_LOST error to throw to the application.
3. On the raylet, we check whether we already have the object pinned, and only subscribe to the owner's eviction message if the object is not pinned.
4. Also fixes bugs in the analogous case for `ray.put` (previously this would hang, now the application will receive an error if a `ray.put` object already exists).
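A simplified sketch of the new handling (hypothetical names; the real logic spans the core worker and the raylet):

```python
def handle_return_object(object_id, plasma_store) -> bool:
    """Returns True if the object was already present and was handled."""
    if plasma_store.contains(object_id):
        # (1) The object already exists: try to pin the existing copy
        #     rather than crashing on the duplicate create.
        if not plasma_store.try_pin(object_id):
            # (2) Pinning failed: surface OBJECT_LOST to the application.
            plasma_store.put_error(object_id, "OBJECT_LOST")
        # (3) Already pinned: skip subscribing to the owner's eviction
        #     message again.
        return True
    return False
```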
The dashboard contains resource reporter and actor subscribers, and the dashboard agent has a resource report publisher, so GCS pubsub needs to support these channel types.
Also, refactor the GCS AIO subscribers so that there is one subscriber per channel. This matches the API of the GCS sync subscribers and makes subscribing to multiple channels easier.
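Roughly, the per-channel shape looks like this (illustrative names only, not the exact dashboard classes):

```python
import asyncio

class GcsAioSubscriber:
    """One subscriber instance per channel, mirroring the sync API."""

    def __init__(self, channel: str):
        self.channel = channel

    async def poll(self):
        # The real code issues a long-polling gRPC request to GCS.
        await asyncio.sleep(0.1)
        return (self.channel, b"payload")

async def dashboard_loop():
    actor_sub = GcsAioSubscriber("actor")
    resource_sub = GcsAioSubscriber("resource_report")
    # Subscribing to multiple channels is just multiple subscribers.
    return await asyncio.gather(actor_sub.poll(), resource_sub.poll())
```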
This PR adds four staging nightly tests for GCS:
- many_actors
- many_tasks
- many_pgs
- many_nodes
These are benchmark tests that are highly relevant to GCS HA.
To make it easier to add tests, this PR also changes e2e.py slightly to pass testing flags into the app config, as sketched below.
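A hypothetical sketch of that tweak (field names are illustrative, not necessarily what e2e.py uses):

```python
def apply_testing_flags(app_config: dict, flags: dict) -> dict:
    # Merge test-specific flags, e.g. GCS HA feature flags, into the
    # app config's environment before launching the release test.
    env = dict(app_config.get("env_vars", {}))
    env.update(flags)
    app_config["env_vars"] = env
    return app_config
```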
This is part of the redis removal. In this PR, if `RAY_gcs_storage=memory`, GCS will use a memory table instead of the redis table.
The config setup has to be moved into GcsServer because with the memory table it's transient.
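Usage sketch (the flag name comes from this PR; setting it in the environment before `ray.init()` is an assumption about how it is picked up):

```python
import os

# Select the in-memory GCS table instead of redis.
os.environ["RAY_gcs_storage"] = "memory"

import ray
ray.init()
```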
Signed-off-by: Jules S.Damji jules@anyscale.com
Why are these changes needed?
The code snippet referenced a Python function that was not defined, so the snippet as-is won't run. All complete or self-contained code in our docs should run.
The changes add the missing function, iterate over a list of random arrays of different sizes to show the difference between local and distributed sort execution times, and print the results.
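A sketch along those lines (the exact names in the docs may differ):

```python
import time
import numpy as np
import ray

ray.init()

@ray.remote
def sort_remote(arr):
    # The previously undefined function, now provided.
    return np.sort(arr)

for size in [10**6, 10**7, 10**8]:
    arr = np.random.rand(size)

    start = time.time()
    np.sort(arr)
    local_time = time.time() - start

    start = time.time()
    ray.get(sort_remote.remote(arr))
    distributed_time = time.time() - start

    print(f"size={size}: local sort {local_time:.3f}s, "
          f"distributed sort {distributed_time:.3f}s")
```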
Closes #20960
Bazel has a rule for enforcing the version so we can just reuse that.
The redundant Bazel version check logic in setup.py also causes issues when building the conda package, because conda ships its own version of Bazel, which doesn't support `--version`.
For the actor channel, GCS clients subscribe to a single actor while the dashboard subscribes to all actors. This change makes supporting both possible.
Most of the added code is in `integration_test.cc`, which tests the publisher and subscriber together.
Also, add basic support for the dashboard reporter pubsub.
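Illustrative only (hypothetical poll signature): the key distinguishes single-actor from all-actor subscriptions.

```python
async def poll_actor_channel(subscriber, actor_id: str = None):
    # actor_id set   -> updates for one actor (GCS client behavior).
    # actor_id=None  -> updates for every actor (dashboard behavior).
    return await subscriber.poll(key=actor_id)
```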
Share the same code for RecordMetrics & DebugString for cluster task manager.
Both require almost identical (and also expensive) operations. This PR makes them share the same `UpdateState` code, which stores the stats in a struct.
Note that we don't update the state when metrics are recorded, because the debug string is called consistently anyway, and the state is updated there.
Ideally, we should dynamically update the stats.
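The sharing pattern in miniature (a Python stand-in for the C++ change):

```python
from dataclasses import dataclass

@dataclass
class TaskManagerStats:
    num_waiting: int = 0
    num_running: int = 0

class ClusterTaskManager:
    def __init__(self):
        self._stats = TaskManagerStats()

    def _update_state(self):
        # The expensive scan, done once and shared by both callers.
        self._stats = TaskManagerStats(num_waiting=1, num_running=2)

    def record_metrics(self):
        # Reuses the stats last refreshed by debug_string().
        return self._stats

    def debug_string(self) -> str:
        self._update_state()
        return (f"waiting={self._stats.num_waiting} "
                f"running={self._stats.num_running}")
```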
This PR adds RecordMetrics and DebugString to all raylet components.
Some of the methods are probably empty for now; they will be implemented in the next PR.
Before this PR, when the raylet notices that something is wrong with a starting worker, it starts a new worker but does not kill the old one. If the old one is hanging, this leads to wasted resources.
This PR kills the failed worker if it's still alive and also prints useful logs.
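A sketch of the behavior using Python's subprocess as a stand-in for the raylet's process handle:

```python
import subprocess

def on_worker_start_failed(proc: subprocess.Popen, worker_id: str):
    if proc.poll() is None:  # the failed worker process is still alive
        print(f"Killing hung worker {worker_id} (pid={proc.pid}) "
              "after startup failure; starting a replacement.")
        proc.kill()
```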
Recently I have been running benchmarks on worker registration with workers running in containers. Currently, Ray core has the `process_startup_time_ms` metric, which measures process fork time.
This PR adds metrics for the duration of worker registration.
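Conceptually (the metric name below is illustrative, not necessarily the one this PR adds):

```python
import time

def record_histogram(name: str, value_ms: float):
    print(f"{name}: {value_ms:.1f} ms")  # stand-in for the metrics sink

process_start_time = time.time()  # recorded when the worker process forks

def on_worker_registered():
    # Registration duration = time from process start to RegisterWorker.
    record_histogram("worker_register_time_ms",
                     (time.time() - process_start_time) * 1000)
```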
After moving internal kv to grpc, there is a regression in actor launching performance. This PR moves the internal kv work from the main thread to a dedicated thread to mitigate it.
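The mitigation in miniature (Python sketch; the actual change is in the GCS server):

```python
from concurrent.futures import ThreadPoolExecutor

# All internal kv operations run on one dedicated thread instead of the
# main thread, so actor launching is no longer blocked behind kv work.
kv_executor = ThreadPoolExecutor(max_workers=1,
                                 thread_name_prefix="internal_kv")

def internal_kv_put_async(store: dict, key: str, value: bytes):
    # The caller returns immediately; the kv work happens off-thread.
    return kv_executor.submit(store.__setitem__, key, value)
```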
Co-authored-by: Eric Liang <ekhliang@gmail.com>
This adds an initial Dataset.stats() framework for debugging dataset performance. At a high level, execution stats for tasks (e.g., CPU time) are attached to block metadata objects. Datasets have stats objects that hold references to these stats and parent dataset stats (this avoids stats holding references to parent datasets, allowing them to be gc'ed). Similarly, DatasetPipelines hold stats from recently computed datasets.
Currently only basic ops like map / map_batches are instrumented. TODO placeholders are left for future PRs.
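Example usage of the new API (output format may vary):

```python
import ray

ray.init()

ds = ray.data.range(1000).map(lambda x: x * 2)
ds.take(10)
# Per-op execution stats (e.g., CPU time) collected from block metadata:
print(ds.stats())
```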
This PR centralizes all existing metrics to `metric_defs.h`.
Previously, each file relied on an implicit include of `metric_defs.h` within the stats module. After this PR, each file precisely includes `metric_defs.h`.
This PR adds the ability to set a random seed / numpy random generator in BasicVariantGenerator, allowing for reproducibility across separate runs. All the changes are fully backwards compatible.
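Example (assuming the seed is exposed as a `random_state` argument; check the PR for the exact name):

```python
from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

def trainable(config):
    tune.report(metric=config["lr"])

tune.run(
    trainable,
    config={"lr": tune.uniform(0.001, 0.1)},
    num_samples=4,
    # Same seed -> same sampled configs across separate runs.
    search_alg=BasicVariantGenerator(random_state=42),
)
```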
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
This is part of the redis removal work. In this PR we introduce a new mode for internal kv: memory mode.
There are two ways to implement this:
- Update store client and use store client in internal kv
- Add memory table into internal kv directly.
The former is actually the better choice, since it puts everything related to storage at a lower level. But it's pretty hard to do right now: internal kv uses hset/hget while the redis store client uses set/get, so the data would not be compatible and it would be a breaking change.
So the easier way is option 2, and that's what this PR does, as sketched below.
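A minimal sketch of the memory table, keeping the hset/hget-style interface:

```python
class MemoryInternalKV:
    """Dict-backed stand-in for the redis-backed internal kv."""

    def __init__(self):
        self._tables = {}  # namespace -> {key: value}

    def hset(self, ns: str, key: str, value: bytes):
        self._tables.setdefault(ns, {})[key] = value

    def hget(self, ns: str, key: str):
        return self._tables.get(ns, {}).get(key)

    def hdel(self, ns: str, key: str):
        self._tables.get(ns, {}).pop(key, None)

    def hkeys(self, ns: str):
        return list(self._tables.get(ns, {}))
```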
Next: use the flag for store client
This PR adds per-task/actor scheduling strategies; currently the only strategies are PlacementGroupSchedulingStrategy and DefaultSchedulingStrategy.
Going forward, people should use `scheduling_strategy=PlacementGroupSchedulingStrategy` to specify the placement group for an actor/task. The old way will be deprecated.
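Example of the new API:

```python
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = placement_group([{"CPU": 1}])
ray.get(pg.ready())

@ray.remote(scheduling_strategy=PlacementGroupSchedulingStrategy(
    placement_group=pg))
def task():
    return "scheduled inside the placement group"

print(ray.get(task.remote()))
```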
We hit a crash at the RAY_CHECK on line 446:
d26c9e67e8/src/ray/object_manager/ownership_based_object_directory.cc (L441-L450)
We found that it happens because we don't set the node_id for dead nodes. If there are dead nodes and we try to LookupRemoteConnectionInfo on one of them, this crash occurs.
This PR fixes this crash.
There is a race condition between `DestroyActor` (triggered by the handle going out of scope) and `CreateActor`. If `DestroyActor` happens first, then `CreateActor` will fail, complaining that it cannot find the registered actor.
As described in #18884, reconfiguration will mutate state mid-query. I try to solve this problem by adding a read/write lock to each replica, as sketched below.
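A sketch of the approach using the aiorwlock package (the exact lock implementation in the PR may differ): queries take the read lock, reconfiguration takes the write lock, so state cannot mutate mid-query.

```python
import aiorwlock

class Replica:
    def __init__(self):
        self._rwlock = aiorwlock.RWLock()
        self._config = {}

    async def handle_request(self, request):
        async with self._rwlock.reader_lock:
            # Reads see a stable config for the whole query.
            return self._config.get("version")

    async def reconfigure(self, new_config: dict):
        async with self._rwlock.writer_lock:
            # Waits for in-flight reads before mutating state.
            self._config = new_config
```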
Co-authored-by: yuzihao.2001 <yuzihao.2001@bytedance.com>