This adds memory monitoring to the scalability envelope tests so that we can compare peak memory usage between the non-HA and HA setups.
NOTE: the current way of adding the memory monitor is not great; we should implement a fixture to support this better, but that work has not started yet.
The current logs API simply returns a `str` to unblock development and integration. We should add proper log streaming for better UX and external job manager integration.
Co-authored-by: Sven Mika <sven@anyscale.io>
Co-authored-by: sven1977 <svenmika1977@gmail.com>
Co-authored-by: Ed Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>
Uses a direct `pip install` instead of creating a conda env to make pip installs incremental to the cluster environment.
Separates the handling of `pip` and `conda` dependencies.
The new `pip` approach still works if only the base Ray is installed on the cluster and the user specifies libraries like "ray[serve]" in the `pip` field. The mechanism is as follows:
- We don't actually want to reinstall ray via pip, since this could lead to version mismatch issues. Instead, we want to use the Ray that's already installed in the cluster.
- So if "ray" was included by the user in the pip list, remove it
- If a library such as "ray[serve]" or "ray[tune, rllib]" was included in the pip list, we remove it and replace it with its dependencies (e.g. "uvicorn", "requests", ...).
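A rough Python sketch of this rewriting, with an illustrative (not Ray's actual) extras table and helper name:

```python
import re

# Hypothetical mapping from Ray extras to their dependencies; the real
# values come from Ray's own setup metadata.
EXTRA_DEPS = {"serve": ["uvicorn", "requests"], "tune": ["pandas", "tabulate"]}

def rewrite_pip_list(pip_packages):
    rewritten = []
    for pkg in pip_packages:
        match = re.fullmatch(r"ray(\[(.*)\])?", pkg.replace(" ", ""))
        if match is None:
            rewritten.append(pkg)      # unrelated package: keep as-is
        elif match.group(2):           # "ray[serve]" etc.: expand the extras
            for extra in match.group(2).split(","):
                rewritten.extend(EXTRA_DEPS.get(extra, []))
        # bare "ray": drop it so the cluster's installed Ray is used
    return rewritten

print(rewrite_pip_list(["ray[serve]", "numpy", "ray"]))
# -> ['uvicorn', 'requests', 'numpy']
```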
Co-authored-by: architkulkarni <arkulkar@gmail.com>
Co-authored-by: architkulkarni <architkulkarni@users.noreply.github.com>
The `results` with fewer tasks contain unstable wait times, so the number of tasks has been increased in the hope of less noisy wait times. Minor changes to the assert comparisons have also been made to reduce error.
Before this PR, GcsActorManager::CreateActor() would replace an actor's namespace with the namespace of the actor's owner job, even if the actor was created with a user-specified namespace. But in named_actors_, the actor is registered under the user-specified namespace by GcsActorManager::RegisterActor before CreateActor() is called. As a result, GcsActorManager::DestroyActor failed to find the actor in named_actors_ by the owner job's namespace and remove it, so reusing an actor name in the same namespace failed because the actor with that name was never removed from named_actors_ by GcsActorManager::DestroyActor.
Fixes issue #20611.
Currently, when a GCS RPC fails with a gRPC unavailable error because the GCS is dead, it will retry forever.
b3a9d4d87d/src/ray/rpc/gcs_server/gcs_rpc_client.h (L57)
It takes about 10 minutes to detect the GCS server failure, meaning that if the GCS is dead, users will only notice after 10 minutes.
This can easily cause confusion that the cluster is hanging (since users are not that patient). Also, since the GCS is not fault tolerant in OSS now, 10 minutes is too long a timeout for detecting GCS death.
This PR changes the value to 60 seconds, which I believe is much more reasonable (since this is the same value as our blocking RPC call timeout).
This fixes the bug where an empty line is printed to the driver when multiple threads are used.
e.g.,
```
2021-12-12 23:20:06,876 INFO worker.py:853 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379
(TESTER pid=12344)
(TESTER pid=12348)
```
## How does the current log work?
After the actor initialization method is done, the worker prints `({repr(actor)})` to stdout and stderr, which will be read by the log monitor. The log monitor can then parse the actor repr and start publishing log messages to the driver under the repr name.
## Problem
If the actor init method starts a new background thread, then when we call `print({repr(actor)})`, the flush() does not appear to happen atomically. Based on my observation, it flushes `repr(actor)` first and then flushes `end="\n"` (the default end parameter of the print function) afterwards.
Since the log monitor never closes the file descriptor, it can read the log line before the end separator "\n" is flushed. That is, the following scenario can happen.
Expected
- `"repr(actor)\n"` is written to the log file. (which includes the default print end separator `"\n"`).
- Log monitor reads `repr(actor)\n` and publishes it.
Edge case
- `"repr(actor)"` is written to the log
- Log monitor publishes `repr(actor)`.
- `"\n"` is written to the log (end separator).
- Log monitor publishes `"\n"`.
Note that this only happens when we print the actor repr "after" actor initialization. I think that since a new thread is running in the background, the GIL comes in and creates a gap between `flush(repr(actor))` and `flush("\n")`, which causes the issue.
I verified the issue is fixed when I add the separator ("\n") directly to the string and make the end separator an empty string.
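A minimal sketch of the fix (the repr string here is illustrative):

```python
actor_repr = "(TESTER pid=12344)"
# Before: print(actor_repr) could flush the repr and the trailing "\n"
# separately, letting the log monitor read a line without its separator.
# After: embed the separator in the string so both are written together.
print(actor_repr + "\n", end="")
```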
An alternative fix is to file-lock the log file whenever the log monitor reads or the core worker writes, but that seems too heavy a solution compared to its benefit.
Currently, there's no way to specify the gRPC port for dashboard agents (only HTTP ports can be configured). This PR allows users to do that.
This also adds more features:
- Format the port conflict error message better. I found the default worker ports 10002-19999 were printed individually, which spams the terminal. They are now formatted as the range 10002-19999 (see the sketch after this list).
- Add dashboard http port to the port conflict list.
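A hedged sketch of the range formatting (the actual Ray formatting code may differ):

```python
def format_ports(ports):
    """Collapse sorted consecutive ports into "start-end" ranges."""
    ports = sorted(ports)
    ranges, start = [], ports[0]
    for prev, cur in zip(ports, ports[1:] + [None]):
        if cur != prev + 1:                # run of consecutive ports ended
            ranges.append(f"{start}-{prev}" if start != prev else str(start))
            start = cur
    return ", ".join(ranges)

print(format_ports(list(range(10002, 20000)) + [8265]))
# -> "8265, 10002-19999"
```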
`test_multi_node_3` failed because we kill the raylet before the cluster is up, which leads the raylet to become a zombie process. This fix waits until the cluster is up before killing it.
Resubmit the PR https://github.com/ray-project/ray/pull/19936
I've figured out that the test case `//rllib:tests/test_gpus::test_gpus_in_local_mode` failed due to a deadlock in local mode.
In local mode, if user code submits another task during the execution of the current task, `CoreWorker::actor_task_mutex_` may cause a deadlock.
The solution is quite simple: release the lock before executing the task in local mode.
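A minimal Python sketch of the deadlock pattern and the fix (illustrative only; the actual change is in the C++ code around `CoreWorker::actor_task_mutex_`):

```python
import threading

_task_mutex = threading.Lock()

def submit_task(fn, local_mode=True):
    _task_mutex.acquire()
    try:
        pass  # bookkeeping that genuinely needs the lock
    finally:
        # The fix: release the lock *before* running the task in local mode,
        # because the task body may call submit_task() again and would
        # otherwise deadlock on this non-reentrant lock.
        _task_mutex.release()
    if local_mode:
        fn()  # user code now runs without holding the lock

# A task that submits another task during execution no longer deadlocks.
submit_task(lambda: submit_task(lambda: print("nested task ran")))
```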
In the commit 7c2f61c76c:
1. Release the lock in local mode to fix the bug. @scv119
2. `test_local_mode_deadlock` added to cover the case. @rkooo567
3. Left a trivial change in `rllib/tests/test_gpus.py` to make `RAY_CI_RLLIB_DIRECTLY_AFFECTED` take effect.
This is the first PR of a series of Ray Dataset and Pandas integration improvements.
This PR refactors `ArrowRow`, `ArrowBlockBuilder`, `ArrowBlockAccessor` by extracting base classes `TableRow`, `TableBlockBuilder`, `TableBlockAccessor`, which can then be inherited for pandas DataFrame support in the next PR.
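A rough sketch of the extraction pattern (the method names and bodies here are illustrative assumptions, not the actual Ray Datasets API):

```python
import pyarrow as pa

class TableBlockBuilder:
    """Format-agnostic row buffering shared by table-backed builders."""

    def __init__(self):
        self._rows = []

    def add(self, row: dict) -> None:
        self._rows.append(row)

    def build(self):
        raise NotImplementedError  # format-specific construction

class ArrowBlockBuilder(TableBlockBuilder):
    def build(self) -> pa.Table:
        columns = {k: [r[k] for r in self._rows] for k in self._rows[0]}
        return pa.Table.from_pydict(columns)

# A PandasBlockBuilder inheriting TableBlockBuilder follows in the next PR.
builder = ArrowBlockBuilder()
builder.add({"a": 1, "b": 2})
print(builder.build())
```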
In our code, we wish to have control over when GC runs. We do this by calling `gc.disable()` and then manually triggering GC at moments that work for us. This does not work if Ray re-enables GC.
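A minimal sketch of this usage pattern:

```python
import gc

gc.disable()             # take manual control of garbage collection
assert not gc.isenabled()
# ... latency-sensitive work; Ray must not re-enable GC here ...
gc.collect()             # trigger a collection at a moment we choose
```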
Co-authored-by: hauntsaninja <>
Added hyperparameters to the concepts section, since it's important to explain what they are, and added diagrams to help the reader visualize the difference between model parameters and hyperparameters.
Signed-off-by: Jules S.Damji <jules@anyscale.com>
Co-authored-by: Jules S.Damji <jules@anyscale.com>
PR #19014 introduced the idea of a StartupToken to uniquely identify a worker via a counter. This PR:
- Returns the Process and the StartupToken from StartWorkerProcess (previously only the Process was returned).
- Changes the starting_workers_to_tasks map to index via the StartupToken, which seems to fix the Windows failures.
- Unskips the Windows tests in test_basic_2.py.
It seems once a fix to PR #18167 goes in, the starting_workers_to_tasks map will be removed, which should remove the need for the changes to StartWorkerProcess made in this PR.
Fix Remote code injection in Log4j
Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.
See this reference: [CVE-2021-44228](https://github.com/advisories/GHSA-jfh8-c2jp-5v3q)
Currently, if a local lease request fails due to raylet death, direct_task_transport.cc will retry forever for the driver.
With this PR, we treat gRPC unavailable as a non-retryable error (the assumption is that local gRPC is always reliable and a gRPC unavailable error indicates that the server is dead) and will just fail the task.
Note: this PR doesn't try to address a bigger problem: don't crash the driver when the local raylet dies. We have multiple places in the code that assume the local raylet never fails and have CHECK_STATUS_OK for that. All these places need to be changed so that we can properly propagate failures to the user.
`//python/ray/tests:test_client_reconnect` seems to only flake under the GCS HA build. The client server starts to shut down under injected failures, unlike the behavior without GCS KV or pubsub.
`//python/ray/tests:test_multi_node_3` seems to flake more often under the GCS HA build, although it is still flaky without the GCS HA feature flags. It seems that raylet termination did not notify other processes properly.
Disable these two tests before they are fixed.
We erase elements from `object_id_refs_` in the method `RemoveLocalReferenceInternal()`, which may cause an iterator invalidation issue.
Note that normally the flat map will not trigger any iterator invalidation unless a `rehash()` is triggered. But in this case, we may remove other elements (not only the one the current iterator points to), so there is still a risk.
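A rough Python analogy of the hazard (illustrative only; the actual container is the C++ flat map backing `object_id_refs_`):

```python
refs = {"obj1": 2, "obj2": 1, "obj3": 1}
try:
    for obj_id in refs:
        if refs[obj_id] == 1:
            del refs[obj_id]  # removes an element mid-iteration
except RuntimeError as err:
    print(err)  # dictionary changed size during iteration
```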
A worker can crash right after putting its return values into the object store. Then, the owner will receive the worker crashed error, but the return objects will still be in the remote object store. Later, if the task is retried, the worker will crash on [this line](https://github.com/ray-project/ray/blob/master/src/ray/core_worker/transport/direct_actor_transport.cc#L105) because the object already exists.
Another way this can happen is if a task has multiple return values, and one of those return values is transferred to another node. If the task is later re-executed on that node, the task will fail because of the same error.
This PR fixes the crash so that:
1. If an object already exists, we try to pin that copy. Ideally, we should destroy the old copy and create the new one to make sure that metadata like the owner address is in sync, but this is pretty complicated to do right now.
2. If the pinning fails, we store an OBJECT_LOST error to throw to the application.
3. On the raylet, we check whether we already have the object pinned, and only subscribe to the owner's eviction message if the object is not pinned.
4. Also fixes bugs in the analogous case for `ray.put` (previously this would hang, now the application will receive an error if a `ray.put` object already exists).
The dashboard contains resource reporter and actor subscribers, and the dashboard agent has a resource report publisher, so GCS pubsub needs to support these channel types.
Also refactor the GCS AIO subscribers to have one subscriber per channel. This matches the API of the GCS sync subscribers and makes subscribing to multiple channels easier.
This PR adds four staging nightly tests for GCS:
- many_actors
- many_tasks
- many_pgs
- many_nodes
These are benchmark tests that are highly related to GCS HA.
To make it easier to add tests, this PR also changes e2e.py a little bit to include testing flags in the app config.
This is part of the Redis removal. In this PR, if `RAY_gcs_storage=memory`, the GCS uses an in-memory table instead of the Redis table.
The config setup has to be moved into GcsServer because with the memory table it's transient.
Signed-off-by: Jules S.Damji <jules@anyscale.com>
Why are these changes needed?
The code snippet referenced a Python function that was not defined, so the code snippet as-is won't work. All complete or self-contained code in our docs should run.
The changes made were adding the undefined function, iterating over a list of random large arrays of different sizes to show the difference between local and distributed sort execution times, and printing the results.
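A hedged sketch of the kind of snippet described (the exact function, array sizes, and merge strategy in the docs may differ):

```python
import time

import numpy as np
import ray

ray.init()

@ray.remote
def sort_block(block):
    return np.sort(block)

def distributed_sort(arr, num_blocks=8):
    blocks = np.array_split(arr, num_blocks)
    sorted_blocks = ray.get([sort_block.remote(b) for b in blocks])
    # Naive final merge for brevity; a real merge would k-way merge.
    return np.sort(np.concatenate(sorted_blocks))

for n in [10**6, 10**7]:
    arr = np.random.rand(n)
    start = time.time()
    np.sort(arr)
    print(f"n={n}: local sort took {time.time() - start:.2f}s")
    start = time.time()
    distributed_sort(arr)
    print(f"n={n}: distributed sort took {time.time() - start:.2f}s")
```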
Closes #20960
Bazel has a rule for enforcing the version, so we can just reuse that.
The redundant Bazel version check logic in setup.py was also causing issues when building the conda package, because conda has its own version of Bazel and it doesn't support `--version`.