This change adds support for parsing `--address` as the bootstrap address and treating `--port` as the GCS port when using GCS for bootstrapping.
Not launching Redis in GCS bootstrapping mode, and using GCS to fetch the initial cluster information, will be implemented in a subsequent change.
Also includes some cleanups.
* updating Azure autoscaler versions and backwards compatibility, and moving to azure-identity based authentication
* adding Azure SDK requirements for tests
* updating Azure test requirements and adding a wrapper function for Azure SDK function resolution
* adding a docstring to `get_azure_sdk_function` (a sketch of such a wrapper follows below)
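For illustration, such a wrapper could resolve SDK functions across versions as in the sketch below (the name `get_azure_sdk_function` matches the bullet above, but the actual signature and error handling in the autoscaler may differ):

```python
# Sketch: newer azure-mgmt clients prefix long-running operations with
# "begin_", while older SDK versions expose the bare name.
from typing import Any, Callable

def get_azure_sdk_function(client: Any, function_name: str) -> Callable:
    """Return `function_name` from `client`, falling back to the
    `begin_`-prefixed variant used by newer Azure SDK releases."""
    func = getattr(client, function_name, None) or getattr(
        client, f"begin_{function_name}", None)
    if func is None:
        raise AttributeError(
            f"{type(client).__name__} has neither '{function_name}' "
            f"nor 'begin_{function_name}'")
    return func
```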
Co-authored-by: Scott Graham <scgraham@microsoft.com>
Currently, the URI reference logic in the raylet is:
- At the job level, add a URI reference when the job starts and remove it when the job finishes.
- At the actor level, add and remove URI references for detached actors only.
In this PR, the logic is optimized to:
- At the job level, first check whether the runtime env should be installed eagerly. If so, add or remove the URI reference.
- At the actor level:
* First, add a URI reference when starting the worker process, to avoid the runtime env being garbage collected before the worker registers.
* Second, add a URI reference for each worker thread of the worker process, and remove it when the worker disconnects (see the sketch after this list).
- Besides, we move the `RuntimeEnvManager` instance from `node_manager` to `worker_pool`.
- Enable the test `test_actor_level_gc` and add some tests in Python and in the worker pool test.
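To illustrate the actor-level flow, here is a minimal Python sketch of the reference counting described above (illustrative only; the real logic lives in the C++ raylet and `RuntimeEnvManager`, and these names are made up):

```python
# Sketch of URI reference counting: the env is only garbage collected once
# every holder (job, starting worker process, registered worker) releases it.
from collections import defaultdict

class UriRefCounter:
    def __init__(self):
        self._refs = defaultdict(int)

    def add_ref(self, uri: str) -> None:
        self._refs[uri] += 1

    def remove_ref(self, uri: str) -> None:
        self._refs[uri] -= 1
        if self._refs[uri] <= 0:
            del self._refs[uri]
            print(f"runtime env {uri} can now be garbage collected")

counter = UriRefCounter()
uri = "conda://my_env"
counter.add_ref(uri)     # held while the worker process is starting
counter.add_ref(uri)     # held by the worker once it registers
counter.remove_ref(uri)  # starting-process reference released
counter.remove_ref(uri)  # worker disconnected; env may be GC'd now
```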
Previously, GcsClient accepted only a Redis address. To make it work without Redis, we need to be able to pass the GCS address to the GCS client as well.
In this PR, we add GCS-related information to GcsClientOptions so that we can connect to the GCS directly with the GCS address.
This PR is part of the GCS bootstrap work. In a following PR, we'll add functionality to set the correct GcsClientOptions based on flags.
The current resource reporting is broken in OSS; revert the change. For example, it reported
`InitialConfigResources: {node:172.31.45.118: 1.000000}, {object_store_memory: 468605759.960938 GiB}`
for a 10 GB object store.
This PR implements gRPC timeouts for various blocking RPCs.
Previously, the promise-based timeout didn't work properly because the client didn't cancel timed-out RPCs. This PR implements RPC timeouts properly.
This PR supports:
- Blocking RPCs for core APIs: creating / getting / removing actors and placement groups.
- Internal KV ops.
The global state accessor also has infinite blocking calls that we need to fix, but fixing them requires a big refactoring, so it will be done in a separate PR.
The same goes for the placement group calls (they will be done in a separate PR).
Also, this means we can have scenarios where the client receives a DEADLINE_EXCEEDED error but the handler is still invoked. Right now, this is not handled correctly in Ray, and we should start thinking about how to handle these scenarios better.
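For illustration, this is roughly what the client-side deadline pattern looks like in gRPC (shown in generic Python gRPC, not the Ray C++ client; the service, stub, and message names are hypothetical):

```python
import grpc

# Hypothetical generated stubs for a small KV service.
from my_service_pb2 import GetRequest
from my_service_pb2_grpc import KvStub

channel = grpc.insecure_channel("127.0.0.1:6379")
stub = KvStub(channel)

try:
    # The deadline makes the call fail on the client after 60s. Note that the
    # server handler may still execute, which is the DEADLINE_EXCEEDED
    # ambiguity mentioned above.
    reply = stub.Get(GetRequest(key="foo"), timeout=60)
except grpc.RpcError as e:
    if e.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
        print("RPC timed out after 60s")
    else:
        raise
```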
This adds memory monitoring to the scalability envelope tests so that we can compare peak memory usage between non-HA and HA.
NOTE: the current way of adding the memory monitor is not great, and we should implement a fixture to support this better, but that's not in progress yet.
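For reference, a simple peak-RSS monitor could look like the sketch below (illustrative only, using psutil; this is not the actual monitor used in the tests):

```python
import threading
import time

import psutil

class PeakMemoryMonitor:
    """Polls this process's RSS in a background thread and records the peak."""

    def __init__(self, interval_s: float = 0.5):
        self._interval_s = interval_s
        self._proc = psutil.Process()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self.peak_rss = 0

    def _run(self):
        while not self._stop.is_set():
            self.peak_rss = max(self.peak_rss, self._proc.memory_info().rss)
            time.sleep(self._interval_s)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

# Usage: `with PeakMemoryMonitor() as m: run_workload()`, then read `m.peak_rss`.
```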
The current logs API simply returns a `str` to unblock development and integration. We should add proper log streaming for better UX and external job manager integration.
Co-authored-by: Sven Mika <sven@anyscale.io>
Co-authored-by: sven1977 <svenmika1977@gmail.com>
Co-authored-by: Ed Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>
Uses a direct `pip install` instead of creating a conda env, so that pip installs are incremental to the cluster environment.
Separates the handling of `pip` and `conda` dependencies.
The new `pip` approach still works if only the base Ray is installed on the cluster and the user specifies libraries like "ray[serve]" in the `pip` field. The mechanism is as follows (see the sketch after this list):
- We don't actually want to reinstall Ray via pip, since this could lead to version mismatch issues. Instead, we want to use the Ray that's already installed in the cluster.
- So if "ray" was included by the user in the pip list, remove it.
- If a library such as "ray[serve]" or "ray[tune, rllib]" was included in the pip list, remove it and replace it with its dependencies (e.g. "uvicorn", "requests", ...).
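A rough sketch of that rewriting step (illustrative only; the helper name, regex, and the extras-to-dependencies map below are made up, not the actual implementation):

```python
import re

# Hypothetical, incomplete map from Ray extras to their pip dependencies.
RAY_EXTRA_DEPS = {
    "serve": ["uvicorn", "requests"],
}

def rewrite_pip_list(pip_packages):
    """Drop 'ray' entries and expand 'ray[extra, ...]' into the extras' deps."""
    result = []
    for pkg in pip_packages:
        match = re.fullmatch(r"ray(\[([\w,\s]+)\])?", pkg.strip())
        if not match:
            result.append(pkg)  # unrelated package: keep as-is
            continue
        if match.group(2):      # e.g. "ray[serve]" or "ray[tune, rllib]"
            for extra in match.group(2).split(","):
                result.extend(RAY_EXTRA_DEPS.get(extra.strip(), []))
        # a bare "ray" entry is simply dropped
    return result

# rewrite_pip_list(["ray[serve]", "numpy"]) -> ["uvicorn", "requests", "numpy"]
```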
Co-authored-by: architkulkarni <arkulkar@gmail.com>
Co-authored-by: architkulkarni <architkulkarni@users.noreply.github.com>
The `results` with fewer tasks contain unstable wait times, so the number of tasks was increased in the hope of less noisy wait times. Minor changes in assert comparisons have also been made to reduce error.
Before this PR, GcsActorManager::CreateActor() would replace the actor's namespace with
the owner job's namespace, even if the actor was created with a user-specified namespace.
But in `named_actors_`, the actor is registered under the user-specified namespace by
GcsActorManager::RegisterActor before CreateActor() is called. As a result,
GcsActorManager::DestroyActor failed to find the actor in `named_actors_` under the owner
job's namespace, so the entry was never removed, and reusing the same actor name in the
same namespace failed.
issue #20611
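A rough user-level repro of the symptom (illustrative sketch; the exact namespace setup and actor definition are made up):

```python
import ray

ray.init(namespace="job_namespace")

@ray.remote
class Counter:
    pass

# Create a named, detached actor in a user-specified namespace that differs
# from the job's namespace.
a = Counter.options(
    name="counter", namespace="user_namespace", lifetime="detached").remote()
ray.kill(a)

# Before this fix, DestroyActor looked the actor up under the job's namespace,
# so the name "counter" was never released and re-creating it could fail.
b = Counter.options(
    name="counter", namespace="user_namespace", lifetime="detached").remote()
```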
Currently, when a GCS RPC fails with a gRPC UNAVAILABLE error because the GCS is dead, it retries forever.
b3a9d4d87d/src/ray/rpc/gcs_server/gcs_rpc_client.h (L57)
It takes about 10 minutes to detect the GCS server failure, meaning that if the GCS is dead, users only notice after 10 minutes.
This can easily cause confusion that the cluster is hanging (since users are not that patient). Also, since the GCS is not fault tolerant in OSS right now, 10 minutes is too long a timeout for detecting GCS death.
This PR changes the value to 60 seconds, which I believe is much more reasonable (since this is the same value as our blocking RPC call timeout).
This fixes the bug where an empty line is printed to the driver when multiple threads are used.
e.g.,
```
2021-12-12 23:20:06,876 INFO worker.py:853 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379
(TESTER pid=12344)
(TESTER pid=12348)
```
## How does the current log work?
After the actor initialization method is done, it prints `({repr(actor)})` to stdout and stderr, which will be read by the log monitor. The log monitor can then parse the actor repr and start publishing log messages to the driver under the repr name.
## Problem
If the actor init method starts a new background thread, then when we call `print({repr(actor)})`, the flush() does not seem to happen atomically. Based on my observation, it flushes `repr(actor)` first and then flushes the `end="\n"` (the default `end` parameter of the print function) afterwards.
Since the log monitor never closes the file descriptor, it is possible that it reads the log line before the end separator "\n" is flushed. That is, the following scenarios can happen.
Expected:
- `"repr(actor)\n"` is written to the log file. (which includes the default print end separator `"\n"`).
- Log monitor reads `repr(actor)\n` and publishes it.
Edge case:
- `"repr(actor)"` is written to the log
- Log monitor publishes `repr(actor)`.
- `"\n"` is written to the log (end separator).
- Log monitor publishes `"\n"`.
Note that this only happens when we print the actor repr "after" actor initialization. I think that since a new thread is running in the background, the GIL comes in and creates a gap between `flush(repr(actor))` and `flush("\n")`, which causes the issue.
I verified the issue is fixed when I add the separator ("\n") directly to the string and make the end separator an empty string.
An alternative fix is to file-lock the log file whenever the log monitor reads or the core worker writes, but that seems too heavyweight a solution compared to its benefit.
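A minimal sketch of the chosen fix (illustrative; the function name and exact streams are not the actual Ray internals): embed the newline in the printed string and pass `end=""` so the repr and its terminator go out in a single write.

```python
import sys

def print_actor_repr(actor_repr: str) -> None:
    # One string containing the trailing "\n", written in a single print call,
    # so the log monitor never observes the repr without its line terminator.
    print(f"({actor_repr})\n", end="", file=sys.stdout, flush=True)
    print(f"({actor_repr})\n", end="", file=sys.stderr, flush=True)
```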
Currently, there's no way to specify the gRPC port for dashboard agents (only HTTP ports can be configured). This PR allows users to do that.
This also adds more features:
- Format the port-conflict error message better. The default worker ports span 10002-19999, and printing every port spams the terminal; the list is now collapsed into a range such as 10002-19999 (see the sketch below).
- Add the dashboard HTTP port to the port-conflict list.
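A rough sketch of collapsing a port list into compact ranges for the error message (illustrative only, not the actual Ray implementation):

```python
from typing import Iterable, List

def format_ports(ports: Iterable[int]) -> str:
    """Collapse consecutive ports into 'start-end' ranges, e.g. '10002-19999'."""
    sorted_ports = sorted(set(ports))
    if not sorted_ports:
        return ""
    ranges: List[str] = []
    start = prev = sorted_ports[0]
    for p in sorted_ports[1:]:
        if p == prev + 1:
            prev = p
            continue
        ranges.append(f"{start}-{prev}" if start != prev else str(start))
        start = prev = p
    ranges.append(f"{start}-{prev}" if start != prev else str(start))
    return ", ".join(ranges)

# format_ports(range(10002, 20000)) -> "10002-19999"
```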
`test_multi_node_3` failed because we killed the raylet before the cluster was up, which caused the raylet to become a zombie process. This fix waits until the cluster is up before killing it.