Treats failures of provider.internal_ip during node drain as non-fatal.
For example, if a node is deleted by a third party between the time it's scheduled for termination and drained, there will now be no error on GCP.
Closes #21151
We fix the issue that the concurrency group name of an actor task could not be specified at runtime, enabling the following usage:
```python
a.f2.options(concurrency_group="compute").remote()
```
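For context, a rough end-to-end sketch of the usage this enables; the actor definition (its name, concurrency groups, and method body) is assumed for illustration:
```python
import ray

ray.init()

# An actor declared with named concurrency groups (illustrative definition).
@ray.remote(concurrency_groups={"io": 2, "compute": 4})
class AsyncActor:
    async def f2(self):
        return "done"

a = AsyncActor.remote()
# With this fix, the concurrency group can be chosen at call time:
print(ray.get(a.f2.options(concurrency_group="compute").remote()))
```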
This PR adds documentation for Workflow Metadata, support for which was recently added in https://github.com/ray-project/ray/pull/19372.
Co-authored-by: Yi Cheng <74173148+iycheng@users.noreply.github.com>
This reverts commit 968f08607b.
It breaks e2e tests where worker nodes cannot start, e.g.:
```
Traceback (most recent call last):
  File "/home/ray/anaconda3/bin/ray", line 8, in <module>
    sys.exit(main())
  File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/scripts/scripts.py", line 1961, in main
    return cli()
  File "/home/ray/anaconda3/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/ray/anaconda3/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py", line 808, in wrapper
    return f(*args, **kwargs)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/scripts/scripts.py", line 733, in start
    address_ip, password=redis_password)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 593, in create_redis_client
    _, redis_ip_address, redis_port = validate_bootstrap_address(redis_address)
  File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/services.py", line 494, in validate_bootstrap_address
    raise ValueError("Malformed address. Expected '<host>:<port>'.")
ValueError: Malformed address. Expected '<host>:<port>'.
```
This PR unreverts #21115, fixing the handling of an `"auto"` address in the `RAY_ADDRESS` environment variable.
Co-authored-by: Mingwei Tian <mwtian@anyscale.com>
Dask defaults to a disk-based shuffle even though we're using a distributed scheduler, which appears to result in dropped data since the filesystem isn't shared across nodes. Dask Distributed manually sets the shuffle algorithm in the global config to the task-based shuffle, which the Dask-on-Ray scheduler should probably do as well.
This PR adds a Dask config helper, `enable_dask_on_ray`, that sets Dask-on-Ray as the default scheduler along with changing the default shuffle to a task-based shuffle. The shuffle method can still be overridden by the user by manually specifying `df.set_index(shuffle="disk")`.
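A usage sketch of the helper described above; the DataFrame and the shuffle override shown here are illustrative, not taken from the PR:
```python
import pandas as pd
import dask.dataframe as dd
import ray
from ray.util.dask import enable_dask_on_ray

ray.init()
# Make Dask-on-Ray the default scheduler and switch Dask's default shuffle
# to the task-based shuffle.
enable_dask_on_ray()

df = dd.from_pandas(pd.DataFrame({"a": range(100), "b": range(100)}), npartitions=4)
df.set_index("a").compute()                  # uses the task-based shuffle by default
df.set_index("a", shuffle="disk").compute()  # per-call override back to disk-based
```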
This change adds support for parsing `--address` as the bootstrap address and treating `--port` as the GCS port when using GCS for bootstrapping.
Not launching Redis in GCS bootstrapping mode, and using GCS to fetch initial cluster information, will be implemented in a subsequent change.
Also made some cleanups.
* updating azure autoscaler versions and backwards compatibility, and moving to azure-identity based authentication
* adding azure sdk rqmts for tests
* updating azure test requirements and adding wrapper function for azure sdk function resolution
* adding docstring to get_azure_sdk_function
Co-authored-by: Scott Graham <scgraham@microsoft.com>
Currently, the URI reference logic in the raylet is:
- At the job level, add a URI reference when the job starts and remove it when the job finishes.
- At the actor level, add and remove URI references for detached actors only.
In this PR, the logic is optimized to:
- At the job level, first check whether the runtime env should be installed eagerly. If so, add or remove the URI reference.
- At the actor level:
  * First, add a URI reference for the starting worker process, to avoid the runtime env being GC'd before the worker registers.
  * Second, add a URI reference for each worker thread of the worker process. We remove the reference when the worker disconnects.
- Besides, we move the `RuntimeEnvManager` instance from `node_manager` to `worker_pool`.
- Enable the test `test_actor_level_gc` and add some tests in the Python and worker pool tests.
Previously, GcsClient accepted only Redis. To make it work without Redis, we need to be able to pass the GCS address to the GCS client as well.
In this PR, we add GCS-related info into GcsClientOptions so that we can connect to the GCS directly with the GCS address.
This PR is part of GCS bootstrap. In a following PR, we'll add functionality to set the correct GcsClientOptions based on flags.
This PR implements gRPC timeouts for various blocking RPCs.
Previously, the promise-based timeout didn't work properly because the client didn't cancel timed-out RPCs. This PR implements RPC timeouts properly.
This PR supports:
- Blocking RPCs for core APIs: creating / getting / removing actors and placement groups.
- Internal KV ops
The global state accessor also has infinite blocking calls that we need to fix, but fixing them requires a huge refactoring, so this will be done in a separate PR.
The same applies to the placement group calls (they will be handled in a separate PR).
Also, this means we can have scenarios where the client receives the DEADLINE_EXCEEDED error but the handler is still invoked. Right now, this is not handled correctly in Ray. We should start thinking about how to handle these scenarios better.
The current logs API simply returns a str to unblock development and integration. We should add proper log streaming for better UX and external job manager integration.
Co-authored-by: Sven Mika <sven@anyscale.io>
Co-authored-by: sven1977 <svenmika1977@gmail.com>
Co-authored-by: Ed Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>
Uses a direct `pip install` instead of creating a conda env to make pip installs incremental to the cluster environment.
Separates the handling of `pip` and `conda` dependencies.
The new `pip` approach still works if only the base Ray is installed on the cluster and the user specifies libraries like "ray[serve]" in the `pip` field. The mechanism is as follows:
- We don't actually want to reinstall ray via pip, since this could lead to version mismatch issues. Instead, we want to use the Ray that's already installed in the cluster.
- So if "ray" was included by the user in the pip list, remove it
- If a library "ray[serve]" or "ray[tune, rllib]" was included in the pip list, remove it and replace it by its dependencies (e.g. "uvicorn", "requests", ..)
Co-authored-by: architkulkarni <arkulkar@gmail.com>
Co-authored-by: architkulkarni <architkulkarni@users.noreply.github.com>
The `results` with fewer tasks contain unstable wait times, so the number of tasks was increased in the hope of getting less noisy wait times. Minor changes in the assert comparisons have also been made to reduce error.
Currently, when a GCS RPC fails with a gRPC unavailable error because the GCS is dead, it will retry forever.
b3a9d4d87d/src/ray/rpc/gcs_server/gcs_rpc_client.h (L57)
And it takes about 10 minutes to detect the GCS server failure, meaning that if the GCS is dead, users will only notice after 10 minutes.
This can easily cause confusion that the cluster is hanging (since users are not that patient). Also, since the GCS is not fault tolerant in OSS right now, 10 minutes is too long a timeout for detecting GCS death.
This PR changes the value to 60 seconds, which I believe is much more reasonable (since this is the same value as our blocking RPC call timeout).
This fixes the bug where an empty line is printed to the driver when multiple threads are used.
e.g.,
```
2021-12-12 23:20:06,876 INFO worker.py:853 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379
(TESTER pid=12344)
(TESTER pid=12348)
```
## How does the current log work?
After the actor initialization method is done, it prints `({repr(actor)})` to stdout and stderr, which will be read by the log monitor. The log monitor can then parse the actor repr and start publishing log messages to the driver with the repr name.
## Problem
If the actor init method starts a new background thread, then when we call `print({repr(actor)})`, the flush() does not seem to happen atomically. Based on my observation, it flushes `repr(actor)` first, and then flushes the `end="\n"` (the default end parameter of the print function) afterwards.
Since the log monitor never closes the file descriptor, it is possible that it reads the log line before the end separator "\n" is flushed. That means the following scenarios can happen.
Expected
- `"repr(actor)\n"` is written to the log file. (which includes the default print end separator `"\n"`).
- Log monitor reads `repr(actor)\n` and publishes it.
Edge case
- `"repr(actor)"` is written to the log
- Log monitor publishes `repr(actor)`.
- `"\n"` is written to the log (end separator).
- Log monitor publishes `"\n"`.
Note that this only happens when we print the actor repr "after" actor initialization. I think that since a new thread is running in the background, the GIL comes in and creates a gap between `flush(repr(actor))` and `flush("\n")`, which causes the issue.
I verified this is fixed when I add the separator ("\n") directly to the string and make the end separator an empty string.
An alternative fix is to file-lock the log file whenever the log monitor reads or the core worker writes, but that seems too heavyweight a solution compared to its benefit.
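A minimal sketch of the flush behavior and the fix; the actor repr string here is illustrative:
```python
actor_repr = "(TESTER pid=12344)"

# Before: the repr and the trailing "\n" may be flushed separately, so the
# log monitor can read the line before the newline arrives.
print(actor_repr)

# After: append the newline to the string itself and make print's end
# separator empty, so the repr and "\n" go out in a single write.
print(f"{actor_repr}\n", end="")
```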
Currently, there's no way to specify the gRPC port for dashboard agents (only HTTP ports can be configured). This PR allows users to do that.
This also adds more features:
- Format the port conflict error message better. The default worker ports span 10002 to 19999, and listing each one spams the terminal; the list is now formatted as a range, 10002-19999.
- Add dashboard http port to the port conflict list.
`test_multi_node_3` failed because we kill the raylet before the cluster is up, which leads the raylet to become a zombie process. This fix waits until the cluster is up before killing it.
Resubmit the PR https://github.com/ray-project/ray/pull/19936
I've figured out that the test case `//rllib:tests/test_gpus::test_gpus_in_local_mode` failed due to a deadlock in local mode.
In local mode, if the user code submits another task during the execution of the current task, the `CoreWorker::actor_task_mutex_` may cause a deadlock.
The solution is quite simple: release the lock before executing the task in local mode.
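A minimal sketch (not the actual test) of the pattern that used to deadlock: an actor task that submits and blocks on another task while it is still executing in local mode.
```python
import ray

ray.init(local_mode=True)

@ray.remote
def helper():
    return 1

@ray.remote
class Worker:
    def run(self):
        # Submitting and waiting on another task from inside this actor task
        # previously deadlocked while the actor task lock was held.
        return ray.get(helper.remote())

w = Worker.remote()
print(ray.get(w.run.remote()))  # expected: 1
```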
In the commit 7c2f61c76c:
1. Release the lock in local mode to fix the bug. @scv119
2. `test_local_mode_deadlock` added to cover the case. @rkooo567
3. Left a trivial change in `rllib/tests/test_gpus.py` to make `RAY_CI_RLLIB_DIRECTLY_AFFECTED` take effect.
This is the first PR of a series of Ray Dataset and Pandas integration improvements.
This PR refactors `ArrowRow`, `ArrowBlockBuilder`, `ArrowBlockAccessor` by extracting base classes `TableRow`, `TableBlockBuilder`, `TableBlockAccessor`, which can then be inherited for pandas DataFrame support in the next PR.
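A rough structural sketch of the refactor (class names from the PR, method bodies assumed) showing how a pandas-backed builder could inherit the same base as the Arrow one:
```python
class TableBlockBuilder:
    """Format-agnostic base: accumulates rows; subclasses decide the block type."""
    def __init__(self):
        self._rows = []

    def add(self, row: dict) -> None:
        self._rows.append(row)

    def build(self):
        raise NotImplementedError

class ArrowBlockBuilder(TableBlockBuilder):
    def build(self):
        import pyarrow as pa
        # Convert accumulated rows into a columnar Arrow table.
        columns = {k: [r[k] for r in self._rows] for k in self._rows[0]}
        return pa.Table.from_pydict(columns)

class PandasBlockBuilder(TableBlockBuilder):  # to be added in the next PR
    def build(self):
        import pandas as pd
        return pd.DataFrame(self._rows)
```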
In our code, we wish to have control over when GC runs. We do this by calling `gc.disable()` and then manually triggering GC at moments that work for us (as sketched below). This does not work if Ray re-enables GC.
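A minimal sketch of the usage pattern described above:
```python
import gc

gc.disable()   # the application takes control of garbage collection
# ... latency-sensitive work; Ray must not silently re-enable GC here ...
gc.collect()   # collect only at a moment that works for us
```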
Co-authored-by: hauntsaninja <>
PR #19014 introduced the idea of a StartupToken to uniquely identify a worker via a counter. This PR:
- Returns the Process and the StartupToken from StartWorkerProcess (previously only the Process was returned).
- Changes the starting_workers_to_tasks map to index via the StartupToken, which seems to fix the Windows failures.
- Unskips the Windows tests in test_basic_2.py.
It seems once a fix to PR #18167 goes in, the starting_workers_to_tasks map will be removed, which should remove the need for the changes to StartWorkerProcess made in this PR.