The previously observed Python gRPC warning / logspam seems to have been fixed in grpcio >= 1.48, and users would like to upgrade beyond grpcio 1.43 for better M1 support. However, grpcio 1.48 has not been released yet, so there is still a risk this change will need to be reverted if a problem is discovered later with Ray nightly + grpcio 1.48.
This sets the CUDA Stream on the correct device (and not the default one) when calling train.torch.prepare_data_loader(auto_transfer=True).
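A minimal sketch of the underlying idea in plain PyTorch (illustrative only, not the actual prepare_data_loader internals):

```
import torch

# Hypothetical worker setup: the worker's assigned GPU is not the default cuda:0.
device = torch.device("cuda", 1)
batch_cpu = torch.randn(32, 3, 224, 224).pin_memory()

# Create the stream on the target device rather than the default device.
stream = torch.cuda.Stream(device=device)
with torch.cuda.stream(stream):
    # The async host-to-device copy is now issued on a stream that belongs
    # to the correct device.
    batch_gpu = batch_cpu.to(device, non_blocking=True)
torch.cuda.current_stream(device).wait_stream(stream)
```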
Signed-off-by: Matthew Deng <matt@anyscale.com>
Previously, using an env_hook with Ray Client would only execute the env_hook on the server side (a Ray cluster machine). An env_hook defined on the client side would never be executed. But the main problem is with the server-side env_hook.
Consider the simple example where the env_hook rewrites the `working_dir` or `py_modules` with a local directory.
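A minimal sketch of such a hook, assuming it is a function that receives and returns the runtime_env dict (the exact wiring, e.g. via the RAY_RUNTIME_ENV_HOOK mechanism, is cluster-specific):

```
# my_env_hook.py -- hypothetical hook module.
def runtime_env_hook(runtime_env: dict) -> dict:
    runtime_env = runtime_env or {}
    # Rewriting working_dir to a local directory: fine when ray.init() runs
    # directly on a cluster machine, but invalid once Ray Client has already
    # replaced the field with a GCS URI (see below).
    runtime_env["working_dir"] = "/path/on/cluster/machine"
    return runtime_env
```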
Currently, when using Ray Client, the `working_dir` and `py_modules` are uploaded to the GCS before `ray.init()` is called on the server. This is a fundamental constraint because the server-side driver script needs to be able to import modules from the `working_dir` or `py_modules`. After the upload, these fields are overwritten with the URIs for the uploaded packages.
After this happens, on the server side Ray expects the `working_dir` and `py_modules` fields to contain only GCS URIs. So overwriting `working_dir` with a local directory after this point doesn't make sense (and Ray will rightfully throw a RuntimeEnv validation error here).
If a cluster is set up with such an env hook, it will only work when `ray.init()` is called by the user on a cluster machine; i.e. it will only work in non-Ray Client cases. If a user ever wants to use Ray Client with this cluster, it will be broken with no way to disable the env hook. To remedy this, this PR disables the execution of the env_hook when using Ray Client.
We can consider adding support in the future for env_hooks to be executed on the client side when using Ray Client.
This picks up https://github.com/ray-project/ray/pull/24088
The `get_node_table` already contains each node's resources, so we don't need to invoke `get_node_resource_info` for every node again. This change eliminates many RPC calls and makes the API more efficient.
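A rough sketch of the difference (API names follow the description above; `state` stands for the global state accessor, and the exact fields are assumptions):

```
# Before: one extra RPC per node just to fetch its resources.
nodes = state.get_node_table()
resources = {
    node["NodeID"]: state.get_node_resource_info(node["NodeID"])  # N RPCs
    for node in nodes
}

# After: the node table already carries each node's resources.
nodes = state.get_node_table()
resources = {node["NodeID"]: node["Resources"] for node in nodes}
```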
This PR prevents the log monitor from keeping files open for long periods of time. In settings where the autoscaler and head node are not tightly coupled, leaving files open assumes that the inode of a file never changes; but depending on how filesystem synchronization between the autoscaler and head node containers works, the inode could change. Thus, we should keep trying to reopen files.
This is done by setting the max number of open files to 1, so that it's easy to revert this behavior.
Co-authored-by: Alex <alex@anyscale.com>
Why are these changes needed?
Together with ray-project/kuberay#391, this should address #26064
Have the autoscaler try to make the Ray log directory before setting up logging.
ray-project/kuberay#391 should be good enough for this, but this PR makes things safer in case the KubeRay user overrides log mounts or something like that.
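A minimal sketch of the safeguard, assuming the standard Ray log path (the real path and call site come from the autoscaler's entrypoint):

```
import logging
import os

log_dir = "/tmp/ray/session_latest/logs"  # assumption: default Ray log dir
os.makedirs(log_dir, exist_ok=True)  # ensure the directory exists first
logging.basicConfig(
    filename=os.path.join(log_dir, "monitor.log"), level=logging.INFO
)
```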
I am surprised by the fact that `GetTimeoutError` is not a subclass of `TimeoutError`, which is counter-intuitive and may discourage users from trying the timeout feature in `ray.get`, because you have to "guess" the correct error type. For most people, I believe the first error type in their mind would be `TimeoutError`.
This PR fixes this.
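A minimal sketch of the fix, assuming `GetTimeoutError` currently derives from a Ray base exception (the exact hierarchy lives in ray.exceptions):

```
class RayError(Exception):
    """Simplified stand-in for Ray's base exception."""

# Before: GetTimeoutError(RayError) -- `except TimeoutError` never catches it.
# After: also a built-in TimeoutError, matching most users' first guess.
class GetTimeoutError(RayError, TimeoutError):
    """Raised when ray.get(..., timeout=...) exceeds the timeout."""

try:
    raise GetTimeoutError("timed out after 1s")
except TimeoutError:
    print("caught as TimeoutError")  # now works
```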
✨ Add extra types for datasets
Why are these changes needed?
By using Protocols for the definition of callable classes, the type parameter in datasets can be preserved for the methods that work on rows.
This allows type information to be preserved in transformed datasets, and enables editor support, including autocompletion, for the parameters of lambdas passed to methods like ds.map() and ds.filter().
For example, after several transformations, ds.filter() still gets autocompletion for the x parameter in the lambda; the editor knows it's an int.
https://user-images.githubusercontent.com/1326112/179423609-6d77da23-5f5e-47ce-a17f-6eb0d06d82d0.png
I see there was a TODO comment to do this trick with the protocol, so Clark already had this idea. 💡 The good news is that, by using typing_extensions, it's not necessary to wait until Python 3.8 is the minimum supported version.
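A minimal sketch of the Protocol trick (illustrative names, not the exact definitions added to ray.data):

```
from typing import Generic, TypeVar
from typing_extensions import Protocol  # works on Python < 3.8 too

T = TypeVar("T")
U = TypeVar("U")

class RowCallable(Protocol[T, U]):
    def __call__(self, row: T) -> U: ...

class Dataset(Generic[T]):
    def map(self, fn: "RowCallable[T, U]") -> "Dataset[U]": ...
    def filter(self, fn: "RowCallable[T, bool]") -> "Dataset[T]": ...
```

Because the callable type carries both the input and output row types, each transformation can compute the element type of the resulting dataset.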
Currently replicas for each deployment are packed based on Ray's default scheduling policy. This is problematic when node failures occur because a given deployment may have all of its replicas crash at once.
This PR changes the default behavior to a soft SPREAD. If the replicas can be spread given current resources, they will be; otherwise, they will still be placed.
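For illustration, this is how a soft spread looks with Ray's scheduling strategies (a sketch of the concept; Serve applies this internally to its replica actors):

```
import ray

# "SPREAD" is best-effort: actors are spread across nodes when resources
# allow, and still scheduled somewhere when spreading isn't feasible.
@ray.remote(num_cpus=1, scheduling_strategy="SPREAD")
class Replica:
    def ping(self):
        return "ok"

replicas = [Replica.remote() for _ in range(3)]
print(ray.get([r.ping.remote() for r in replicas]))
```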
Signed-off-by: Edward Oakes <ed.nmi.oakes@gmail.com>
NOTE: tabulate is copied/pasted into the codebase for table formatting.
This PR changes the default layout to be the table format for both summary and list APIs.
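For illustration, the vendored copy keeps tabulate's standard interface (a sketch; the exact import path inside Ray is an assumption):

```
from tabulate import tabulate

rows = [["actor_1", "ALIVE"], ["actor_2", "DEAD"]]
print(tabulate(rows, headers=["NAME", "STATE"]))
```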
* Revert "Revert "Bump pytest from 5.4.3 to 7.0.1""
This reverts commit ab10890e90.
Signed-off-by: Riatre Foo <foo@riat.re>
* Fix missing test data files dependency in rllib/BUILD
See #26334 and #26517 for context.
Once this is in, it should be good to roll forward again.
Signed-off-by: Riatre Foo <foo@riat.re>
* debug: run all tests
Signed-off-by: Riatre Foo <foo@riat.re>
* Revert "debug: run all tests"
This reverts commit 0c5e796b0eb437d64922f66749c61b0412486970.
Signed-off-by: Riatre Foo <foo@riat.re>
* fix new tests since last rebase
Signed-off-by: Riatre Foo <foo@riat.re>
As a follow-up to #26619 (comment) and #26619 (comment), here we change from PermissionError to OSError to be consistent with the original error, and also rename the function from _handle_read_s3_files_error to _handle_read_os_error, which is more general so that we can handle other file systems such as GCS in the future.
Also change to handle any error message matching the pattern `AWS Error [code xxx]: No response body`, since a new issue with error code 100 was raised in #26672.
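A minimal sketch of the renamed handler (the real helper in ray.data handles more cases; the regex here just illustrates the "any error code" matching described above):

```
import re

def _handle_read_os_error(error: OSError, path: str) -> None:
    # Match any "AWS Error [code N]: No response body", not just code 15.
    if re.search(r"AWS Error \[code \d+\]: No response body", str(error)):
        raise OSError(
            f'Failing to read AWS S3 file(s): "{path}". '
            "Please check that the file exists and has proper AWS credentials."
        ) from error
    raise error
```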
This introduces an easy interface to retrieve the number of errored and terminated (non-errored) trials from the result grid.
Previously `tune.run(raise_on_failed_trial)` could be used to raise a TuneError if at least one trial failed. We've removed this option to make sure we always get a return value. `ResultGrid.num_errored` will make it easy for users to identify if trials failed and react to it instead of the old try-catch loop.
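An illustrative usage sketch (`my_trainable` and `param_space` are placeholders):

```
from ray import tune

tuner = tune.Tuner(my_trainable, param_space=param_space)
result_grid = tuner.fit()  # no longer raises on failed trials

# Replaces the old try/except around tune.run(..., raise_on_failed_trial=True):
if result_grid.num_errored > 0:
    print(f"{result_grid.num_errored} trial(s) errored")
```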
Signed-off-by: Kai Fricke <kai@anyscale.com>
I run several linters, including mypy, in my local environment.
This is a PR of style nits for autoscaler.py meant to silence my linters.
This PR also adds a mypy check for autoscaler.py.
When submitting a task, the GIL is not released, due to this PR.
This causes a potential deadlock when an actor dies and the GCS sends a notification. In this case, the callback function submitted by GetAsync executes some Python code while the GIL is still held by task submission, and task submission itself is blocked on a lock that is never released.
The previous PR appeared to fix a memory issue, but that issue no longer seems to be present.
Signed-off-by: Yi Cheng <chengyidna@gmail.com>
The Tuner API is missing some arguments that tune.run() currently supports. This PR adds a number of them and adds a test to make sure they are correctly passed.
Signed-off-by: Kai Fricke <kai@anyscale.com>
In https://github.com/ray-project/ray/issues/19799 and https://github.com/ray-project/ray/issues/24184, we found that when using Datasets to read S3 files, if the file's credentials are not set up correctly, the `read_xxx` APIs throw a confusing error message containing `AWS Error [code 15]: No response body`, like below:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/chengsu/ray/python/ray/data/read_api.py", line 758, in read_binary_files
return read_datasource(
File "/Users/chengsu/ray/python/ray/data/read_api.py", line 267, in read_datasource
requested_parallelism, min_safe_parallelism, read_tasks = ray.get(
File "/Users/chengsu/ray/python/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/Users/chengsu/ray/python/ray/_private/worker.py", line 2196, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(PermissionError): ray::_get_read_tasks() (pid=80200, ip=127.0.0.1)
File "pyarrow/_fs.pyx", line 439, in pyarrow._fs.FileSystem.get_file_info
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: When getting information for key 'trainaasdasd' in bucket 'balajis-tiny-imagenet': AWS Error [code 15]: No response body.
```
The error message mentions nothing related to file credentials, so it's quite confusing. This PR catches the error and gives a better error message:
```
ray::_get_read_tasks() (pid=80200, ip=127.0.0.1)
File "/Users/chengsu/ray/python/ray/data/read_api.py", line 1127, in _get_read_tasks
reader = ds.create_reader(**kwargs)
File "/Users/chengsu/ray/python/ray/data/datasource/file_based_datasource.py", line 212, in create_reader
return _FileBasedDatasourceReader(self, **kwargs)
File "/Users/chengsu/ray/python/ray/data/datasource/file_based_datasource.py", line 350, in __init__
self._paths, self._file_sizes = meta_provider.expand_paths(
File "/Users/chengsu/ray/python/ray/data/datasource/file_meta_provider.py", line 173, in expand_paths
_handle_read_s3_files_error(e, path)
File "/Users/chengsu/ray/python/ray/data/datasource/file_meta_provider.py", line 342, in _handle_read_s3_files_error
raise PermissionError(
PermissionError: Failing to read AWS S3 file(s): "balajis-tiny-imagenet/trainaasdasd". Please check file exists and has proper AWS credential. See https://docs.ray.io/en/latest/data/creating-datasets.html#reading-from-remote-storage for more information.
```
Why are these changes needed?
Since locality_hints is an experimental feature, we stop promoting it in the docs and don't enable it in AIR. See #26641 for more context.
This PR replaces the dataset.split(..., equal=True) implementation with dataset.split_at_indices(). My experiments (the script) showed that dataset.split_at_indices() has more predictable performance than dataset.split(...).
Concretely, on 10 m5.4xlarge nodes with 5000 IOPS disks:
- calling ds.split(81) on a 200GB dataset with 400 blocks: the split takes 20-40 seconds; split_at_indices takes ~12 seconds.
- calling ds.split(163) on a 200GB dataset with 400 blocks: the split takes 40-100 seconds; split_at_indices takes ~24 seconds.
I don't have much insight into the dataset.split implementation, but with dataset.split_at_indices() we are just doing SPREAD to num_split_at_indices tasks, which yields much more stable performance.
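For illustration, the equal-split-via-indices idea on a toy dataset (a sketch; the real implementation also handles block boundaries and metadata):

```
import ray

ds = ray.data.range(1000)

# Compute near-equal cut points, then split once.
n = 8
total = ds.count()
indices = [total * i // n for i in range(1, n)]
splits = ds.split_at_indices(indices)  # n pieces of near-equal size
```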
Note: clean up the usage of experimental locality_hints in #26647
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Matthew Deng <matt@anyscale.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
My experiments (the script) showed that dataset.split_at_indices() with SPREAD tasks has more predictable performance.
Concretely, on 10 m5.4xlarge nodes with 5000 IOPS disks:
- calling ds.split_at_indices(81) on a 200GB dataset with 400 blocks: split_at_indices without this PR takes 7-19 seconds; split_at_indices with SPREAD takes 7-12 seconds.
Currently, it's not very easy to figure out why a DatasetPipeline may be underperforming. Add some warnings to help guide the user. As a next step, we can try to default to a good pipeline setting based on these constraints.
This is an experimental feature, so the following changes are added only to the WandbLoggerCallback. We are planning to collect feedback about usage and accordingly update or add these changes to the other W&B integration interfaces.
- Allow reading the W&B project name and group name from environment variables if they are not already passed to the callback.
- Add external hooks to fetch the W&B API key and to process any information about the W&B run.
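An illustrative usage sketch (the environment variable names and import path here are assumptions; check the callback's documentation for the exact ones):

```
import os

os.environ["WANDB_PROJECT_NAME"] = "my_project"  # hypothetical variable name
os.environ["WANDB_GROUP_NAME"] = "my_group"      # hypothetical variable name

from ray.tune.integration.wandb import WandbLoggerCallback

# Project/group are not passed explicitly; they are read from the
# environment when available.
callback = WandbLoggerCallback()
```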
Signed-off-by: Nikita Vemuri <nikitavemuri@gmail.com>