Commit graph

13494 commits

Author SHA1 Message Date
Yi Cheng
a435a04ead
[ci] Skip windows for test_actor_failure_async for now (#26696)
Signed-off-by: Yi Cheng <chengyidna@gmail.com>

Windows test looks good.
2022-07-18 20:57:16 -07:00
Cheng Su
daa346450d
[Datasets] Handle any error code when encountering AWS S3 credential error (#26669)
As a followup to #26619 (comment) and #26619 (comment), here we change from PermissionError to OSError, to be consistent with the original error, and also change the function name from _handle_read_s3_files_error to _handle_read_os_error, which is more general so that we can handle other file systems such as GCS in the future.

Also change to handle any error message matching the pattern AWS Error [code xxx]: No response body, as a new issue with error code 100 was raised in #26672 .
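For illustration only (not the actual Ray source), a minimal sketch of the matching described above, with the matching logic and message wording assumed:

```python
import re


def _handle_read_os_error(error: OSError, path: str) -> None:
    # Assumed logic: any AWS error code followed by "No response body" is
    # treated as a likely credential problem and re-raised with guidance.
    aws_error_pattern = r"AWS Error \[code \d+\]: No response body"
    if re.search(aws_error_pattern, str(error)):
        raise OSError(
            f"Failing to read AWS S3 file(s): {path!r}. "
            "Please check that the file exists and that proper AWS "
            "credentials are configured."
        ) from error
    raise error
```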
2022-07-18 17:57:44 -07:00
Sumanth Ratna
759966781f
[air] Allow users to use instances of ScalingConfig (#25712)
Co-authored-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
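A sketch of the usage this enables, assuming the Ray 2.0-era import paths (`ray.air.config.ScalingConfig`, `ray.train.torch.TorchTrainer`) and a placeholder training function:

```python
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker():
    pass  # placeholder training function


# Pass a ScalingConfig instance instead of a plain dict.
trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
)
result = trainer.fit()
```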
2022-07-18 15:46:58 -07:00
Kai Fricke
66ca7b1fcf
[air/tuner] Expose number of errored/terminated trials in ResultGrid (#26655)
This introduces an easy interface to retrieve the number of errored and terminated (non-errored) trials from the result grid.

Previously `tune.run(raise_on_failed_trial)` could be used to raise a TuneError if at least one trial failed. We've removed this option to make sure we always get a return value. `ResultGrid.num_errored` will make it easy for users to identify failed trials and react to them, instead of relying on the old try-except pattern.
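An assumed usage sketch: `num_errored` comes from the message above, while `num_terminated` is assumed as the non-errored counterpart; the trainable and search space are placeholders.

```python
from ray import tune


def trainable(config):
    tune.report(score=config["lr"])  # placeholder trainable


tuner = tune.Tuner(trainable, param_space={"lr": tune.grid_search([0.01, 0.1])})
results = tuner.fit()
if results.num_errored > 0:
    raise RuntimeError(f"{results.num_errored} trial(s) failed")
print(f"{results.num_terminated} trial(s) terminated without error")
```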

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-18 23:12:37 +01:00
Archit Kulkarni
c747dd1b70
[Serve] [CI] Skip serve:test_standalone2 on Windows (#26668) 2022-07-18 14:39:36 -07:00
Jiajun Yao
40a4777bc0
Mark chaos_dataset_shuffle_push_based_sort_1tb and chaos_dataset_shuffle_sort_1tb stable (#26677)
They passed for the past 7 runs.
2022-07-18 14:34:08 -07:00
Dmitri Gekhtman
c4160ec34b
[autoscaler][weekend nits] autoscaler.py type checking and other lint issues (#26646)
I run several linters, including mypy, in my local environment.
This is a PR of style nits for autoscaler.py meant to silence my linters.

This PR also adds a mypy check for autoscaler.py
2022-07-18 15:27:19 -05:00
Yi Cheng
df421ad499
[core] Remove GIL when submitting actors and tasks to avoid deadlock in some cases. (#26421)
When submitting a task, the GIL is not released, due to this PR.
This causes a potential deadlock when an actor dies and the worker is notified by GCS. In that case, the callback function submitted by GetAsync executes some Python code while the GIL is still held by the task submission, and the task submission itself is blocked on a lock that is never released.

In the previous PR this seemed to fix a memory issue, but that issue doesn't seem to be there any more.


Signed-off-by: Yi Cheng <chengyidna@gmail.com>
2022-07-18 12:47:54 -07:00
Kai Fricke
00947fd949
[air/benchmarks] Add 4x1 GPU benchmark for Torch (#26562) 2022-07-18 12:14:10 -07:00
Kai Fricke
0bc5198c55
[air/tuner] Add more config arguments to Tuner (#26656)
The Tuner API is missing some arguments that tune.run() currently supports. This PR adds a number of them and adds a test to make sure they are correctly passed.
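A sketch of passing run configuration through the Tuner API rather than tune.run(); the specific arguments shown (num_samples, name) are examples for illustration, not necessarily the ones added by this PR, and the import paths assume the Ray 2.0-era layout.

```python
from ray import tune
from ray.air.config import RunConfig
from ray.tune import TuneConfig


def trainable(config):
    tune.report(loss=config["lr"])  # placeholder trainable


tuner = tune.Tuner(
    trainable,
    param_space={"lr": tune.uniform(1e-4, 1e-1)},
    tune_config=TuneConfig(num_samples=4),
    run_config=RunConfig(name="tuner_config_example"),
)
results = tuner.fit()
```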

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-18 19:32:10 +01:00
Cheng Su
0bb819f339
[Datasets] Add clearer actionable error message for AWS S3 credential error (#26619)
In https://github.com/ray-project/ray/issues/19799 and https://github.com/ray-project/ray/issues/24184, we found that when using Datasets to read an S3 file, if the file's credentials are not set up correctly, the `read_xxx` API would throw a confusing error message containing `AWS Error [code 15]: No response body`, like below:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/chengsu/ray/python/ray/data/read_api.py", line 758, in read_binary_files
    return read_datasource(
  File "/Users/chengsu/ray/python/ray/data/read_api.py", line 267, in read_datasource
    requested_parallelism, min_safe_parallelism, read_tasks = ray.get(
  File "/Users/chengsu/ray/python/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/Users/chengsu/ray/python/ray/_private/worker.py", line 2196, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(PermissionError): ray::_get_read_tasks() (pid=80200, ip=127.0.0.1)
  File "pyarrow/_fs.pyx", line 439, in pyarrow._fs.FileSystem.get_file_info
  File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: When getting information for key 'trainaasdasd' in bucket 'balajis-tiny-imagenet': AWS Error [code 15]: No response body.
```

The error message mentions nothing related to file credentials, so it's quite confusing. This PR catches the error and gives a better error message:

```
ray::_get_read_tasks() (pid=80200, ip=127.0.0.1)
  File "/Users/chengsu/ray/python/ray/data/read_api.py", line 1127, in _get_read_tasks
    reader = ds.create_reader(**kwargs)
  File "/Users/chengsu/ray/python/ray/data/datasource/file_based_datasource.py", line 212, in create_reader
    return _FileBasedDatasourceReader(self, **kwargs)
  File "/Users/chengsu/ray/python/ray/data/datasource/file_based_datasource.py", line 350, in __init__
    self._paths, self._file_sizes = meta_provider.expand_paths(
  File "/Users/chengsu/ray/python/ray/data/datasource/file_meta_provider.py", line 173, in expand_paths
    _handle_read_s3_files_error(e, path)
  File "/Users/chengsu/ray/python/ray/data/datasource/file_meta_provider.py", line 342, in _handle_read_s3_files_error
    raise PermissionError(
PermissionError: Failing to read AWS S3 file(s): "balajis-tiny-imagenet/trainaasdasd". Please check file exists and has proper AWS credential. See https://docs.ray.io/en/latest/data/creating-datasets.html#reading-from-remote-storage for more information.
```
2022-07-18 11:26:01 -07:00
peterghaddad
725074d28b
[Client] Support configuring request metadata for Ray client gRPC (#25946)
Allows setting headers for Ray Client's gRPC connection using `ray.init(_metadata=[()])`.
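A small usage sketch based on the `_metadata` form quoted above; the address and header values are placeholders:

```python
import ray

# Attach custom gRPC headers when connecting through Ray Client.
ray.init(
    "ray://head-node-address:10001",
    _metadata=[("authorization", "Bearer <token>"), ("x-request-id", "1234")],
)
```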
2022-07-18 11:10:24 -07:00
matthewdeng
6670708010
[air] add placement group max CPU to data benchmark (#26649)
Set experimental `_max_cpu_fraction_per_node` to prevent deadlock.

This should technically be a no-op with the SPREAD strategy.
2022-07-18 10:34:40 -07:00
Jiajun Yao
1b2b526a2b
Fix windows buildkite (#26615)
- Stop using the dot command to run the ci.sh script: it doesn't fail the build if the command fails on Windows, and it is generally dangerous since it makes unexpected changes to the current shell.
- Fix the Windows build issues this uncovered.
2022-07-18 09:15:49 -07:00
Sihan Wang
1991029a19
[Serve] Remove EXPERIMENTAL inside the comments for user config (#26521) 2022-07-18 09:11:32 -07:00
Artur Niederfahrenhorst
0ce3bc5e48
[RLlib] Add/reorder Args of Prioritized/MixIn MultiAgentReplayBuffer. (#26428) 2022-07-18 18:04:03 +02:00
Qing Wang
a405e1b034
Add Tao as Java worker code owner. (#26596) 2022-07-18 14:03:25 +08:00
Chen Shen
b20f5f51df
[Air][Data] Don't promote locality_hints for split (#26647)
Why are these changes needed?
Since locality_hints is an experimental feature, we stop promoting it in the docs and don't enable it in AIR. See #26641 for more context.
2022-07-17 22:18:30 -07:00
Chen Shen
5ce06ce2c4
[Data][split] use split_at_indices for equal split without locality hints (#26641)
This PR replaces the dataset.split(.., equal=True) implementation with dataset.split_at_indices(). My experiments (the script) showed that dataset.split_at_indices() has more predictable performance than dataset.split(…).

Concretely, on 10 m5.4xlarge nodes with 5000 IOPS disks:

- Calling ds.split(81) on a 200GB dataset with 400 blocks: the split takes 20-40 seconds; split_at_indices takes ~12 seconds.
- Calling ds.split(163) on a 200GB dataset with 400 blocks: the split takes 40-100 seconds; split_at_indices takes ~24 seconds.

I don't have much insight into the dataset.split implementation, but with dataset.split_at_indices() we are just doing SPREAD to num_split_at_indices tasks, which yields much more stable performance.

Note: the usage of the experimental locality_hints is cleaned up in #26647.
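For illustration, a minimal sketch of the equivalence described above (not the actual implementation): an equal split into n shards can be expressed via split_at_indices.

```python
import ray

ds = ray.data.range(1000)
n = 8
total = ds.count()
# Cut points that divide the dataset into n (nearly) equal shards.
indices = [total * (i + 1) // n for i in range(n - 1)]
shards = ds.split_at_indices(indices)
assert len(shards) == n
```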
2022-07-17 22:17:47 -07:00
Jiao
98a07920d3
[AIR][CUJ] Make distributed training benchmark at silver tier (#26640) 2022-07-17 22:07:09 -07:00
Jules S. Damji
55368402ee
added a summary of why and when to use bulk vs streaming data ingest (#26637) 2022-07-17 18:46:58 -07:00
Eric Liang
12825fc5aa
[air] Add a warning if no CPUs are reserved for dataset execution (#26643) 2022-07-17 16:33:51 -07:00
Clark Zinzow
864af14f41
[Datasets] [Local Shuffle - 1/N] Add local shuffling option. (#26094)
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Matthew Deng <matt@anyscale.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
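An assumed usage sketch of the new local shuffling option; the keyword name local_shuffle_buffer_size is an assumption inferred from the title above, not stated in it.

```python
import ray

ds = ray.data.range(10_000)
for batch in ds.iter_batches(batch_size=256, local_shuffle_buffer_size=1024):
    pass  # each batch is shuffled within a sliding buffer rather than globally
```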
2022-07-17 16:21:14 -07:00
Rohan Potdar
38c9e1d52a
[RLlib]: Fix OPE trainables (#26279)
Co-authored-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
2022-07-17 14:25:53 -07:00
kourosh hakhamaneshi
569fe01096
[RLlib] improved unittests for dataset_reader and fixed bugs (#26458) 2022-07-17 13:38:15 -07:00
Eric Liang
400330e9c0
[air] Add _max_cpu_fraction_per_node to ScalingConfig and documentation (#26634) 2022-07-16 21:55:51 -07:00
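A sketch of the experimental flag named in the title above (import path assumed): cap the fraction of each node's CPUs that the trainer's placement group may reserve, leaving the remainder free for Dataset tasks.

```python
from ray.air.config import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=4,
    _max_cpu_fraction_per_node=0.8,  # leave >=20% of each node's CPUs unreserved
)
```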
Chen Shen
feb53d01ab
spread-split-tasks (#26638)
My experiments (the script) showed that dataset.split_at_indices() with SPREAD tasks has more predictable performance.

Concretely, on 10 m5.4xlarge nodes with 5000 IOPS disks: calling ds.split_at_indices(81) on a 200GB dataset with 400 blocks, split_at_indices without this PR takes 7-19 seconds, while split_at_indices with SPREAD takes 7-12 seconds.
2022-07-16 21:34:35 -07:00
Amog Kamsetty
3a345a470c
[AIR/Docs] Add Predictor Docs (#25833) 2022-07-16 21:14:21 -07:00
Jiao
77e2ef2eb6
[AIR] Update Torch benchmarks with documentation (#26631)
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-16 17:58:21 -07:00
Eric Liang
ef091c382e
[data] Add warnings when DatasetPipelines are under-parallelized or using too much memory (#26592)
Currently, it's not very easy to figure out why a DatasetPipeline may be underperforming. Add some warnings to help guide the user. As a next step, we can try to default to a good pipeline setting based on these constraints.
2022-07-16 17:38:52 -07:00
Eric Liang
0855bcb77e
[air] Use SPREAD strategy by default and don't special case it in benchmarks (#26633) 2022-07-16 17:37:06 -07:00
M Waleed Kadous
7c32993c15
[core/docs]Add a new section under Ray Core called Ray Gotchas (#26624)
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-16 16:53:01 -07:00
Antoni Baum
fb6f3cf708
[AIR/Docs] Small improvements to Train user guide (#26577)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-07-16 16:51:17 -07:00
Eric Liang
6217138eb0
[docs] Move AIR benchmarks to top level (#26632) 2022-07-16 15:34:31 -07:00
Nikita Vemuri
3a3e6bb60b
[tune] Add external hooks in WandbLoggerCallback (#26617)
This is an experimental feature, so the following changes are added only to the WandbLoggerCallback. We are planning to collect feedback about usage and accordingly update or add these changes to the other W&B integration interfaces.

- Allow reading the W&B project name and group name from environment variables if not already passed to the callback.
- Add external hooks to fetch the W&B API key and to process any information about the W&B run.
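A rough sketch only: the exact environment variable names and hook interfaces are not stated above, so WANDB_PROJECT_NAME and WANDB_GROUP_NAME are hypothetical names used for illustration, and the import path assumes the Tune integration of this era.

```python
import os
from ray.tune.integration.wandb import WandbLoggerCallback

# Hypothetical env vars standing in for "read project/group from the environment".
os.environ.setdefault("WANDB_PROJECT_NAME", "my-project")
os.environ.setdefault("WANDB_GROUP_NAME", "my-group")

# With the change described above, project/group can come from the environment
# instead of being passed explicitly to the callback.
callback = WandbLoggerCallback(api_key=os.environ.get("WANDB_API_KEY"))
```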


Signed-off-by: Nikita Vemuri <nikitavemuri@gmail.com>
2022-07-16 22:35:53 +01:00
truelegion47
5bd8d121b2
Add type validation for ray.autoscaler.sdk.request_resources() (#26626)
Adds type validation to ray.autoscaler.sdk.request_resources().
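A usage sketch of the existing API; with the validation described above, malformed arguments (for example a string where an int is expected) should now fail fast — the exact error raised is assumed.

```python
from ray.autoscaler.sdk import request_resources

# Ask the autoscaler to scale to at least 16 CPUs.
request_resources(num_cpus=16)

# Or request capacity in the shape of specific resource bundles.
request_resources(bundles=[{"CPU": 4, "GPU": 1}] * 2)
```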
2022-07-16 12:40:05 -07:00
Philipp Moritz
081bbfbff1
[Examples] Test OCR example in documentation tests (#26482)
Make sure the OCR example is tested in documentation after we discovered that example notebooks are not tested in CI.

Signed-off-by: Philipp Moritz <pcmoritz@gmail.com>
2022-07-16 10:51:28 -07:00
Eric Liang
605bc29f11
[air/predictors] Allow creating Predictor directly from a UDF (#26603) 2022-07-16 10:48:09 -07:00
Richard Liaw
799311b2f7
[air/docs] update examples to remove pandas again (#26598) 2022-07-16 08:40:44 -07:00
Jiao
196e52ad7c
[AIR][CUJ] E2E Pytorch training (#26621) 2022-07-16 08:23:19 -07:00
Jiao
988ffd494b
[AIR][CUJ] Add GPU bench prediction benchmark (#26614) 2022-07-16 08:22:37 -07:00
Clark Zinzow
fb54679a23
[Datasets] Refactor split_at_indices() to minimize number of split tasks and data movement. (#26363)
The current Dataset.split_at_indices() implementation suffers from O(n^2) memory usage in the small-split case (see issue) due to recursive splitting of the same blocks. This PR implements a split_at_indices() algorithm that minimizes the number of split tasks and data movement while ensuring that at most one block is used in each split task, for the sake of memory stability.

Co-authored-by: scv119 <scv119@gmail.com>
2022-07-16 04:48:44 -07:00
SangBin Cho
0f0102666a
[Core] Support max cpu allocation per node for placement group scheduling (#26397)
The PR adds a new experimental flag to the placement group API to avoid a placement group taking all the CPUs on each node. It is used internally by AIR to prevent the placement group (created by Tune) from using all the CPU resources, which are needed for Datasets.
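A sketch of the experimental flag described above; the keyword name _max_cpu_fraction_per_node is assumed from the related AIR commits in this log, and the bundle shapes are placeholders.

```python
import ray
from ray.util.placement_group import placement_group

ray.init()

pg = placement_group(
    bundles=[{"CPU": 2}] * 4,
    strategy="SPREAD",
    _max_cpu_fraction_per_node=0.8,  # leave at least 20% of each node's CPUs free
)
ray.get(pg.ready())
```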
2022-07-16 01:47:30 -07:00
Balaji Veeramani
34cf1f17ea
[Datasets] Add ImageFolderDatasource (#24641)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-15 22:43:23 -07:00
matthewdeng
e3a096f412
[air] add bulk ingest benchmarks (#26618) 2022-07-15 22:01:23 -07:00
matthewdeng
9256668b90
Revert "[Datasets] Explicitly define Dataset-like APIs in DatasetPipeline class (#26394)" (#26625) 2022-07-15 21:10:59 -07:00
Eric Liang
cf980c3020
[data] Refactor all to all op implementations into a separate file (#26585) 2022-07-15 18:17:48 -07:00
Cheng Su
fea94dc976
[Datasets] Explicitly define Dataset-like APIs in DatasetPipeline class (#26394)
This PR resolves #20888, where users had concerns about the dataset-like methods used in DatasetPipeline (such as map_batches, random_shuffle_each_window, etc.). Currently we define those dataset-like methods implicitly through Python setattr/getattr, delegating the real work from the dataset pipeline to the dataset. This does not work very well for external developers/users who want to navigate to the definition of a method or determine a method's return type.

So this PR explicitly defines every dataset-like API in the DatasetPipeline class. This gives us an upper-bound view of how much code we need to duplicate. If we go with this direction, it means that whenever we update or add a method in Dataset, we need to update or add the same in DatasetPipeline.
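A simplified, generic-Python illustration of the two patterns discussed above (not the actual Ray code; `__getattr__` stands in for the setattr/getattr delegation, and the class and method signatures are placeholders):

```python
class _ImplicitPipeline:
    """Before: dataset-like calls forwarded dynamically, so IDEs cannot
    resolve method definitions or return types."""

    def __init__(self, dataset):
        self._dataset = dataset

    def __getattr__(self, name):
        # Forward unknown attributes (e.g. map_batches) to the wrapped dataset.
        return getattr(self._dataset, name)


class _ExplicitPipeline:
    """After: each dataset-like method is written out with a real signature."""

    def __init__(self, dataset):
        self._dataset = dataset

    def map_batches(self, fn, *, batch_size=None) -> "_ExplicitPipeline":
        return _ExplicitPipeline(self._dataset.map_batches(fn, batch_size=batch_size))
```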
2022-07-15 16:12:27 -07:00
Sihan Wang
09a6e5336a
[Serve][Part2] Migrate the tests to use deployment graph api (#26507) 2022-07-15 15:48:43 -07:00
Simon Mo
63d3ccf81e
[Serve] Default to EveryNode when starting Serve from REST API (#26588) 2022-07-15 15:47:54 -07:00