Commit graph

13564 commits

SangBin Cho
37f4692aa8
[State Observability] Fix "No result for get crashing the formatting" and "Filtering not handled properly when key missing in the datum" #26881
Fix two issues:
* No result for get crashing the formatting
* Filtering not handled properly when key missing in the datum
2022-07-23 21:33:07 -07:00
Eric Liang
d692a55018
[data] Make lazy mode non-experimental (#26934) 2022-07-23 21:28:31 -07:00
Ishant Mrinal
b32c784c7f
[RLLib] RE3 exploration algorithm TF2 framework support (#25221) 2022-07-23 18:05:01 -07:00
matthewdeng
bcec60d898
Revert "[data] set iter_batches default batch_size #26869 " (#26938)
This reverts commit b048c6f659.
2022-07-23 17:46:45 -07:00
Jian Xiao
da9581b746
GC the blocks that have been split during .split() if they are owned by the consumer (#26902)
Eagerly GC blocks no longer needed to improve memory efficiency and reduce object spilling.
2022-07-23 16:41:22 -07:00
matthewdeng
b048c6f659
[data] set iter_batches default batch_size #26869
Why are these changes needed?
Consumers (e.g. Train) may expect generated batches to be of the same size. Prior to this change, the default behavior was for each batch to be one block, and blocks may be of different sizes.

Changes
* Set the default batch_size to 256. This was chosen as a sensible default for training workloads and is intentionally different from the existing default batch_size value for Dataset.map_batches.
* Update the docs for Dataset.iter_batches, Dataset.map_batches, and DatasetPipeline.iter_batches to be consistent.
* Update tests and examples to explicitly pass batch_size=None, as these tests intentionally exercise block iteration; other tests cover explicit batch sizes.
2022-07-23 13:44:53 -07:00
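A minimal usage sketch of the new default described in the commit above, assuming a Ray installation with Datasets; the dataset contents and block count are illustrative only.
```
import ray

# A small illustrative dataset spread across several blocks.
ds = ray.data.range(1000).repartition(10)

# With the new default, iter_batches() yields batches of 256 rows
# (the final batch may be smaller) instead of one batch per block.
for batch in ds.iter_batches():
    pass

# batch_size=None keeps the old block-at-a-time behavior, which is what
# the updated tests now request explicitly.
for block_batch in ds.iter_batches(batch_size=None):
    pass
```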
Jiao
3dc4189d88
Revert "[air] remove unnecessary logs + improve repr for result (#26906)" (#26932) 2022-07-23 12:12:26 -07:00
Stephanie Wang
55a0f7bb2d
[core] ray.init defaults to an existing Ray instance if there is one (#26678)
ray.init() currently starts a new Ray instance even if one already exists, which is very confusing if you are a new user trying to go from local development to a cluster. This PR changes it so that, when no address is specified, we first try to find an existing Ray cluster that was created through `ray start`. If none is found, we will start a new one.

This makes two changes to the ray.init() resolution order:
1. When `ray start` is called, the started cluster's address is written to a file called `/tmp/ray/ray_current_cluster`. For ray.init() and ray.init(address="auto"), we first check this local file for an existing cluster address. The file is deleted on `ray stop`. If the file is empty, we autodetect any running cluster (the legacy behavior) when address="auto", or start a new local Ray instance when address=None.
2. When ray.init(address="local") is called, we create a new local Ray instance even if one already exists. This behavior seems to be necessary mainly for `ray.client` use cases.

This also surfaces the logs about which Ray instance we are connecting to. Previously these were hidden because we didn't set up logging until after connecting to Ray. Now Ray will log one of the following messages during ray.init:
```
(Connecting to existing Ray cluster at address: <IP>...)
...connection...
(Started a local Ray cluster.| Connected to Ray Cluster.)( View the dashboard at <URL>)
```

Note that this changes the dashboard URL to be printed with `ray.init()` instead of when the dashboard is first started.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-07-23 11:27:22 -07:00
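A minimal sketch of the three resolution paths described in the commit above, assuming a Ray installation; the address used is whatever `ray start` wrote locally.
```
import ray

# No address: first look for a cluster started via `ray start` (its address
# is read from /tmp/ray/ray_current_cluster); if none is found, start a new
# local Ray instance.
ray.init()
ray.shutdown()

# address="auto": same file check first, then fall back to autodetecting a
# running cluster (the legacy behavior); errors out if nothing is found.
ray.init(address="auto")
ray.shutdown()

# address="local": always start a fresh local instance, even if a cluster
# is already running.
ray.init(address="local")
ray.shutdown()
```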
Siyuan (Ryans) Zhuang
1fa8ddb07a
[Workflow] Make sure no side effects during workflow resuming (#26918)
Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-23 11:04:07 -07:00
Avnish Narayan
a50a81a13a
Revert "[RLlib] Fix apex breakout release test performance. (#26867)" (#26927) 2022-07-23 17:27:50 +02:00
Rohan Potdar
a53bbe49bf
[RLlib]: Raise deprecation warning in MARWIL OPE methods. (#26893) 2022-07-23 13:55:40 +02:00
Rohan Potdar
97bcf38ec0
[RLlib] Fix torch None conversion in torch_utils.py::convert_to_torch_tensor. (#26863) 2022-07-23 13:54:57 +02:00
Rohan Potdar
69f6b843da
[RLlib] Test output length in DatasetReader with default IOContext. (#26852) 2022-07-23 13:53:59 +02:00
Avnish Narayan
2cfd6c2e97
[RLlib] Fix apex breakout release test performance. (#26867) 2022-07-23 13:53:03 +02:00
Richard Liaw
96e8027c7e
[air] large tune/torch benchmark (#26763)
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-07-23 01:17:25 -07:00
Richard Liaw
d79431e32c
[air] remove unnecessary logs + improve repr for result (#26906) 2022-07-23 01:15:13 -07:00
Siyuan (Ryans) Zhuang
be5476cd55
[Workflow] Update API stability (#26903)
* alpha API

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* mark exceptions as alpha APIs

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* update

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-23 00:15:34 -07:00
Eric Liang
c118373afe
[air] Simplify the local shuffle API (#26915)
Simplify the local shuffle API by removing a constraint in the args that we can calculate internally.
2022-07-22 23:31:58 -07:00
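A sketch of local shuffling during iteration, assuming the simplified API matches the current Ray Datasets signature; the commit does not name the removed argument, so the buffer-size and seed parameter names here are an assumption.
```
import ray

ds = ray.data.range(10_000)

# Rows are shuffled within a bounded buffer as batches are produced, which
# trades shuffle quality for speed compared to a full ds.random_shuffle().
for batch in ds.iter_batches(
    batch_size=256,
    local_shuffle_buffer_size=1_000,  # size of the in-memory shuffle buffer
    local_shuffle_seed=42,            # optional, for reproducibility
):
    pass
```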
Eric Liang
63a6c1dfac
[docs] Cleanup the Datasets key concept docs (#26908)
Clean up the Datasets key concept doc to be suitable for consumption by a beginner-level user and improve the diagrams.
2022-07-22 23:30:54 -07:00
Chen Shen
042450d319
Revert "[Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684)" (#26921)
This reverts commit 0c139914bb.
2022-07-22 22:26:40 -07:00
clarng
170bde40a0
Mark local mode as deprecated with warning message about possible memory leak (#26855)
Mark local mode as deprecated with a warning message.

Related issues: #24216, #26095
2022-07-22 22:12:23 -07:00
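For reference, a sketch of the code path that now emits the warning; `local_mode` is the existing `ray.init()` flag.
```
import ray

# Local mode runs tasks and actors serially in the driver process. After
# this change it emits a deprecation warning (and, per the linked issues,
# can leak memory).
ray.init(local_mode=True)

# Preferred alternative: a regular single-node Ray instance.
# ray.init()
```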
Kai Fricke
1f32cb95db
[air/tune] Add top-level imports for Tuner, TuneConfig, move CheckpointConfig (#26882) 2022-07-22 20:17:06 -07:00
Eric Liang
36c46e9686
[docs] Improve AIR table of contents titles (#26858) 2022-07-22 17:17:49 -07:00
Rohan Potdar
2f22262d39
[RLlib]: Fix SampleBatch.split_by_episode to use dones if episode id is not available (#26492) 2022-07-22 16:46:05 -07:00
Jiajun Yao
46a19c1e47
[RFC] [Usage Stats] Expose api to record extra usage tags (#26834)
Library authors can record extra usage tags via usage_lib.record_extra_usage_tag(key, value)

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-07-22 16:07:10 -07:00
Jiao
840b0478aa
[AIR CUJ] Add wait_for_nodes for 4x4 gpu test 2022-07-22 16:04:54 -07:00
Kai Fricke
77ba30d34e
[tune] Docs for custom command based syncer (awscli / gsutil) (#26879)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-07-22 15:28:53 -07:00
mwtian
aadd82dcbd
[Core] stop retrying after successfully created GCS client (#26788) 2022-07-22 12:43:46 -07:00
Steven Morad
259429bdc3
Bump gym dep to 0.24 (#26190)
Co-authored-by: Steven Morad <smorad@anyscale.com>
Co-authored-by: Avnish <avnishnarayan@gmail.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
2022-07-22 12:37:16 -07:00
Jiao
a03716e75f
[AIR][Serve] Add windows check for pd.DataFrame comparison #26889
In the previous implementation (#26821) we had a Windows failure suggesting we behave differently on Windows regarding datatype conversion.

In our existing test at https://sourcegraph.com/github.com/ray-project/ray/-/blob/python/ray/data/tests/test_dataset.py?L577 regarding the use of TensorArray, we seem to rely on pandas' assert_frame_equal rather than manually comparing frames.

This PR adds a quick Windows-only conditional to ignore dtype for now.
2022-07-22 12:36:40 -07:00
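A sketch of the kind of Windows-only conditional described above, assuming pandas is available; the frames and column name are placeholders, not the actual test data.
```
import sys

import pandas as pd
from pandas.testing import assert_frame_equal

expected = pd.DataFrame({"value": [1, 2, 3]})
actual = pd.DataFrame({"value": [1, 2, 3]})

if sys.platform == "win32":
    # Default integer dtypes can differ on Windows (e.g. int32 vs int64),
    # so relax the dtype check there for now.
    assert_frame_equal(actual, expected, check_dtype=False)
else:
    assert_frame_equal(actual, expected)
```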
Sihan Wang
bca4b179ea
[Serve] Separate internal API and Public API (#26804) 2022-07-22 12:12:35 -07:00
Siyuan (Ryans) Zhuang
4b50ef6a28
[Workflow] Rename the argument of "workflow.get_output" (#26876)
* rename get_output

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* update doc

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-22 12:06:19 -07:00
Avnish Narayan
82395c4646
[RLlib] Put learning test into own folders (#26862)
Co-authored-by: Artur Niederfahrenhorst <artur@anyscale.com>
2022-07-22 11:20:47 -07:00
SangBin Cho
95c6ff153b
[State Observability] Remove an unnecessary field from list workers (#26815)
Worker info is useless
2022-07-22 10:56:42 -07:00
Avnish Narayan
2a0ef663c9
[rllib] Use compress observations where replay buffers and image obs are used in tuned examples (#26735) 2022-07-22 10:10:51 -07:00
Clark Zinzow
0c139914bb
[Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684)
This PR tries to automatically cast tensor columns to our TensorArray extension type when building Pandas blocks, logging a warning and falling back to the opaque object-typed column if the cast fails. This should allow users to remain mostly tensor extension agnostic.

TensorArray now eagerly validates the underlying tensor data, raising an error if e.g. the underlying ndarrays have heterogeneous shapes; previously, TensorArray wouldn't validate this on construction and would instead let failures happen downstream. This means that our internal TensorArray use needs to follow a try-except pattern, falling back to a plain NumPy object column.
2022-07-22 10:10:26 -07:00
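A sketch of the try-except fallback pattern described above, assuming `TensorArray` is importable from `ray.data.extensions`; the helper name and sample arrays are illustrative.
```
import numpy as np
import pandas as pd
from ray.data.extensions import TensorArray

def to_tensor_column(ndarrays):
    """Try to build a TensorArray column; fall back to an object column."""
    try:
        # Stacking (and TensorArray's eager validation) fails if the
        # ndarrays have heterogeneous shapes.
        return TensorArray(np.stack(ndarrays))
    except Exception:
        # Fall back to an opaque object-typed column of raw ndarrays.
        return pd.Series(list(ndarrays), dtype=object)

# Uniform shapes cast cleanly to the tensor extension type...
df = pd.DataFrame({"image": to_tensor_column([np.zeros((2, 2)), np.ones((2, 2))])})

# ...while ragged shapes fall back to dtype=object.
ragged = to_tensor_column([np.zeros((2, 2)), np.ones((3, 3))])
```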
Clark Zinzow
a29baf93c8
[Datasets] Add .iter_torch_batches() and .iter_tf_batches() APIs. (#26689)
This PR adds .iter_torch_batches() and .iter_tf_batches() convenience APIs, which take care of ML framework tensor conversion, use the narrow tensor waist ("numpy" format) for the underlying .iter_batches() call, and unify batch formats around two options: a single tensor for simple/pure-tensor/single-column datasets, and a dictionary of tensors for multi-column datasets.
2022-07-22 10:09:36 -07:00
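A sketch of the two batch formats described above, assuming Ray Datasets and PyTorch are installed; the datasets here are placeholders and exact behavior may vary by Ray version.
```
import ray

# Single-column (pure-tensor) dataset: each batch is a single torch.Tensor.
simple_ds = ray.data.range(1000)
for batch in simple_ds.iter_torch_batches(batch_size=256):
    print(batch.shape)

# Multi-column dataset: each batch is a dict of column name -> torch.Tensor.
tabular_ds = ray.data.from_items([{"x": i, "y": 2 * i} for i in range(1000)])
for batch in tabular_ds.iter_torch_batches(batch_size=256):
    print(batch["x"].shape, batch["y"].shape)
```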
Edward Oakes
1fd2913abd
[serve] Refactor checkpointing to write ahead target state (#26797) 2022-07-22 09:59:09 -07:00
zcin
b856daebbd
[Serve] Fix Formatting of Error Messages printed in serve status (#26578) 2022-07-22 09:52:13 -07:00
Kai Fricke
6074300211
[air/tune] Add resume experiment options to Tuner.restore() (#26826) 2022-07-22 08:58:08 -07:00
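A minimal restore sketch, assuming the Ray 2.x Tuner API; the experiment path is a placeholder, and the new resume options mentioned in the title are not shown because the commit does not name them.
```
from ray import tune
from ray.tune import Tuner

def train_fn(config):
    tune.report(score=config["x"] ** 2)

# First run: results are written under an experiment directory.
Tuner(train_fn, param_space={"x": tune.grid_search([1, 2, 3])}).fit()

# After an interruption, restore the experiment from its directory
# (placeholder path) and continue where it left off.
restored = Tuner.restore("~/ray_results/train_fn_2022-07-22_00-00-00")
restored.fit()
```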
Kai Fricke
0d3a533ff9
[tune] Introduce tune.with_resources() to specify function trainable resources (#26830)
We don't have a way to specify resource requirements with the Tuner() API. This PR introduces tune.with_resources() to attach a resource request to class and function trainables. For class trainables, it will override any existing default resource requests.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-22 13:25:55 +01:00
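A minimal sketch of attaching a resource request to a function trainable with the new helper, assuming the Tuner API; the resource amounts and search space are illustrative.
```
from ray import tune
from ray.tune import Tuner, TuneConfig

def train_fn(config):
    tune.report(loss=config["lr"])

# Attach a per-trial resource request; with the Tuner() API there is no
# separate resources_per_trial argument.
trainable = tune.with_resources(train_fn, {"cpu": 2, "gpu": 1})

tuner = Tuner(
    trainable,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=TuneConfig(num_samples=4),
)
tuner.fit()
```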
Kai Fricke
c72c9cf9a4
[tune] Clean up ray.tune scope (remove stale objects in __all__) (#26829)
There are a few stale objects in ray.tune's `__all__` list, making `from ray.tune import *` fail.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-22 09:02:17 +01:00
Fabian Witter
dc2ad6c8b4
[RLlib] Fix ModelCatalog for nested complex inputs (#25620) 2022-07-22 00:45:25 -07:00
Archit Kulkarni
c557c7877f
[runtime env] Change exception to warning for unexpected field (#26824)
Signed-off-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
2022-07-21 22:58:03 -07:00
Jian Xiao
8553df49bb
Make execution plan/blocklist aware of the memory ownership and who runs the plan (#26650)
Having an indicator of who is running the stage and who created a blocklist will enable eager memory releasing.

This is an alternative with better abstraction to https://github.com/ray-project/ray/pull/26196.

Note: this doesn't work for Dataset.split() yet, will do in a followup PR.
2022-07-21 21:40:37 -07:00
Avnish Narayan
67c0a69643
[Rllib] Fix broken cluster env launcher gym pinning (#26865) 2022-07-21 20:45:16 -07:00
Jun Gong
6c1acd1a2f
[RLlib] Quick state buffer connector fix (#26836) 2022-07-21 20:43:59 -07:00
Jun Gong
0bc560bd54
[RLlib] Make sure we step() after adding init_obs. (#26827) 2022-07-21 20:43:46 -07:00
Eric Liang
9272bcbbca
[docs] Add ecosystem map to AIR guide (#26859) 2022-07-21 19:06:47 -07:00
matthewdeng
14e2b2548c
[air] update remaining dict scaling_configs (#26856) 2022-07-21 18:55:21 -07:00