Commit graph

7304 commits

Author SHA1 Message Date
matthewdeng
bcec60d898
Revert "[data] set iter_batches default batch_size #26869 " (#26938)
This reverts commit b048c6f659.
2022-07-23 17:46:45 -07:00
Jian Xiao
da9581b746
GC the blocks that have been split during .split() if they are owned by the consumer (#26902)
Eagerly GC blocks no longer needed to improve memory efficiency and reduce object spilling.
2022-07-23 16:41:22 -07:00
matthewdeng
b048c6f659
[data] set iter_batches default batch_size #26869
Why are these changes needed?
Consumers (e.g. Train) may expect generated batches to be of the same size. Prior to this change, the default behavior would be for each batch to be one block, which may be of different sizes.

Changes
Set default batch_size to 256. This was chosen to be a sensible default for training workloads, which is intentionally different from the existing default batch_size value for Dataset.map_batches.
Update docs for Dataset.iter_batches, Dataset.map_batches, and DatasetPipeline.iter_batches to be consistent.
Updated tests and examples to explicitly pass in batch_size=None as these tests were intentionally testing block iteration, and there are other tests that test explicit batch sizes.
2022-07-23 13:44:53 -07:00
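The rebatching behavior this commit changes — fixed-size batches assembled across block boundaries instead of one batch per block — can be sketched in plain Python. This is an illustrative stand-in, not Ray's implementation; blocks are modeled as plain lists.

```python
from typing import Iterable, Iterator, List, Optional


def iter_batches(
    blocks: Iterable[List[int]], batch_size: Optional[int] = 256
) -> Iterator[List[int]]:
    """Yield fixed-size batches across block boundaries.

    With batch_size=None, each block is yielded as-is (the old default),
    so batch sizes vary with block sizes.
    """
    if batch_size is None:
        yield from (list(block) for block in blocks)
        return
    buffer: List[int] = []
    for block in blocks:
        buffer.extend(block)
        while len(buffer) >= batch_size:
            yield buffer[:batch_size]
            buffer = buffer[batch_size:]
    if buffer:
        # Final partial batch, if the total row count isn't a multiple
        # of batch_size.
        yield buffer
```

For example, blocks of sizes 3, 5, and 2 with batch_size=4 yield batches of sizes 4, 4, and 2, while batch_size=None yields the original block sizes 3, 5, 2.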
Jiao
3dc4189d88
Revert "[air] remove unnecessary logs + improve repr for result (#26906)" (#26932) 2022-07-23 12:12:26 -07:00
Stephanie Wang
55a0f7bb2d
[core] ray.init defaults to an existing Ray instance if there is one (#26678)
ray.init() will currently start a new Ray instance even if one already exists, which is very confusing if you are a new user trying to go from local development to a cluster. This PR changes it so that, when no address is specified, we first try to find an existing Ray cluster that was created through `ray start`. If none is found, we will start a new one.

This makes two changes to the ray.init() resolution order:
1. When `ray start` is called, the started cluster address is written to a file called `/tmp/ray/ray_current_cluster`. For ray.init() and ray.init(address="auto"), we first check this local file for an existing cluster address; the file is deleted on `ray stop`. If the file is empty, we autodetect any running cluster (legacy behavior) when address="auto", or start a new local Ray instance when address=None.
2. When ray.init(address="local") is called, we create a new local Ray instance, even if one already exists. This behavior seems to be necessary mainly for `ray.client` use cases.

This also surfaces the logs about which Ray instance we are connecting to. Previously these were hidden because we didn't set up the log until after connecting to Ray. So now Ray will log one of the following messages during ray.init:
```
(Connecting to existing Ray cluster at address: <IP>...)
...connection...
(Started a local Ray cluster.| Connected to Ray Cluster.)( View the dashboard at <URL>)
```

Note that this changes the dashboard URL to be printed with `ray.init()` instead of when the dashboard is first started.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-07-23 11:27:22 -07:00
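The two-step resolution order above can be sketched as a small decision function. This is a hypothetical simplification for illustration only — the real logic lives inside ray.init() and handles many more cases.

```python
import os
import tempfile


def resolve_address(address, cluster_file="/tmp/ray/ray_current_cluster"):
    """Return an (action, addr) pair per the resolution order above."""
    if address == "local":
        # Always create a fresh local instance, even if one exists.
        return ("start_local", None)
    recorded = None
    if os.path.exists(cluster_file):
        with open(cluster_file) as f:
            recorded = f.read().strip() or None
    if address in (None, "auto"):
        if recorded:
            # An existing cluster was started via `ray start`: connect to it.
            return ("connect", recorded)
        if address == "auto":
            # Legacy behavior: autodetect any running cluster.
            return ("autodetect", None)
        return ("start_local", None)
    # An explicit address always wins.
    return ("connect", address)
```

Under this sketch, `ray.init()` with an empty cluster file starts a local instance, while the same call after `ray start` connects to the recorded address.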
Siyuan (Ryans) Zhuang
1fa8ddb07a
[Workflow] Make sure no side effects during workflow resuming (#26918)
Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-23 11:04:07 -07:00
Richard Liaw
d79431e32c
[air] remove unnecessary logs + improve repr for result (#26906) 2022-07-23 01:15:13 -07:00
Siyuan (Ryans) Zhuang
be5476cd55
[Workflow] Update API stability (#26903)
* alpha API

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* mark exceptions as alpha APIs

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* update

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-23 00:15:34 -07:00
Eric Liang
c118373afe
[air] Simplify the local shuffle API (#26915)
Simplify the local shuffle API by removing a constraint in the args that we can calculate internally.
2022-07-22 23:31:58 -07:00
Chen Shen
042450d319
Revert "[Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684)" (#26921)
This reverts commit 0c139914bb.
2022-07-22 22:26:40 -07:00
clarng
170bde40a0
Mark local mode as deprecated with warning message about possible memory leak (#26855)
Mark local mode as deprecated with a warning message.

Related issue number
#24216
#26095
2022-07-22 22:12:23 -07:00
Kai Fricke
1f32cb95db
[air/tune] Add top-level imports for Tuner, TuneConfig, move CheckpointConfig (#26882) 2022-07-22 20:17:06 -07:00
Jiajun Yao
46a19c1e47
[RFC] [Usage Stats] Expose api to record extra usage tags (#26834)
Library authors can record extra usage tags via usage_lib.record_extra_usage_tag(key, value)

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-07-22 16:07:10 -07:00
Kai Fricke
77ba30d34e
[tune] Docs for custom command based syncer (awscli / gsutil) (#26879)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-07-22 15:28:53 -07:00
mwtian
aadd82dcbd
[Core] stop retrying after successfully created GCS client (#26788) 2022-07-22 12:43:46 -07:00
Steven Morad
259429bdc3
Bump gym dep to 0.24 (#26190)
Co-authored-by: Steven Morad <smorad@anyscale.com>
Co-authored-by: Avnish <avnishnarayan@gmail.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
2022-07-22 12:37:16 -07:00
Jiao
a03716e75f
[AIR][Serve] Add Windows check for pd.DataFrame comparison (#26889)
In the previous implementation (#26821) we had a Windows failure suggesting we behave differently on Windows regarding datatype conversion.

In our use of TensorArray in https://sourcegraph.com/github.com/ray-project/ray/-/blob/python/ray/data/tests/test_dataset.py?L577 we seem to rely on pandas' assert_frame_equal rather than manually comparing frames.

This PR adds a quick conditional, on Windows only, to ignore dtype for now.
2022-07-22 12:36:40 -07:00
Sihan Wang
bca4b179ea
[Serve] Separate internal API and Public API (#26804) 2022-07-22 12:12:35 -07:00
Siyuan (Ryans) Zhuang
4b50ef6a28
[Workflow] Rename the argument of "workflow.get_output" (#26876)
* rename get_output

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>

* update doc

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-22 12:06:19 -07:00
SangBin Cho
95c6ff153b
[State Observability] Remove an unnecessary field from list workers (#26815)
Worker info is useless
2022-07-22 10:56:42 -07:00
Clark Zinzow
0c139914bb
[Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684)
This PR tries to automatically cast tensor columns to our TensorArray extension type when building Pandas blocks, logging a warning and falling back to the opaque object-typed column if the cast fails. This should allow users to remain mostly tensor extension agnostic.

TensorArray now eagerly validates the underlying tensor data, raising an error if e.g. the underlying ndarrays have heterogeneous shapes; previously, TensorArray wouldn't validate this on construction and would instead let failures happen downstream. This means that our internal TensorArray use needs to follow a try-except pattern, falling back to a plain NumPy object column.
2022-07-22 10:10:26 -07:00
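The try-cast-then-fall-back pattern described above can be sketched with plain Python lists standing in for NumPy ndarrays and TensorArray. This is an illustration of the pattern only, not Ray's casting code.

```python
def shape_of(tensor):
    """Return the nested-list shape, e.g. [[1, 2], [3, 4]] -> (2, 2)."""
    shape = []
    t = tensor
    while isinstance(t, list):
        shape.append(len(t))
        t = t[0] if t else None
    return tuple(shape)


def cast_tensor_column(column):
    """Cast a column of tensors if their shapes are homogeneous.

    Returns ("tensor", column) when every element shares one shape, and
    falls back to an opaque ("object", column) otherwise, mirroring the
    try-except fallback to a plain object-typed column.
    """
    shapes = {shape_of(t) for t in column}
    if len(shapes) == 1:
        return ("tensor", column)
    return ("object", column)
```

Homogeneous 2x2 tensors cast cleanly; mixing a 1x2 tensor in triggers the object-column fallback instead of a downstream failure.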
Clark Zinzow
a29baf93c8
[Datasets] Add .iter_torch_batches() and .iter_tf_batches() APIs. (#26689)
This PR adds .iter_torch_batches() and .iter_tf_batches() convenience APIs, which take care of ML framework tensor conversion and the narrow tensor waste of the .iter_batches() call ("numpy" format), and unify batch formats around two options: a single tensor for simple/pure-tensor/single-column datasets, and a dictionary of tensors for multi-column datasets.
2022-07-22 10:09:36 -07:00
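The batch-format rule above — one tensor for single-column datasets, a dict of tensors otherwise — reduces to a small dispatch. This sketch uses lists as stand-ins for framework tensors; the column name is illustrative.

```python
def format_batch(columns):
    """Unify batch formats per the rule described above.

    columns: mapping of column name -> values for one batch.
    A single-column batch is returned as the lone "tensor" directly;
    a multi-column batch is returned as a dict of "tensors".
    """
    if len(columns) == 1:
        return next(iter(columns.values()))
    return dict(columns)
```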
Edward Oakes
1fd2913abd
[serve] Refactor checkpointing to write ahead target state (#26797) 2022-07-22 09:59:09 -07:00
zcin
b856daebbd
[Serve] Fix Formatting of Error Messages printed in serve status (#26578) 2022-07-22 09:52:13 -07:00
Kai Fricke
6074300211
[air/tune] Add resume experiment options to Tuner.restore() (#26826) 2022-07-22 08:58:08 -07:00
Kai Fricke
0d3a533ff9
[tune] Introduce tune.with_resources() to specify function trainable resources (#26830)
We don't have a way to specify resource requirements with the Tuner() API. This PR introduces tune.with_resources() to attach a resource request to class and function trainables. In class trainables, it will override potential existing default resource requests.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-22 13:25:55 +01:00
Kai Fricke
c72c9cf9a4
[tune] Clean up ray.tune scope (remove stale objects in __all__) (#26829)
There are a few stale objects in ray.tune's `__all__` list, making `from ray.tune import *` fail.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-22 09:02:17 +01:00
Archit Kulkarni
c557c7877f
[runtime env] Change exception to warning for unexpected field (#26824)
Signed-off-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
2022-07-21 22:58:03 -07:00
Jian Xiao
8553df49bb
Make execution plan/blocklist aware of the memory ownership and who runs the plan (#26650)
Having an indicator of who's running the stage and who created a blocklist will enable eager memory releasing.

This is an alternative with better abstraction to https://github.com/ray-project/ray/pull/26196.

Note: this doesn't work for Dataset.split() yet, will do in a followup PR.
2022-07-21 21:40:37 -07:00
Jiao
db027d86af
[P0][AIR] Fix train to serve notebooks (#26821)
Co-authored-by: Simon Mo <simon.mo@hey.com>
2022-07-21 18:04:13 -07:00
Chen Shen
de0d1fa4dc
[Data][Split optimization] don't generate empty blocks (#26768)
The current split_at_index might generate empty blocks and also trigger unnecessary split tasks. The empty blocks happen when there are duplicate split indices, or when a split index falls at a block boundary. The unnecessary split tasks are triggered when a split index falls at a block boundary.

This PR fixes that by checking whether a split index is duplicated or falls at a block boundary; in that case, we can safely ignore the index.
2022-07-21 16:54:32 -07:00
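The filtering rule above — skip duplicate indices and indices that land exactly on a block boundary — can be sketched over cumulative block sizes. This is an illustrative simplification, not the actual split_at_index code.

```python
def indices_needing_split(split_indices, block_sizes):
    """Return the split indices that actually require a split task.

    Duplicate indices would produce an empty block; indices at block
    boundaries (including 0 and the total row count) already coincide
    with an existing block edge, so no split task is needed for them.
    """
    boundaries = set()
    total = 0
    for size in block_sizes:
        total += size
        boundaries.add(total)  # cumulative end index of each block
    needed = []
    seen = set()
    for idx in split_indices:
        if idx in seen:
            continue  # duplicate: the second split would be empty
        seen.add(idx)
        if idx == 0 or idx in boundaries:
            continue  # falls on a block edge: safe to ignore
        needed.append(idx)
    return needed
```

With blocks of sizes [3, 3, 4] (boundaries 3, 6, 10), requested splits [0, 3, 4, 4, 10, 7] collapse to just [4, 7].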
Sihan Wang
27f1532a15
[Serve] Promote graceful shutdown and health check (#26682) 2022-07-21 17:37:10 -05:00
Daniel Wen
3f099b515f
[ray.util.metrics] Raise error when histogram boundaries <= 0 (#26728)
Raises an error when a histogram has boundaries that include values <= 0.

Closes #26698
2022-07-21 15:33:07 -07:00
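The validation this commit adds amounts to a simple check at construction time. A minimal sketch, assuming a standalone validator; the real check lives inside ray.util.metrics.Histogram and the error message here is illustrative.

```python
def validate_boundaries(boundaries):
    """Reject histogram bucket boundaries that are not strictly positive."""
    if any(b <= 0 for b in boundaries):
        raise ValueError(
            "Histogram boundaries must be greater than 0, got: %s" % (boundaries,)
        )
    return boundaries
```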
Simon Mo
3dd9c18c94
[Serve] Fix flaky test_controller_recover_and_delete (#26838) 2022-07-21 13:56:27 -07:00
Eric Liang
51ecc04ccb
[air] Don't request CPUs by default when use_gpu=True (#26805)
Signed-off-by: Eric Liang <ekhliang@gmail.com>

Requesting both CPUs and GPUs leads to unnecessary resource contention.
2022-07-21 12:48:18 -07:00
Clark Zinzow
da97efb585
[Datasets] Add Pandas-native groupby and sorting. (#26313)
This PR adds a Pandas-native implementation of groupby and sorting for Pandas blocks. Before this PR, we were converting to Arrow, doing groupbys + aggregations and sorting in Arrow land, and then converting back to Pandas; this to-from-Arrow conversion was happening both on the map side and the reduce side, which was very inefficient for Pandas blocks (many extra table copies). By adding Pandas-native groupby + sorting, we should see a decrease in memory utilization and faster performance when using the AIR preprocessors.
2022-07-21 11:04:13 -07:00
Cheng Su
94d50e7c57
[Datasets] Add AWS CLI info into S3 credential error message (#26789)
As a follow-up to #26669 (comment), we want to add AWS CLI command information into the S3 credential error message, so users have a better idea of how to further debug the read issue.
2022-07-21 10:25:07 -07:00
mwtian
6acd0a4c9b
Allow grpcio >= 1.48 (#26765)
The previously observed Python grpc warning / logspam seems to have been fixed for grpcio >= 1.48. And users would like to upgrade beyond grpcio 1.43 for better M1 support. However, grpcio 1.48 has not been released yet, so there is still a risk this change needs to be reverted if any problem is discovered later with Ray nightly + grpcio 1.48.
2022-07-21 10:03:41 -07:00
matthewdeng
728e2b36d6
[train] set auto_transfer cuda device (#26819)
This sets the CUDA Stream on the correct device (and not the default one) when calling train.torch.prepare_data_loader(auto_transfer=True).

Signed-off-by: Matthew Deng <matt@anyscale.com>
2022-07-21 09:50:32 -07:00
Jiajun Yao
4da78c489a
Revert "[log_monitor] Always reopen files (#26730)" (#26831) 2022-07-21 08:40:15 -07:00
Archit Kulkarni
1aad5d2136
[Ray Client] [runtime env] Skip env hook in Ray client server (#26688)
Previously, using an env_hook with Ray Client would only execute the env_hook on the server side (a Ray cluster machine).  An env_hook defined on the client side would never be executed.  But the main problem is with the server-side env_hook.

Consider the simple example where the env_hook rewrites the `working_dir` or `py_modules` with a local directory.

Currently, when using Ray Client, the `working_dir` and `py_modules` are uploaded to the GCS before `ray.init()` is called on the server.   This is a fundamental constraint because the server-side driver script needs to be able to import modules from the `working_dir` or `py_modules`.  After the upload, these fields are overwritten with the URIs for the uploaded packages.  

After this happens, on the server side Ray expects the `working_dir` and `py_modules` fields to only contain GCS URIs.  So overwriting `working_dir` to be a local directory after this occurs doesn't make sense (and Ray will rightfully throw a RuntimeEnv validation error here.)

If a cluster is set up with such an env hook, it will only work when `ray.init()` is called by the user on a cluster machine; i.e. it will only work in non-Ray Client cases.  If a user ever wants to use Ray Client with this cluster, it will be broken with no way to disable the env hook.  To remedy this, this PR disables the execution of the env_hook when using Ray Client.

We can consider adding support in the future for env_hooks to be executed on the client side when using Ray Client.
2022-07-21 10:10:11 -05:00
Jiajun Yao
3a48a79fd7
[Usage stats] Report total number of running jobs for usage stats purpose. (#26787)
- Report total number of running jobs
- Fix total number of nodes to include only alive nodes

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-07-21 01:37:58 -07:00
Tao Wang
62288724b2
[Python] More efficient node_table() in state.py (#26760)
This picks up https://github.com/ray-project/ray/pull/24088
The `get_node_table` already has the resources of nodes, so we don't need to invoke `get_node_resource_info` for every node again. This change will reduce the number of RPC calls and make the API more efficient.
2022-07-21 10:35:46 +08:00
Balaji Veeramani
ac1d21027d
[AIR] Add framework-specific checkpoints (#26777) 2022-07-20 19:33:27 -07:00
Ricky Xu
6ee37d4ad7
[Core][State Observability] Fix is_alive column with wrong column type that breaks filtering (#26739)
The is_alive column of the WorkerState has the wrong column type, which breaks filtering on is_alive
2022-07-20 16:38:15 -07:00
Alex Wu
9e7ddddff7
[log_monitor] Always reopen files (#26730)
This PR prevents the log monitor from keeping files open for long periods of time. In settings in which the autoscaler and head node are not tightly coupled, leaving files open assumes that the inode for a file never changes, but depending on how filesystem synchronization between the autoscaler and head node containers works, the inode could change. Thus, we should keep trying to reopen files.

This is done by setting the maximum number of open files to 1, so that it's easy to revert this behavior.

Co-authored-by: Alex <alex@anyscale.com>
2022-07-20 16:17:25 -07:00
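The reopen-on-every-poll idea can be sketched as follows: instead of holding a handle open (which pins one inode), reopen the path on each read and carry only a byte offset between polls. This is an illustration of the pattern, not the log monitor's actual code.

```python
import os
import tempfile


def read_new_lines(path, offset):
    """Open the file, seek to the saved offset, read new lines, close.

    Because the handle is not kept open between polls, a file that was
    replaced (new inode) is picked up on the next call.
    """
    with open(path, "r") as f:
        f.seek(offset)
        lines = f.readlines()
        return lines, f.tell()
```

Each poll passes in the offset returned by the previous poll, so only lines appended since then are read.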
Dmitri Gekhtman
d2ef342130
Try to create autoscaler log directory. (#26748)
Why are these changes needed?
Together with ray-project/kuberay#391, this should address #26064

Have the autoscaler try to make the Ray log directory before setting up logging.
ray-project/kuberay#391 should be good enough for this, but this PR makes things safer in case the KubeRay user overrides log mounts or something like that.
2022-07-20 14:00:37 -07:00
Siyuan (Ryans) Zhuang
55589d578c
[Core] Enhance docs of options (#26773)
* enhance docs

Signed-off-by: Siyuan Zhuang <suquark@gmail.com>
2022-07-20 13:30:36 -07:00
Siyuan (Ryans) Zhuang
0063d94166
[Core] Make "GetTimeoutError" a subclass of "TimeoutError" (#26771)
I am surprised by the fact that `GetTimeoutError` is not a subclass of `TimeoutError`, which is counter-intuitive and may discourage users from trying the timeout feature in `ray.get`, because you have to "guess" the correct error type. For most people, I believe the first error type in their mind would be `TimeoutError`.

This PR fixes this.
2022-07-20 14:37:39 -05:00
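The point of the subclass relationship can be shown in a few lines: once GetTimeoutError derives from the built-in TimeoutError, the error type most users guess first actually catches it. The class here is a stand-in for ray.exceptions.GetTimeoutError after this change.

```python
class GetTimeoutError(TimeoutError):
    """Stand-in for ray.exceptions.GetTimeoutError post-change."""


def call_that_times_out():
    # Simulates ray.get(ref, timeout=...) exceeding its timeout.
    raise GetTimeoutError("ray.get timed out")


def catch_with_builtin_error():
    try:
        call_that_times_out()
    except TimeoutError:  # the intuitive first guess now works
        return "caught"
```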
Sebastián Ramírez
8a45425e7b
Add extra types for datasets (#26648)

Why are these changes needed?
By using Protocols for the definition of callable classes, the type parameter in datasets can be preserved for the methods that work on rows.

This allows getting typing information and editor support in transformed datasets, including autocompletion for the parameters of lambdas passed to things like ds.map() and ds.filter().

For example, after several transformations, ds.filter() still gets autocompletion for the x parameter in the lambda, the editor knows it's an int.

https://user-images.githubusercontent.com/1326112/179423609-6d77da23-5f5e-47ce-a17f-6eb0d06d82d0.png

I see there was a TODO comment to do this trick with the protocol, so Clark already had this idea. 💡 The good news is that, by using typing_extensions, it's not necessary to wait until Python 3.8 is the minimum supported version.
2022-07-20 12:16:24 -07:00
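The Protocol trick described above can be sketched with a tiny Dataset-like class whose map()/filter() signatures preserve the row type parameter, so an editor can infer that the lambda parameter is an int after transformations. All names here are illustrative, not Ray's actual classes.

```python
from typing import Callable, Generic, Iterable, List, Protocol, TypeVar

T = TypeVar("T")
U = TypeVar("U")


class _Mapper(Protocol[T, U]):
    """Callback protocol: a callable (or callable class) from T rows to U rows."""

    def __call__(self, row: T) -> U: ...


class TypedDataset(Generic[T]):
    def __init__(self, rows: Iterable[T]) -> None:
        self._rows: List[T] = list(rows)

    def map(self, fn: _Mapper[T, U]) -> "TypedDataset[U]":
        # The return type carries U, so chained calls stay typed.
        return TypedDataset(fn(r) for r in self._rows)

    def filter(self, fn: Callable[[T], bool]) -> "TypedDataset[T]":
        return TypedDataset(r for r in self._rows if fn(r))

    def take(self) -> List[T]:
        return list(self._rows)
```

With these signatures, `TypedDataset([1, 2, 3]).map(lambda x: x * 2).filter(lambda x: x > 2)` keeps `x` typed as int in both lambdas.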