Commit graph

7525 commits

Author SHA1 Message Date
Peyton Murray
4d19c0222b
[AIR] Add rich notebook repr for DataParallelTrainer (#26335) 2022-08-16 08:51:14 -07:00
Dmitri Gekhtman
bceef503b2
[Kubernetes][docs] Restore legacy Ray operator migration discussion (#27841)
This PR restores notes for migration from the legacy Ray operator to the new KubeRay operator.

To avoid disrupting the flow of the Ray documentation, these notes are placed in a README accompanying the old operator's code.

These notes are linked from the new docs.

Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
2022-08-16 08:46:31 -07:00
xwjiang2010
91f506304d
[air] [checkpoint manager] handle nested metrics properly as scoring attribute. (#27715)
handle nested metrics properly as scoring attribute

Signed-off-by: xwjiang2010 <xwjiang2010@gmail.com>
2022-08-16 17:43:58 +02:00
Alex Wu
c2abfdb2f7
[autoscaler][observability] Observability into when/why nodes fail to launch (#27697)
This change adds launch failures to the recent failures section of `ray status` when a node provider provides structured error information. For node providers which don't provide this optional information, there is no change in behavior.

For reference, when trying to launch a node type with a quota issue, it looks like the following. InsufficientInstanceCapacity is the standard term for this issue.

```
======== Autoscaler status: 2022-08-11 22:22:10.735647 ========
Node status
---------------------------------------------------------------
Healthy:
 1 cpu_4_ondemand
Pending:
 quota, 1 launching
Recent failures:
 quota: InsufficientInstanceCapacity (last_attempt: 22:22:00)

Resources
---------------------------------------------------------------
Usage:
 0.0/4.0 CPU
 0.00/9.079 GiB memory
 0.00/4.539 GiB object_store_memory

Demands:
 (no resource demands)
```

```
available_node_types:
    cpu_4_ondemand:
        node_config:
            InstanceType: m4.xlarge
            ImageId: latest_dlami
        resources: {}
        min_workers: 0
        max_workers: 0
    quota:
        node_config:
            InstanceType: p4d.24xlarge
            ImageId: latest_dlami
        resources: {}
        min_workers: 1
        max_workers: 1
```
Co-authored-by: Alex <alex@anyscale.com>
2022-08-15 18:14:29 -07:00
Jiajun Yao
06ef4ab94e
Fix broken links in the code (#27873)
Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-08-15 13:11:42 -07:00
Archit Kulkarni
058c239cf1
[runtime env] Test common failure scenarios (#25977)
Tests the following failure scenarios:
- Fail to upload data in `ray.init()` (`working_dir`, `py_modules`)
- Eager install fails in `ray.init()` for some other reason (bad `pip` package)
- Fail to download data from GCS (`working_dir`)

Improves the following error message cases:
- Return RuntimeEnvSetupError on failure to upload working_dir or py_modules
- Return RuntimeEnvSetupError on failure to download files from GCS during runtime env setup

Not covered in this PR:
- RPC to agent fails (This is extremely rare because the Raylet and agent are on the same node.)
- Agent is not started or dead (We don't need to worry about this because the Raylet fate shares with the agent.)

The approach is to use environment variables to induce failures in various places.  The alternative would be to refactor the packaging code to use dependency injection for the Internal KV client so that we can pass in a fake. I'm not sure how much of an improvement this would be.  I think we'd still have to set an environment variable to pass in the fake client, because these are essentially e2e tests of `ray.init()` and we don't have an API to pass it in.
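
A hedged sketch of the env-var fault-injection pattern described above (the variable name here is hypothetical, not the one used in this PR):

```
import pytest
import ray
from ray.exceptions import RuntimeEnvSetupError

def test_working_dir_upload_failure(monkeypatch, tmp_path):
    # Hypothetical fault-injection switch: the PR sets internal env vars
    # to force failures at specific points in the packaging code.
    monkeypatch.setenv("RAY_TESTING_FAIL_UPLOAD", "1")
    with pytest.raises(RuntimeEnvSetupError):
        ray.init(runtime_env={"working_dir": str(tmp_path)})
```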
2022-08-15 11:35:56 -05:00
SangBin Cho
d654636bfc
[Test] Fix broken test_base_trainer (#27855)
The test was written incorrectly. The root cause was that the trainer and the worker each require 1 CPU, meaning the pg requires {CPU: 1} * 2 resources.

And when the max fraction is 0.001, we only allow up to 1 CPU for the pg, so we cannot schedule the requested pgs in any case.
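
A hedged sketch of the resource math (placement_group is the public API; the _max_cpu_fraction_per_node kwarg follows the experimental API from #27035):

```
import ray
from ray.util.placement_group import placement_group

ray.init(num_cpus=4)

# Trainer + worker each require 1 CPU, so the PG needs two {CPU: 1} bundles.
# With a max CPU fraction of 0.001, at most 1 CPU on the node is reservable
# by placement groups, so this 2-CPU PG can never be scheduled.
pg = placement_group(
    [{"CPU": 1}, {"CPU": 1}],
    strategy="PACK",
    _max_cpu_fraction_per_node=0.001,
)
```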
2022-08-15 07:50:18 -07:00
SangBin Cho
9ece110d27
[State Observability] Promote the API to alpha (#27788)
# Why are these changes needed?

- Promote APIs to PublicAPI(alpha)
- Change pre-alpha -> alpha
- Fix a bug where ray_logs was displayed in ray --help

Release test result: #26610
Some APIs are subject to change at the beta stage (e.g., ray list jobs or ray logs).
2022-08-13 23:43:01 -07:00
SangBin Cho
999715ebec
[Core][Placement Group] Handling edge cases of max_cpu_fraction argument (#27035)
Why are these changes needed?
This PR fixes edge cases when the max_cpu_fraction argument is used by the placement group. Specifically, there was an edge case where the placement group could not be scheduled when a task or actor was already scheduled and occupying resources.

The original logic to decide whether bundle scheduling exceeds the CPU fraction was as follows:

1. Calculate max_reservable_cpus of the node.
2. Calculate currently_used_cpus + bundle_cpu_request (per bundle) == total_allocation of the node.
3. Don't schedule if total_allocation > max_reservable_cpus for the node.

However, this caused issues because currently_used_cpus can include resources that are not allocated by placement groups (e.g., actors). As a result, when an actor was already occupying resources, the total_allocation was incorrect. For example:

- 4 CPUs
- 0.999 max fraction (so it can reserve up to 3 CPUs)
- 1 actor already created (1 CPU)
- PG with CPU: 3

Now the pg cannot be scheduled because total_allocation == 1 actor (1 CPU) + 3 bundles (3 CPUs) == 4 CPUs > 3 CPUs (max frac CPUs). However, this should work because the pg can use up to 3 CPUs, and we have enough resources.

The root cause is that when we calculate the max fraction, we should only take into account resources allocated by bundles. To fix this, I changed the logic as follows:

1. Calculate max_reservable_cpus of the node.
2. Calculate **currently_used_cpus_by_pg_bundles** + **bundle_cpu_request (sum of all bundles)** == total_allocation_from_pgs_and_bundles of the node.
3. Don't schedule if total_allocation_from_pgs_and_bundles > max_reservable_cpus for the node.
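
As a rough illustration of the corrected check, here is a minimal sketch in plain Python (all names are illustrative; the real logic lives in the C++ scheduler, and rounding of the reservable CPUs is elided):

```
# Minimal sketch of the corrected max_cpu_fraction check; illustrative only.
def can_schedule_bundles(
    node_total_cpus,
    max_cpu_fraction,
    cpus_reserved_by_existing_pg_bundles,
    requested_bundle_cpus,
):
    max_reservable_cpus = node_total_cpus * max_cpu_fraction
    # Only count CPUs already reserved by PG bundles; CPUs used by tasks
    # and actors outside placement groups are intentionally excluded.
    total_pg_allocation = (
        cpus_reserved_by_existing_pg_bundles + requested_bundle_cpus
    )
    return total_pg_allocation <= max_reservable_cpus

# The example above: 4 CPUs, 0.999 max fraction, an actor using 1 CPU
# (ignored), and a PG requesting 3 CPUs -> now schedulable.
assert can_schedule_bundles(4.0, 0.999, 0.0, 3.0)
```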
2022-08-12 17:40:11 -07:00
Sihan Wang
7e7c93f6ba
[Serve] Fix memory leak issue in serve inference (#27815) 2022-08-12 17:11:37 -07:00
zcin
8cb09a9fc5
Revert "Revert "[serve] Integrate and Document Bring-Your-Own Gradio Applications"" (#27662) 2022-08-12 15:12:20 -07:00
Clark Zinzow
f0404e00cd
[Core] [Hotfix] Change "task failed with unretryable exception" log statement to debug-level. (#27714)
Serve relies on being able to do quiet application-level retries, and this info-level logging is resulting in log spam hitting users. This PR demotes this log statement to debug-level to prevent this log spam.

Co-authored-by: simon-mo <simon.mo@hey.com>
2022-08-12 11:28:49 -07:00
Cheng Su
7c7828f818
[Datasets] Improve size estimation of image folder data source (#27219)
This PR improves the in-memory data size estimation of the image folder data source. Before this PR, we used the on-disk file size as the estimate of in-memory data size, which can be inaccurate due to image compression and in-memory image resizing.

Given that `size` and `mode` were made optional in https://github.com/ray-project/ray/pull/27295, this PR tackles the simple case where `size` and `mode` are both provided:
* `size` and `mode` are provided: just calculate the in-memory size based on the dimensions; no need to read any image (this PR; see the sketch below)
* `size` or `mode` is not provided: sampling is needed to determine the in-memory size (will be done in a follow-up PR).
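
The dimension-based estimate reduces to simple arithmetic per image (a hedged sketch; how the datasource aggregates across files is elided here):

```
import numpy as np

# Estimated in-memory size of one decoded image, assuming uint8 pixels:
# height * width * channels * bytes per pixel.
height, width, channels = 64, 64, 3  # size=(64, 64), mode="RGB"
per_image_bytes = height * width * channels * np.dtype(np.uint8).itemsize
print(per_image_bytes)  # 12288
```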

Here is an example of the estimated size for our test image folder:

```
>>> import ray
>>> from ray.data.datasource.image_folder_datasource import ImageFolderDatasource
>>> root = "example://image-folders/different-sizes"
>>> ds = ray.data.read_datasource(ImageFolderDatasource(), root=root, size=(64, 64), mode="RGB")
>>> ds.size_bytes()
40310
>>> ds.fully_executed().size_bytes()
37428
```

Without this PR:

```
>>> ds.size_bytes()
18978
```
2022-08-12 11:26:03 -07:00
matthewdeng
58495fe594
[data][docs] fix broken links (#27818) 2022-08-12 11:17:34 -07:00
Archit Kulkarni
6c45625d6d
[runtime env] [CI] Skip flaky test_runtime_env_working_dir_2 tests on mac (#27799) 2022-08-12 09:39:19 -07:00
Archit Kulkarni
518c74020c
[Serve] [Doc] Serve add API ref for Deployment.bind() and serve.build (#27811) 2022-08-12 09:38:58 -07:00
Simon Mo
bf9f0621b9
[Serve] Minor fix to replica shutdown (#27778) 2022-08-12 09:33:08 -07:00
Simon Mo
0badbb8b1e
[Serve][docs] Refresh http-guide (#27779)
- Moved most code snippets to doc_code
- Added a section about the DAGDriver
- Added a section discussing when you should use each abstraction layer.
2022-08-12 11:06:36 -05:00
shrekris-anyscale
e15960ed7e
[Serve] [Docs] Update the "Monitoring Ray Serve" Page (#27777)
The "Monitoring Ray Serve" page explains how to inspect your Ray Serve applications. This change updates the page to remove outdated metrics that Serve no longer exposes and to upgrade code samples to use 2.0 APIs. It also improves the content's readability and organization.

Link to updated "Monitoring Ray Serve" page: https://ray--27777.org.readthedocs.build/en/27777/serve/monitoring.html
2022-08-12 11:05:31 -05:00
matthewdeng
75d13faa50
[serve] fix grammar check in test (#27819) 2022-08-12 09:02:31 -07:00
Eric Liang
52f7b89865
[docs] Editing pass on clusters docs, removing legacy material and fixing style issues (#27816) 2022-08-12 00:15:03 -07:00
Nikita Vemuri
87dd078e1e
fix external dashboard url if connecting to existing cluster (#27807)
Signed-off-by: Nikita Vemuri <nikitavemuri@gmail.com>
2022-08-11 17:56:24 -07:00
Jian Xiao
b1cad0a112
[Datasets] Use detached lifetime for stats actor (#25271)
The actor handle held by the Ray client becomes dangling if the Ray cluster is shut down; in that case, if the user tries to get the actor again, it results in a crash. This happened to a real user and blocked them from making progress.

This change makes the stats actor detached, and instead of keeping a handle, we access it via its name. This way we can make sure to re-create this actor if the cluster gets restarted.
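
A hedged sketch of the detached, access-by-name pattern (the actor and name below are illustrative, not the actual Datasets internals):

```
import ray

@ray.remote
class StatsActor:
    def record(self, key, value):
        ...

# Detached lifetime: the actor outlives the driver or client session.
# get_if_exists=True fetches the existing actor by name or creates it,
# so a restarted cluster can transparently get a fresh actor.
actor = StatsActor.options(
    name="datasets_stats_actor",
    lifetime="detached",
    get_if_exists=True,
).remote()
```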

Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
2022-08-11 17:47:13 -07:00
Cade Daniel
b7a6a1294a
Fix linkcheck introduced by Ray Clusters doc changes (#27804)
Broken links introduced by #27756

Will defer to @ericl if he wants to merge this or fix it himself.

Signed-off-by: Cade Daniel <cade@anyscale.com>
2022-08-11 16:55:20 -07:00
Chris K. W
74f28f9270
[client] Fix ignore_reinit_error behavior in ray client (#26165)
Ray client currently errors on reinit even if ignore_reinit_error is set.
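
A minimal sketch of the behavior being fixed (the address is a placeholder):

```
import ray

# With Ray client, a second init should be a no-op when the flag is set,
# but it previously raised an error anyway.
ray.init("ray://127.0.0.1:10001", ignore_reinit_error=True)
ray.init("ray://127.0.0.1:10001", ignore_reinit_error=True)  # no-op after this fix
```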
2022-08-11 14:56:54 -07:00
shrekris-anyscale
8a6d2db1d3
[Serve] Fix grammar in deployment logs (#27780) 2022-08-11 13:51:42 -07:00
Ricky Xu
5ea4747448
[Core][State Observability] Nightly release test for state API (#26610)
* Initial

* Correctness test skeleton

* Added limit for listing

* Updated grpc config

* no more waiting

* metrics

* Updated constant and add test

* renamed

* actors

* actors

* actors

* dada

* actor dead?

* Script

* correct test name

* limit

* Added timeout

* release test /2

* Merged

* format+doc

* wip

Signed-off-by: rickyyx <ricky@anyscale.com>

* revert packag-lock

Signed-off-by: rickyyx <rickyx@anyscale.com>

* wip

* results

Signed-off-by: rickyx <rickyx@anyscale.com>

Signed-off-by: rickyyx <rickyx@anyscale.com>
Signed-off-by: rickyyx <ricky@anyscale.com>
Signed-off-by: rickyx <rickyx@anyscale.com>
Co-authored-by: rickyyx <ricky@anyscale.com>
2022-08-11 07:01:01 -07:00
Artur Niederfahrenhorst
c855469845
[RLlib] pin gym-minigrid @ 1.0.3 (#27761) 2022-08-11 12:27:44 +02:00
matthewdeng
178b1e8a25
[data] enable test_split.py tests (#27150)
Signed-off-by: Matthew Deng <matt@anyscale.com>
2022-08-10 22:15:34 -07:00
Yi Cheng
c5952f2163
[serve] Add an internal os env to turn the head node pin off (#27763)
When the node that the controller lived on died, GCS would try to reschedule the controller to the same node. But GCS only marks the node as failed after 120s when GCS restarts (or 30s if only the raylet died).

This PR fixes it by unpinning the controller from the head node, so as long as GCS is alive, it reschedules the controller immediately. But we can't turn this on by default, so we introduce an internal flag for it.
2022-08-10 18:13:54 -07:00
Jiajun Yao
27e38f81bd
Pin _StatsActor to the driver node (#27765)
Similar to what's done in #23397

This allows the actor to fate-share with the driver and tolerate worker node failures.
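
A hedged sketch of pinning an actor to the driver's node (NodeAffinitySchedulingStrategy is the public mechanism; whether _StatsActor uses exactly this path is an assumption):

```
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

@ray.remote
class _StatsActor:
    pass

# Pin to the driver's node; soft=False means the actor is not rescheduled
# onto another node, so it fate-shares with the driver node while
# tolerating failures of other worker nodes.
actor = _StatsActor.options(
    scheduling_strategy=NodeAffinitySchedulingStrategy(
        node_id=ray.get_runtime_context().get_node_id(),  # older Ray: .node_id.hex()
        soft=False,
    )
).remote()
```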
2022-08-10 17:55:06 -07:00
Balaji Veeramani
7da7dbe3fd
[AIR] Improve preprocessor documentation (#27215)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-08-10 17:13:22 -07:00
Cheng Su
853c859037
[Datasets] Better error message for partition filtering if no file found (#27353)
A user raised an issue in #26605, where they found the error message quite non-actionable when partition filtering was applied to input files and no files with the required extension were found.

Signed-off-by: Cheng Su <scnju13@gmail.com>
2022-08-09 22:42:20 -07:00
zcin
ea2a11080f
[serve][doc] Update Serve API in tutorials code (#27579) 2022-08-09 19:59:14 -07:00
Cheng Su
bc5d8d9176
[AIR] Replace references of to_tf with iter_tf_batches (#27672) 2022-08-09 16:00:02 -07:00
Jiajun Yao
f084546d41
Fix out-of-band deserialization of actor handle (#27700)
When we deserialize an actor handle via pickle, we register it with an outer object ref equal to itself, which is wrong. For out-of-band deserialization, there should be no outer object ref.

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-08-09 14:25:14 -07:00
Stephanie Wang
7d0fcd7ec6
[core] Allow reuse of cluster address if Ray is not running (#27666)
Signed-off-by: Stephanie Wang <swang@cs.berkeley.edu>

The cluster address is now written to a temp file. Previously, we raised an error if `ray start --head` tried to reuse the old cluster address in the temp file, even if Ray was no longer running. This PR allows `ray start --head` to continue if it can't find any GCS process associated with the recorded cluster address.
Related issue number

Closes #27021.
2022-08-09 13:48:48 -07:00
Sihan Wang
22d1be5823
[Serve] Make serve.run to start serve with http on EveryNode mode (#27668)
Signed-off-by: Sihan Wang <sihanwang41@gmail.com>
2022-08-09 09:29:38 -07:00
Nikita Vemuri
0e74bc20b5
[core] Fix how protocol is removed for external ray dashboard URL (#27652)
* fix how protocol is removed for external dashboard url
2022-08-08 18:23:12 -07:00
matthewdeng
fbdec1add0
[air] remove rllib dependency from tensorflow_predictor (#27671) 2022-08-08 18:05:48 -07:00
Alan Guo
3a819fafb7
Force grpcio to be >= 1.42.0 for python 3.10 (#27269) 2022-08-08 17:37:18 -07:00
Clark Zinzow
3b151c581e
[Datasets] Delay expensive tensor extension type import until Parquet reading. (#27653)
The tensor extension import is a bit expensive since it will go through Arrow's and Pandas' extension type registration logic. This PR delays the tensor extension type import until Parquet reading, which is the only case in which we need to explicitly register the type.

I have confirmed that the Parquet reading in doc/source/data/doc_code/tensor.py passes with this change.
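
A hedged sketch of the delayed-import pattern (the function and read step are illustrative, not the actual Datasets reader code):

```
def _read_parquet(path):
    # Import lazily: registering the tensor extension type with Arrow and
    # Pandas is expensive, so only the Parquet reading path pays that cost.
    from ray.data.extensions import ArrowTensorType  # noqa: F401

    import pyarrow.parquet as pq
    return pq.read_table(path)
```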
2022-08-08 17:06:25 -07:00
Yi Cheng
dac7bf17d9
[serve] Make serve agent not blocking when GCS is down. (#27526)
This PR fixes several issues that block the serve agent when GCS is down. We need to make sure the serve agent is always alive so that external requests can be sent to the agent to check the status.

- The internal KV used in dashboard/agent blocked the agent; we use the async one instead.
- The serve controller used ray.nodes, a blocking call that can block forever; changed it to use the GCS client with a timeout.
- The agent used the serve controller client, a blocking call with max retries = -1, which blocks until the controller is back.

To enable Serve HA, we also need to set up:

- RAY_gcs_server_request_timeout_seconds=5
- RAY_SERVE_KV_TIMEOUT_S=5

which we should set in KubeRay.
2022-08-08 16:29:42 -07:00
Balaji Veeramani
87ff765647
[AIR] Make Concatenator deterministic (#27575) 2022-08-08 15:49:46 -07:00
Yi Cheng
cadeccd9b7
[core] Fix job counter not working with storage namespace (#27627)
JobCounter is not working with a storage namespace right now because the key is the same across namespaces.

This PR fixes it by just adding the namespace there, since that is the minimal change and therefore safer.

A follow-up PR is needed to clean up Redis storage in C++.
2022-08-08 14:24:32 -07:00
Stephanie Wang
ccbae3325c
[core] Reconstruct manually freed objects (#27567)
Objects freed by the manual and internal free call previously would not get reconstructed. This PR introduces the following semantics after a free call:

- If no failure occurs, and the object is needed by a downstream task, an ObjectFreedError will be thrown.
- If a failure occurs, causing a downstream task to be re-executed, the freed object will get reconstructed as usual.
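
A hedged sketch of the first case (free is an internal, non-public API; the import path below is an assumption and may vary by version):

```
import ray
from ray._private.internal_api import free  # internal API; path may vary

@ray.remote
def produce():
    return 1

@ray.remote
def consume(x):
    return x + 1

ref = produce.remote()
free([ref])  # manually free the object

# With no failures, a downstream task that needs the freed object now
# surfaces an ObjectFreedError instead of undefined behavior.
ray.get(consume.remote(ref))  # raises, wrapping ObjectFreedError
```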

Also fixes some incidental bugs:

- Don't crash on failure to contact the local raylet during object recovery. This produces a nicer error message, because we instead throw an application-level error when someone tries to get the object.
- Fix a circular lock dependency between task failure <> task dependency resolution.

Related issue number

Closes #27265.

Signed-off-by: Stephanie Wang <swang@cs.berkeley.edu>
2022-08-08 13:40:51 -07:00
Yi Cheng
1533976b82
[deflakey] test_error_handling.py in workflow (#27630)
Signed-off-by: Yi Cheng <chengyidna@gmail.com>

## Why are these changes needed?
This test times out. Move it to large.
```
WARNING: //python/ray/workflow:tests/test_error_handling: Test execution time (288.7s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="long" or size="large".
```
2022-08-08 13:38:37 -07:00
SangBin Cho
be64df6f5d
Fix a uncaught exception upon deallocation for actors (#27637)
As described at https://joekuan.wordpress.com/2015/06/30/python-3-__del__-method-and-imported-modules/, the `__del__` method doesn't guarantee that modules or function definitions are still referenced and not GC'ed. That means any modules, functions, or global variables it accesses may already have been garbage collected.

This means we should not access any modules, functions, or global variables inside the `__del__` method. While it's something we should handle more holistically in the near future, this PR fixes the issue in the short term.

The problem was that all Ray actor methods are decorated by trace_helper.py to make them compatible with OpenTelemetry (maybe we should make this optional), and the `__del__` method was also decorated. When `__del__` is invoked, some of the functions used within the tracing decorator may already have been deallocated (in this case, _is_tracing_enabled). This PR fixes the issue by excluding `__del__` from the tracing decoration.
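
For reference, a minimal standalone illustration of the underlying Python hazard (unrelated to Ray internals):

```
# Run as a script: at interpreter shutdown, module globals such as `os`
# may already be cleared, so code inside __del__ can fail unpredictably.
import os

class Holder:
    def __del__(self):
        # May raise or print "Exception ignored in: ..." during shutdown,
        # because `os` can be garbage collected before this runs.
        print(os.getpid())

holder = Holder()  # finalized during interpreter shutdown
```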
2022-08-08 11:51:25 -07:00
Zyiqin-Miranda
b3f06d97b2
[autoscaler] Consolidate CloudWatch agent/dashboard/alarm support; Add unit tests for AWS autoscaler CloudWatch integration (#22070)
This PR mainly adds two improvements:

1. We introduced support for three CloudWatch config types in previous PRs: Agent, Dashboard, and Alarm. In this PR, we generalize the logic across all three config types by using the enum CloudwatchConfigType.
2. It adds unit tests to ensure the correctness of Ray autoscaler CloudWatch integration behavior.
2022-08-08 11:45:07 -07:00
Balaji Veeramani
5087511c46
[AIR] Change FeatureHasher input schema to expect token counts (#27523)
This makes FeatureHasher work more like sklearn's FeatureHasher.
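
For context, a hedged sketch of the analogous sklearn behavior (this shows sklearn's API, which the PR says Ray's FeatureHasher now resembles; it is not Ray's exact signature):

```
from sklearn.feature_extraction import FeatureHasher

# sklearn's FeatureHasher consumes token *counts*, not raw token lists.
hasher = FeatureHasher(n_features=8, input_type="dict")
features = hasher.transform([{"the": 2, "quick": 1, "fox": 1}])
print(features.toarray())
```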
2022-08-08 11:41:57 -07:00