Commit graph

6870 commits

Stephanie Wang
5cee6135b4
[core] Fix bugs in data locality (#24698) (#25092)
Redo for PR #24698:

This fixes two bugs in data locality:

- When a dependent task is already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere.
- The locality policy did not take into account spilled locations.

Added C++ unit tests and Python tests for the above.

Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
2022-05-25 10:33:49 -07:00
Jiajun Yao
be93bb340d
Remove MLDataset (#25041)
MLDataset is replaced by Ray Dataset.
2022-05-25 09:33:29 -07:00
Kai Fricke
833e357a1f
[air] Do not use gzip for checkpoint dict conversion (#25177)
Gzipping binary data is inefficient and slows down data transfer significantly.
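For illustration only (not the actual AIR code path): compressing an effectively incompressible binary payload with gzip barely shrinks it while still costing CPU time. The payload size here is arbitrary.

```python
import gzip
import os
import time

# Stand-in for an already-binary checkpoint blob (e.g. serialized model weights).
payload = os.urandom(16 * 1024 * 1024)

start = time.perf_counter()
compressed = gzip.compress(payload)
elapsed = time.perf_counter() - start

# Expect almost no size reduction, but a noticeable compression cost.
print(f"{len(payload)} -> {len(compressed)} bytes in {elapsed:.2f}s")
```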
2022-05-25 17:11:00 +02:00
Kai Fricke
67cd984b92
[tune] Add annotations/set scope for Tune classes (#25077)
This PR adds API annotations or changes the scope of several Ray Tune library classes.
2022-05-25 15:21:28 +02:00
Yi Cheng
af895b3676
[gcs] Fix detached actor fail to restart when GCS restarted. (#25131)
When loading data from GCS, we treated detached actors the same as normal actors.
But detached actors live beyond the job's scope and should be loaded even when the job is finished.
This PR fixes that.
2022-05-25 01:58:52 -07:00
Siyuan (Ryans) Zhuang
f67871c1f7
[workflow] Fast workflow indexing (#24767)
* workflow indexing

* simplify workflow storage API

* Only fix workflow status when updating the status.

* support status filter
2022-05-24 20:21:08 -07:00
mwtian
fa32cb7c40
Revert "[core] Resubscribe GCS in python when GCS restarts. (#24887)" (#25168)
This reverts commit 7cf4233858.
2022-05-24 18:13:40 -07:00
Stephanie Wang
c8765385cb
[datasets] Fix scheduling strategy propagation and stats collection in push-based shuffle (#25108)
This fixes two bugs in Datasets push-based shuffle:

- The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same ray.remote options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a NodeAffinitySchedulingStrategy instead.
- We were only reporting partial stats for the merge stage.

Related issue number

Issue 1 is necessary for performance at large-scale (#24480).
2022-05-24 18:01:40 -07:00
Balaji Veeramani
f032849aa2
[AIR] Don't ravel predictions in TensorflowPrediction.predict (#25138)
`TensorflowPredictor.predict` doesn't correctly produce logits. For more information, see #25137.
2022-05-24 17:38:48 -07:00
xwjiang2010
51dbd99a25
[air] Minor Doc updates (#25097)
Update a few docs and param names.
2022-05-24 17:15:03 -07:00
Philipp Moritz
323605d169
Support file:// for runtime_env working directories in jobs (#25062)
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories.
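A minimal sketch of how a shared NFS working directory might be used with the Jobs API; the dashboard address, entrypoint, and NFS path are illustrative, and it assumes every node mounts the same path.

```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")
client.submit_job(
    entrypoint="python train.py",
    # file:// URI pointing at a directory visible on all nodes (e.g. an NFS mount).
    runtime_env={"working_dir": "file:///mnt/nfs/shared/my_project"},
)
```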

Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-05-24 16:17:18 -07:00
Jiao
f27e85cd7d
[Serve][Deployment Graph][Perf] Add minimal executor DAGNode (#24754)
closes #24475

The current deployment graph has big perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to power the DAG API, especially for deeply nested objects in args where we rely on pickling; but as a result, each execution ends up re-creating and replacing every `DAGNode` instance involved, which incurs overhead.

Some overhead is inevitable due to pickling and executing DAGNode Python code, but it can be kept quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds.

Meanwhile, the elephant in the room is that DeploymentNode and its relatives do far more work in their constructors than necessary, slowing everything down. So the fix is as simple as:

1) Introduce a new set of executor DAG node types that contain only the minimal information needed to preserve the DAG structure, the traversal mechanism, and the ability to call the relevant deployment handles.
2) Add a simple new pass in our build() that generates executor nodes and replaces the originals, producing a final executor DAG used to run the graph.

The current Ray DAG -> Serve DAG pass mixes in a lot of logic related to deployment generation and init args; in the longer term we should remove it, but our correctness currently depends on it, so I'd rather leave that for a separate PR.
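As a rough, generic sketch of the executor-DAG idea (none of these classes are Serve's actual classes; they only illustrate "build once, traverse a lightweight structure per request"):

```python
class AuthoredNode:  # stand-in for the heavy DAGNode built via the user-facing API
    def __init__(self, handle, children=()):
        self.handle = handle
        self.children = list(children)

class ExecutorNode:  # minimal node used at request time
    def __init__(self, handle, children):
        self.handle = handle
        self.children = children

    def execute(self, request):
        child_results = [child.execute(request) for child in self.children]
        return self.handle(request, *child_results)

def build_executor_dag(node: AuthoredNode) -> ExecutorNode:
    # One-time pass: keep only structure plus the callable needed at runtime.
    return ExecutorNode(node.handle, [build_executor_dag(c) for c in node.children])

# Build once, then every request traverses only the lightweight executor DAG.
leaf = AuthoredNode(lambda req: req + 1)
root = AuthoredNode(lambda req, x: x * 2, [leaf])
print(build_executor_dag(root).execute(1))  # -> 4
```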

### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```

### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```

### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
2022-05-24 13:23:38 -05:00
shrekris-anyscale
8b3451318c
[Serve] Update Serve status formatting and processing (#24839) 2022-05-24 11:07:41 -07:00
Kai Fricke
aaee8f09f1
[tune/train] Consolidate checkpoint manager 1: Common checkpoint manager class (#24771)
This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core.

- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core
- This PR prepares Tune to move to a CheckpointStrategy object
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather than at runtime for Ray Train)

The consolidation is split into three PRs:

1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train
3. #24430 - based on #24772, adds the integration for Ray Tune
2022-05-24 19:07:12 +01:00
Jimmy Yao
9e3c88d727
[GCP] Update TPU Region (#25123)
The available TPU regions changed; there is no `central_f1` anymore.
2022-05-24 09:38:06 -07:00
SangBin Cho
a7e759317b
[State Observability API] Error handling (#24413)
This improves error handling per https://docs.google.com/document/d/1IeEsJOiurg-zctOcBjY-tQVbsCmURFSnUCTkx_4a7Cw/edit#heading=h.pdzl9cil9e8z (the RPC part).

Semantics:
If all queries to the source fail, raise a RayStateApiException.

If only some queries fail, warnings.warn the partial failure when print_api_stats=True. It is True for the CLI, and False when used from the Python API or when json/yaml output is required.
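An illustrative sketch of these semantics (the exception class and result shape here are stand-ins, not the actual implementation):

```python
import warnings

class RayStateApiException(Exception):
    pass

def aggregate_query_results(results, print_api_stats=True):
    failures = [r for r in results if r.get("error")]
    if failures and len(failures) == len(results):
        raise RayStateApiException("All queries to the source failed.")
    if failures and print_api_stats:
        # CLI path: surface the partial failure; Python API / json / yaml stay silent.
        warnings.warn(f"{len(failures)}/{len(results)} queries failed; output is partial.")
    return [r["data"] for r in results if not r.get("error")]
```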
2022-05-24 03:56:49 -07:00
Adrish Dey
e7e75b46e1
[tune] rolling back wandb service. replacing deprecated wandb methods (#25132)
Follow up: #24017

Briefly, the wandb service is still in an experimental stage and is not ready to be released as an integration without extensive testing. Hence, we are rolling back the recent update to the integration until this feature is ready to ship.
2022-05-24 11:22:11 +01:00
Gagandeep Singh
5b9b4fa018
Ignore previous tasks before submitting ones via map and map_unordered (#23684) 2022-05-24 00:20:58 -07:00
Dmitri Gekhtman
806c187878
[autoscaler] Flush stdout and stdin when running commands. (#19473)
Flush command stdout/stderr before exiting CommandRunner.run, so that setup command output is less likely to get swallowed.
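The gist of the change, as a schematic sketch rather than the actual CommandRunner code:

```python
import sys

def run_setup_command(cmd: str) -> None:
    # ... run the command and stream its output to stdout/stderr ...
    print(f"finished: {cmd}")
    # Flush before returning so buffered setup-command output isn't swallowed
    # if the process exits (or the session closes) immediately afterwards.
    sys.stdout.flush()
    sys.stderr.flush()

run_setup_command("pip install -r requirements.txt")
```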
2022-05-23 23:17:30 -07:00
xwjiang2010
8703d5e9d0
[air preprocessor] Add limit to OHE. (#24893) 2022-05-23 22:37:15 -07:00
Yi Cheng
7cf4233858
[core] Resubscribe GCS in python when GCS restarts. (#24887)
This is a follow-up PRs of https://github.com/ray-project/ray/pull/24813 and https://github.com/ray-project/ray/pull/24628

Unlike the change in the C++ layer, where the resubscription is done by GCS broadcasting a request to the raylet/core worker and the client side doing the resubscription, in the Python layer we detect the failure on the client side.

In case of a failure, the protocol is (a rough sketch in code follows the list):

1. Call subscribe.
2. If resubscribing times out, throw an exception; this will crash the system. This is OK because if GCS has been down longer than expected, we expect the Ray cluster to be down as well.
3. Continue polling once the subscription succeeds.
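```python
# Rough sketch of the client-side protocol above; names are illustrative,
# not Ray's actual pubsub internals.
def poll_loop(subscriber, handle_message, resubscribe_timeout_s=30):
    while True:
        try:
            msg = subscriber.poll()  # long-poll for the next message
        except ConnectionError:
            # GCS looks down: resubscribe. If this times out, let the exception
            # propagate and crash the process, since a GCS outage longer than
            # expected means the cluster is considered down.
            subscriber.subscribe(timeout=resubscribe_timeout_s)
            continue
        handle_message(msg)
```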

However, there is an extreme case where things might be broken: the client might miss detecting a failure.

This could happen if a long-poll has returned and the Python layer is doing its own work, and before it sends another long-poll, GCS restarts and recovers.

Here we are not going to take care of this case because:
1. GCS usually takes several seconds to come back up, and the Python layer's work is simply pushing data into a queue (sync version). The async version is only used in the Dashboard, which is not a critical component.
2. pubsub in the Python layer is not doing critical work: it handles logs/errors for Ray jobs;
3. the Dashboard can simply be restarted to fix the issue.


A known issue here is that we might miss logs in case of GCS failure due to the following reasons:

- The Python pubsub only does best-effort publishing. If publishing fails too many times, it skips the message (messages are lost on the producer side).
- If a message is pushed to GCS but the worker hasn't resubscribed yet, the pushed message is lost (messages are lost on the consumer side).

We think it's reasonable and valid behavior given that the logs are not defined to be a critical component and we'd like to simplify the design of pubsub in GCS.

Another thing is `run_functions_on_all_workers`. We plan to stop using it within Ray core and deprecate it in the longer term. But it won't cause a problem for the current cases because:

1. It's only set in driver and we don't support creating a new driver when GCS is down.
2. When GCS is down, we don't support starting new ray workers.

And `run_functions_on_all_workers` is only used when we initialize driver/workers.
2022-05-23 13:06:33 -07:00
Antoni Baum
36b1b4ce0c
Fix filelock in _delete_path (#25093) 2022-05-23 20:58:02 +01:00
Balaji Veeramani
50c31b8466
[Data] Add partitioning classes to Data API reference (#24203) 2022-05-23 09:34:41 -07:00
shrekris-anyscale
b9fb902a4b
Revert "[serve] Use soft constraint for placing controller on the head node (#24934)" (#25050)
This reverts commit 737d16328c.
2022-05-23 11:31:23 -05:00
Sven Mika
37799751df
[Serve + RLlib] Fix serve tutorial_rllib for Win. PyGame needs to be installed as of gym==0.23. (#25080) 2022-05-23 17:43:35 +02:00
Archit Kulkarni
a67c8a0739
[runtime_env] Add temporary URI reference to prevent URI deletion before job starts (#24719)
Packages are uploaded to the GCS for `runtime_env`.  These packages are garbage collected when their refcount becomes zero.

The problem is the reference doesn't get incremented until the job starts, which happens after the package is uploaded.  It's possible for the package's refcount to go to zero in between the upload and when the job starts, causing the package to be deleted before it's needed by the job.  It's likely the cause of https://github.com/ray-project/ray/issues/23423.

We can't just increment the refcount at the time of upload, because if the script is killed before the job is started (e.g. via Ctrl-C) then the reference will never be decremented and the package will never be deleted.

The solution in this PR is to increment the refcount at the time of upload, but automatically decrement after a configurable timeout (default 30s).  This should be enough time for the job to start.  When the job starts, it increments the refcount as usual and decrements it when the job finishes or is killed.
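An illustrative sketch of the temporary-reference idea described above; the class and method names are stand-ins, not the actual GCS/runtime_env internals.

```python
import threading

class URIReferenceTable:
    def __init__(self):
        self._counts = {}
        self._lock = threading.Lock()

    def increment(self, uri):
        with self._lock:
            self._counts[uri] = self._counts.get(uri, 0) + 1

    def decrement(self, uri):
        with self._lock:
            self._counts[uri] -= 1
            if self._counts[uri] == 0:
                # At this point the package behind `uri` is eligible for deletion.
                del self._counts[uri]

    def add_temporary_reference(self, uri, expiration_s=30.0):
        # Pin the package at upload time, then release automatically so a driver
        # killed before the job starts cannot leak the reference forever.
        self.increment(uri)
        threading.Timer(expiration_s, self.decrement, args=(uri,)).start()
```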

Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
2022-05-23 10:25:04 -05:00
mwtian
50d49a2d7a
[Core] use higher niceness for workers (#24928)
Looking at past failures of dataset_shuffle_push_based_random_shuffle_1tb, and when running it on my own, I noticed that raylets are killed because GCS is not able to respond to them in time. It seems that at the beginning of the run there is a huge CPU spike which starves GCS of CPU. In the same spirit as giving workers higher OOM scores, we can give workers higher niceness so they yield CPU to GCS, the raylet, and other user processes.

I ran dataset_shuffle_push_based_random_shuffle_1tb a few times and no longer see raylet deaths due to GCS CPU starvation. There are other issues making the test fail, which I will continue to investigate.
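A sketch of the idea, not Ray's actual worker-startup code: launch a worker process with a higher niceness so it yields CPU to GCS and the raylet. The niceness value 15 is illustrative.

```python
import os
import subprocess
import sys

def start_worker(cmd):
    return subprocess.Popen(
        cmd,
        preexec_fn=lambda: os.nice(15),  # runs in the child before exec (POSIX only)
    )

proc = start_worker([sys.executable, "-c", "print('worker running')"])
proc.wait()
```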
2022-05-23 08:12:51 -07:00
Guyang Song
c6edfdd2a0
[script] expose options of xxx_port in 'ray start' command (#24919) 2022-05-23 17:18:09 +08:00
Eric Liang
d57cdd19ac
[tune] Fix stray extra log from runtime_env setup (#25071)
commit 40774ac219
Author: Qing Wang <kingchin1218@gmail.com>
Date:   Tue May 17 11:33:59 2022 +0800

    Minor changes for Java runtime env. (#24840)

The above commit introduced an extra log message that spams stdout when running with Tune. Move this log line to debug level and add an e2e test check.
2022-05-23 09:54:24 +01:00
Sven Mika
09886d7ab8
[RLlib] Upgrade gym 0.23 (#24171) 2022-05-23 08:18:44 +02:00
Jialing He
c03d0432f3
[core] Fix Object Manager owner address after AssignOwner (#25021)
When assigning an owner for an object (different from the current worker), such as:
```python
ray.put(value, _owner=ACTORHANDLE)
```
The Object Manager holds the wrong owner's address and updates location info to the wrong worker, making `ray.get` slow. The current master hits a timeout in the new test case.
2022-05-23 13:34:40 +08:00
Eric Liang
55d039af32
Annotate datasources and add API annotation check script (#24999)
Why are these changes needed?
Add API stability annotations for datasource classes, and add a linter to check all data classes have appropriate annotations.
2022-05-21 15:05:07 -07:00
Kai Fricke
d57ba750f5
[docs/air] Move upload example to docs (#25022) 2022-05-21 12:16:33 -07:00
Yi Cheng
e3f854e34d
[flakey] Disable redis tests for test_plugin_timeout shortly. (#25045)
This test is not running well in Redis mode. Given that the other tests are OK, I'd like to disable only this one instead of reverting the whole commit, to make sure the other tests don't regress.

`linux://python/ray/tests:test_runtime_env_plugin::test_plugin_timeout`
2022-05-20 17:31:46 -07:00
ZhuSenlin
bed660b085
[Core] Lazy subscribe to actor's state (#24600)
Currently, the Actor Manager subscribes to actor state eagerly: when worker A passes a List<ActorHandle> as an input parameter to another worker B, worker B immediately subscribes to the status of all actors in the list at construction time, even if it has not yet used these actors.

Assuming a graph job has 1000 actors, and each actor holds a list of the graph's actor handles, the job has nearly 1,000,000 subscription relationships. When the job goes offline, the 1000 actor processes are killed; the redis-server instantly receives disconnect events from all 1000 processes, and each event triggers 1000 unsubscribe operations in freeClient, causing the redis-server to get stuck.

We suggest changing this eager mode to lazy mode, initiating the subscription only in `SubmitActorTask`, which removes many unnecessary subscription relationships.
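An illustrative sketch of eager vs. lazy subscription; these classes are stand-ins, not Ray's actual ActorManager internals.

```python
class ActorManager:
    def __init__(self, gcs_client):
        self._gcs = gcs_client
        self._subscribed = set()

    def add_actor_handle(self, actor_id):
        # Eager mode would call self._gcs.subscribe_actor(actor_id) right here,
        # as soon as the handle is deserialized, even if it is never used.
        pass

    def submit_actor_task(self, actor_id, task):
        # Lazy mode: subscribe to the actor's state only the first time we
        # actually submit a task to it.
        if actor_id not in self._subscribed:
            self._gcs.subscribe_actor(actor_id)
            self._subscribed.add(actor_id)
        return self._gcs.submit(actor_id, task)
```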

The microbenchmark (left: this PR, right: master branch):
![image](https://user-images.githubusercontent.com/2016670/168011321-b64b06a2-20bd-4b35-aa69-0b84e7f4c12e.png)
2022-05-20 15:35:48 -07:00
Chen Shen
8960afa69c
[Core][Python 3.10] fix get_module in the interactive mode. #25032
Python 3.10 fixed a bug where an interactively defined class was tagged with the wrong type during inspection; inspection now throws OSError instead. See python/cpython#27171 for details.

We need to handle this case properly, otherwise Ray actor definitions will throw in interactive mode. Please refer to #25026 for a repro.
2022-05-20 12:58:15 -07:00
Clark Zinzow
9ea5a8ec4b
Revert "Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets."" (#25031)
Fixes the check ingest utility to handle non-Pandas native batches.
2022-05-20 11:47:29 -07:00
Clark Zinzow
b52f225a4e
[Datasets] Skip flaky pipelining memory release test (#25009)
This pipelining memory release test is flaky; it was skipped in this Polars PR, which was then reverted.
2022-05-20 11:14:22 -07:00
mwtian
916c6796da
Revert "[core] Fix bugs in data locality (#24698)" (#25035)
This reverts commit eaec96d175.
2022-05-20 10:57:25 -07:00
Antoni Baum
a357b7cf95
[tune] File lock for syncing (#24978)
Adds file locking to prevent parallel file-system operations in Tune/AIR syncing functions.
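A sketch of the pattern, assuming the third-party `filelock` package; the lock path, directories, and `sync_dir` helper are illustrative, not Tune's actual sync code.

```python
import shutil
from filelock import FileLock

def sync_dir(src: str, dst: str) -> None:
    # Placeholder for the real sync logic.
    shutil.copytree(src, dst, dirs_exist_ok=True)

# The on-disk lock serializes concurrent file-system operations on the same target.
with FileLock("/tmp/ray_tune_sync.lock"):
    sync_dir("/tmp/trial_0001", "/mnt/shared/trial_0001")
```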

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-05-20 17:11:14 +01:00
Kai Fricke
f215c8c988
[tune] Move wandb logging directory into trial logdir (#25020)
Weights & Biases creates a `wandb` directory to collect intermediate logs and artifacts before uploading them. This directory should live inside the respective trial directories. This also means we can re-enable auto-resuming.
2022-05-20 17:02:42 +01:00
Kai Fricke
fbfb134b8c
Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets. (#24812)" (#25017)
This reverts commit 841f7c81ff.

Reverts #24812

Broke e.g. ML tests: https://buildkite.com/ray-project/ray-builders-branch/builds/7667#55e7473e-f6a8-4d72-a875-cd68acf8b0c4
2022-05-20 15:37:40 +01:00
Kai Fricke
e76efffec6
[air/docs] Move RL examples to docs (#24962)
Following #24959, this PR moves the RL examples (online/offline/serving) into the Ray ML docs. It also splits the online and offline parts.
2022-05-20 14:55:01 +01:00
Jim Thompson
a2c8fe2101
[tune] FIX: Failure in create_scheduler() with pb2 scheduler (#24897)
When `create_scheduler("pb2", ...)` is run, a `TuneError` exception is raised. See the referenced issue below for details.
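For reference, a minimal call of the kind the fix targets; the pb2 arguments here are illustrative values, not taken from the PR.

```python
from ray import tune

scheduler = tune.create_scheduler(
    "pb2",
    perturbation_interval=4,
    hyperparam_bounds={"lr": [1e-4, 1e-1]},
)
```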

In addition to the fix, introduced a new test (`ray/tune/tests/test_api.py::ShimCreationTest.testCreateAllSchedulers`) to confirm that `tune.create_scheduler()` will work with all documented schedulers.  

Note: `testCreateAllSchedulers` is a superset of what is covered in `testCreateScheduler`.  It may be reasonable to retire the latter test.
2022-05-20 12:47:38 +01:00
Guyang Song
99d25d4d4e
[Doc] Fix ray core doc (#25006) 2022-05-20 14:51:59 +08:00
Clark Zinzow
841f7c81ff
[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets. (#24812)
This PR makes several improvements to the Datasets' tensor story. See the issues for each item for more details.

- Automatically infer tensor blocks (single-column tables representing a single tensor) when returning NumPy ndarrays from map_batches(), map(), and flat_map().
- Automatically infer tensor columns when building tabular blocks in general.
- Fixes shuffling and sorting for tensor columns

This should improve the UX/efficiency of the following:

- Working with pure-tensor datasets in general.
- Mapping tensor UDFs over pure-tensor datasets, providing a better foundation for tensor-native preprocessing for end users and AIR.
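A rough sketch of the UX this enables, assuming the map_batches API of this era (shapes and sizes are illustrative, not taken from the PR): returning NumPy ndarrays from map_batches() now yields tensor blocks without manual wrapping.

```python
import numpy as np
import ray

ds = ray.data.range(8)
# Each batch maps to an ndarray; the resulting dataset is inferred as a tensor dataset.
tensor_ds = ds.map_batches(lambda batch: np.ones((len(batch), 2, 2)))
print(tensor_ds.take(1))
```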
2022-05-19 22:40:04 -07:00
Yi Cheng
8ec558dcb9
[core] Reenable GCS test with redis as backend. (#23506)
Since ray supports Redis as a storage backend, we should ensure the code path with Redis as storage is still being covered e2e.

These tests haven't been running since we switched to memory mode by default. This PR fixes that and makes them run with every commit.

In the future, if we support more and more storage backends, this should be revised to be more efficient and selective. But for now I think the cost is OK.

This PR is part of GCS HA testing-related work.
2022-05-19 21:46:55 -07:00
Jian Xiao
401db466bb
Revamp the Datasets API docstrings (#24949) 2022-05-19 20:26:39 -07:00
Guyang Song
eb2692cb32
[runtime env] runtime env inheritance refactor (#24538)
* [runtime env] runtime env inheritance refactor (#22244)

Runtime Environments are already GA in Ray 1.6.0. The latest doc is [here](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#runtime-environments). We already support an [inheritance](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#inheritance) behavior as follows (copied from the doc):
- The runtime_env["env_vars"] field will be merged with the runtime_env["env_vars"] field of the parent. This allows for environment variables set in the parent’s runtime environment to be automatically propagated to the child, even if new environment variables are set in the child’s runtime environment.
- Every other field in the runtime_env will be overridden by the child, not merged. For example, if runtime_env["py_modules"] is specified, it will replace the runtime_env["py_modules"] field of the parent.

We think this runtime env merging logic is complex and confusing to users, because they can't know the final runtime env before the job runs.

This PR refactors and changes the behavior of runtime environment inheritance. Here is the new behavior:
- **If there is no runtime env option when we create actor, inherit the parent runtime env.**
- **Otherwise, use the optional runtime env directly and don't do the merging.**

Add a new API, `ray.runtime_env.get_current_runtime_env()`, to get the parent runtime env so you can modify the dict yourself, e.g.:
```python
runtime_env = ray.runtime_env.get_current_runtime_env()
runtime_env.update({"X": "Y"})
Actor.options(runtime_env=runtime_env)
```
This new API can also be used in Ray Client.
2022-05-20 10:53:54 +08:00
SangBin Cho
d89c8aa9f9
[Core] Add more accurate worker exit (#24468)
This PR adds precise reason details regarding worker failures. All information is available via either:
- `ray list workers`
- exceptions from actor failures.

Here's an example when the actor is killed by a SIGKILL (e.g., OOM killer)
```
RayActorError: The actor died unexpectedly before finishing this task.
	class_name: G
	actor_id: e818d2f0521a334daf03540701000000
	pid: 61251
	namespace: 674a49b2-5b9b-4fcc-b6e1-5a1d4b9400d2
	ip: 127.0.0.1
The actor is dead because its worker process has died. Worker exit type: UNEXPECTED_SYSTEM_EXIT Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
```

## Design
Worker failures are reported by Raylet from 2 paths.
(1) When the core worker calls `Disconnect`.
(2) When the worker is unexpectedly killed, the socket is closed, raylet reports the worker failures.

The PR ensures all worker failures are reported through Disconnect while it includes more detailed information to its metadata.

## Exit types
Previously, the worker exit types are arbitrary and not correctly categorized. This PR reduces the number of worker exit types while it includes details of each exit type so that users can easily figure out the root cause of worker crashes. 

### Status quo
- SYSTEM ERROR EXIT
    - Failure from the connection (core worker dead)
    - Unexpected exception or exit with exit_code !=0 on core worker
    - Direct call failure
- INTENDED EXIT
    - Shutdown driver
    - Exit_actor
    - exit(0)
    - Actor kill request
    - Task cancel request
- UNUSED_RESOURCE_REMOVED
     - Upon GCS restart, it kills bundles that are not registered to GCS to synchronize the state
- PG_REMOVED
    - When pg is removed, all workers fate share
- CREATION_TASK (INIT ERROR)
    - When actor init has an error
- IDLE
    - When worker is idle and num workers > soft limit (by default num cpus)
- NODE DIED
    - Only can detect when the node of the owner is dead (need improvement)

### New proposal
Remove unnecessary states and add a “details” field. We can categorize failures into 4 types:

- UNEXPECTED_SYSTEM_ERROR_EXIT
     - When the worker is crashed unexpectedly
    - Failure from the connection (core worker dead)
    - Unexpected exception or exit with exit_code !=0 on core worker
    - Node died
    - Direct call failure
- INTENDED_USER_EXIT. 
    - When the worker is requested to be killed by users. No workflow required. Just correctly store the state.
    - Shutdown driver
    - Exit_actor
    - exit(0)
    - Actor kill request
    - Task cancel request
- INTENDED_SYSTEM_EXIT
    - When the worker is requested to be killed by system (without explicit user request)
    - Unused resource removed
    - Pg removed
    - Idle
- ACTOR_INIT_FAILURE (CREATION_TASK_FAILED)
     - When actor init fails, we fate-share the process with the actor.
     - Actor init failed

## Limitation (Follow up)
Worker failures are not reported under the following circumstances:
- The worker fails before it registers its information with GCS (usually due to a critical system bug; extremely uncommon).
- The node fails. In this case, we should track the Node ID -> Worker ID mapping in GCS and record worker metadata when the node fails.

I will create issues to track these problems.
2022-05-19 19:48:52 -07:00