Commit graph

11925 commits

Author SHA1 Message Date
Jiao
d7e77fc9c5
[DAG] Serve Deployment Graph documentation and tutorial. (#23512) 2022-03-30 17:32:16 -07:00
Yi Cheng
31483a003a
[syncer] skip ray_syncer_test on windows temporarily (#23610)
ray_syncer_test is flaky on Windows, and it's not easy to investigate what's happening there; the test somehow times out.
We are disabling it for a short time.
2022-03-30 17:29:08 -07:00
Yi Cheng
d01f947ff1
[gcs] Make core worker test compilable. (#23608)
It seems the core worker test is not running, and it breaks the build. This PR fixes that.
2022-03-30 17:26:38 -07:00
Chen Shen
944e8e1053
Revert "[Python Worker] load actor dependency without importer thread (#23383)" (#23601)
This reverts commit d1ef498638.
2022-03-30 15:45:00 -07:00
Chen Shen
3e80da7e9f
[ci/release] long running / change failed test to sdk (#23602)
Closes #23592. After talking with @krfricke, he suggested we move to using the SDK for those long-running tests.
2022-03-30 12:57:21 -07:00
Kai Fricke
3b3f271c3c
[tune] Fix tensorflow distributed trainable docstring (#23590)
The current docstring was copied from Horovod and still refers to it.
2022-03-30 11:36:45 -07:00
Jiajun Yao
2959294f02
[CI] Filter release tests by attr regex (#23485)
Support filtering release tests by test attribute regex filters. Multiple filters can be specified, one per line, in the format attr:regex (e.g. team:serve).
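A hedged sketch of how such a filter might be evaluated (the helper name and the shape of the test attributes are illustrative, not the actual release-test tooling):

```python
import re

# Hypothetical helper: a test passes if every "attr:regex" filter line
# matches the corresponding test attribute.
def matches_filters(test_attrs: dict, filter_lines: list) -> bool:
    for line in filter_lines:
        attr, _, regex = line.partition(":")
        if not re.search(regex, str(test_attrs.get(attr, ""))):
            return False
    return True

assert matches_filters({"team": "serve"}, ["team:serve"])
assert not matches_filters({"team": "core"}, ["team:serve"])
```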
2022-03-30 09:41:18 -07:00
jon-chuang
54ddcedd1a
[Core] Chore: move test to right dir #23096 2022-03-30 09:29:38 -07:00
Kai Fricke
e8abffb017
[tune/release] Improve Tune cloud release tests for durable storage (#23277)
This PR addresses recent failures in the tune cloud tests.

In particular, this PR changes the following:

    The trial runner will now wait for potential previous syncs to finish before syncing once more if force=True is supplied. This is to make sure that the final experiment checkpoints exist in the most recent version on remote storage. This likely fixes some flakiness in the tests.
    We switched to new cloud buckets that don't interfere with other tests (and are less likely to be garbage collected).
    We're now using dated subdirectories in the cloud buckets so that we don't interfere if two tests are run in parallel. Objects are cleaned up afterwards. The buckets are configured to remove objects after 30 days.
    Lastly, we fix an issue in the cloud tests where the RELEASE_TEST_OUTPUT file was unavailable when run in Ray client mode (as e.g. in kubernetes).

Local release test runs succeeded.

https://buildkite.com/ray-project/release-tests-branch/builds/189
https://buildkite.com/ray-project/release-tests-branch/builds/191
2022-03-30 09:28:33 -07:00
Kai Fricke
b0fc631dea
[docs/tune] Fix PTL multi GPU link (#23589)
The link is broken in the current docs.
2022-03-30 09:24:48 -07:00
Yi Cheng
781c46ae44
[scheduling][5] Refactor resource syncer. (#23270)
## Why are these changes needed?

This PR refactors the resource syncer to decouple it from GCS and the raylet. GCS and the raylet will use the same module to sync data. The integration will happen in the next PR.

There are several newly introduced components:

* RaySyncer: the place where remote and local information sits. It's a coordinator layer.
* NodeState: keeps track of the local status, similar to NodeSyncConnection.
* NodeSyncConnection: keeps track of sent and received information and makes sure we don't resend information the remote node already knows.

The core protocol is that each node sends {what it has} - {what the target has} to the target.
For example, consider nodes A <-> B: A sends everything A has, excluding what B already has, to B. (A minimal sketch of this delta rule appears after the walkthrough below.)

Whenever there is new information (from NodeState or NodeSyncConnection), it is passed to RaySyncer to broadcast.

NodeSyncConnection is the communication layer. It has two implementations, Client and Server:

* Server => Client: the client sends a long-polling request, and the server responds every 100ms if there is data to send.
* Client => Server: the client checks every 100ms whether there is new data to send. If there is, it uses an RPC call to send the data.

Here is one example:

```mermaid
flowchart LR;
    A-->B;
    B-->C;
    B-->D;
```

This means A initializes the connection to B, and B initializes the connections to C and D.

Now suppose C generates a message M:

1. [C] RaySyncer checks whether a new message has been generated in C and gets M.
2. [C] RaySyncer pushes M to the NodeSyncConnection for the local component (B).
3. [C] ServerSyncConnection waits until B sends a long-polling request, then sends the data to B.
4. [B] B receives the message from C and pushes it to its local sync connections (C, A, D).
5. [B] The ClientSyncConnection for C does not push it to its local queue, since the message was received on this channel.
6. [B] The ClientSyncConnection for D sends this message to D.
7. [B] The ServerSyncConnection for A is used to send this message to A (long-polling here).
8. [B] B updates NodeState (the local component) with this message M.
9. [D] D's pipeline is similar to 5) (with ServerSyncConnection) and 8).
10. [A] A's pipeline is similar to 5) and 8).
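A minimal Python sketch of the delta rule above (illustrative only; the actual syncer is implemented in C++ and tracks per-node state internally):

```python
# Each node keeps a version per known node; it sends only entries the
# target has not seen yet, i.e. {what it has} - {what the target has}.
def messages_to_send(local_versions: dict, known_by_target: dict) -> dict:
    return {
        node_id: version
        for node_id, version in local_versions.items()
        if known_by_target.get(node_id, -1) < version
    }

# A knows {A: 3, C: 5}; B already knows {A: 3, C: 4}, so only C's update is sent.
assert messages_to_send({"A": 3, "C": 5}, {"A": 3, "C": 4}) == {"C": 5}
```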
2022-03-29 23:52:39 -07:00
Eric Liang
5aead0bb91
Warn if the dataset's parallelism is limited by the number of files (#23573)
A common user confusion is that their dataset parallelism is limited by the number of files. Add a warning if the available parallelism is much less than the specified parallelism, and tell the user to repartition() in that case.
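For example (a minimal sketch; the bucket path is a placeholder):

```python
import ray

# If the source contains fewer files than the requested parallelism, the
# effective parallelism is capped at the file count, which now triggers
# the warning.
ds = ray.data.read_parquet("s3://my-bucket/data", parallelism=200)

# Following the warning's advice: split the blocks after reading.
ds = ds.repartition(200)
```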
2022-03-29 18:54:54 -07:00
xwjiang2010
6443f3da84
[air] Add horovod trainer (#23437) 2022-03-29 18:12:32 -07:00
Matti Picus
e58b784ac7
WINDOWS: fix pip activate command (#23556)
Continuation of #22449 

Fix pip activation so something like this will not crash
```
ray.init(runtime_env={"pip": ["toolz", "requests"]})
```

Also enable a test that hits this code path.
2022-03-29 17:51:20 -07:00
Amog Kamsetty
0b8c21922b
[Train] Improvements to fault tolerance (#22511)
Various improvements to Ray Train fault tolerance.

Add more log statements for better debugging of Ray Train failure handling.
Fixes [Bug] [Train] Cannot reproduce fault-tolerance, script hangs upon any node shutdown #22349.
Simplifies fault tolerance by removing backend-specific handle_failure. If any workers have failed, all workers will be restarted and training will continue from the last checkpoint.
Also adds a test for fault tolerance with an actual torch example. When testing locally, the test hangs before the fix, but passes after.
2022-03-29 15:36:46 -07:00
Stephanie Wang
da7901f3fc
[core] Filter out self node from the list of object locations during pull (#23539)
Running Datasets shuffle with 1TB data and 2k partitions sometimes times out due to a failed object fetch. This happens because the object directory notifies the PullManager that the object is already on the local node, even though it isn't. This seems to be a bug in the object directory.

To work around this on the PullManager side, this PR filters out the current node from the list of locations provided by the object directory. @jjyao confirmed that this fixes the issue for Datasets shuffle.
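The workaround amounts to the following (an illustrative Python sketch; the real fix lives in the C++ PullManager):

```python
# Drop the current node from the location list reported by the object
# directory before choosing where to pull the object from.
def filter_locations(locations: list, self_node_id: str) -> list:
    return [node_id for node_id in locations if node_id != self_node_id]

assert filter_locations(["node_a", "node_b", "node_c"], "node_b") == ["node_a", "node_c"]
```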
2022-03-29 15:18:14 -07:00
Kai Fricke
922367d158
[ci/release] Fix smoke test compute templates (#23561)
The smoke test definitions of a few tests were faulty with respect to the compute template override.

Core tests @rkooo567: https://buildkite.com/ray-project/release-tests-branch/builds/294
2022-03-29 13:48:09 -07:00
Gagandeep Singh
3856011267
[Serve] [Test] Bytecode check to verify imported function correctness in test_pipeline_driver::test_loading_check (#23552) 2022-03-29 13:18:25 -07:00
Linsong Chu
2a6cbc5202
[workflow]align the behavior of workflow's options() with remote function's options() (#23469)
The current behavior of workflow's `.options()` is to **completely rewrite all the options** rather than **update the given options**, which is less intuitive and inconsistent with the behavior of `.options()` in remote functions.

For example:
```
# Remote function
@ray.remote(num_cpus=2, max_retries=2)
def f():
    ...

f.options(num_cpus=1)
```
`options()` here **updates** num_cpus while **the rest of the options are untouched**, i.e. max_retries is still 2. This is the expected behavior and more intuitive.

```
# Workflow step
@workflow.step(num_cpus=2, max_retries=2)
def f():
    ...

f.options(num_cpus=1)
```
`options()` here **completely drops all existing options** and only sets num_cpus, i.e. the previous value of max_retries (2) is dropped and reverted to the default (3). This will also drop other fields like `name` and `metadata` if they are given in the decorator but not in `options()`.
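The difference between the two semantics can be sketched with plain dicts (illustrative only):

```python
defaults = {"num_cpus": 2, "max_retries": 2}

# Remote-function style (update): unspecified keys keep their old values.
updated = {**defaults, "num_cpus": 1}
assert updated == {"num_cpus": 1, "max_retries": 2}

# Old workflow style (rewrite): unspecified keys silently fall back to
# the global defaults (e.g. max_retries reverts to 3).
rewritten = {"num_cpus": 1}
```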
2022-03-29 12:35:04 -07:00
Yi Cheng
61c9186b59
[2][cleanup][gcs] Cleanup GCS client options. (#23519)
This PR cleans up the GCS client options.
2022-03-29 12:01:58 -07:00
Simon Mo
cb1919b8d0
[Doc][Serve] Add minimal docs for model wrappers and http adapters (#23536) 2022-03-29 11:33:14 -07:00
Kai Fricke
afd287eb93
[ci] linkcheck should soft fail (#23559)
Linkcheck failures should not break the build.
2022-03-29 10:57:03 -07:00
Philipp Moritz
005ea36850
[linkcheck] Remove flaky url (#23549) 2022-03-29 08:36:54 -07:00
Artur Niederfahrenhorst
9a64bd4e9b
[RLlib] Simple-Q uses training iteration fn (instead of execution_plan); ReplayBuffer API for Simple-Q (#22842) 2022-03-29 14:44:40 +02:00
Jun Gong
a7e5aa8c6a
[RLlib] Delete some unused confusing logics. (#23513) 2022-03-29 13:45:13 +02:00
Hao Chen
b7d32df8b0
Refactor scheduler data structures (#22854)
This is the first PR to refactor the scheduler data structures (see #22850).

Major changes:
- Hid the implementation details in the `ResourceRequest` and `TaskResourceInstances` classes, which expose public methods such as algebra operators and comparison operators (see the sketch after this list).
- Hid the differences between "predefined" and "custom" resources inside these 2 classes. Call sites can simply use the resource ID to access a resource, whether it is predefined or custom.
- The predefined_resources vector now always has the full length, so no more resizing is needed.
- Removed the `ResourceCapacity` class. Now "total" and "available" resources are stored in separate fields in `NodeResources`.
- Moved helper functions for FixedPoint vectors from "cluster_resource_data.h" to "fixed_point.h".
- `ResourceID` now has static methods to get the resource IDs of predefined resources, e.g. `ResourceID::CPU()`.
- Encapsulated unit-instance resource logic in `ResourceID`.
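As an illustration of the algebra/comparison-operator idea, here is a Python sketch (the real `ResourceRequest` and related classes are C++; the names and details here are illustrative):

```python
class ResourceSetSketch:
    """Toy resource set keyed by resource ID, supporting + and <=."""

    def __init__(self, resources: dict):
        self._resources = dict(resources)

    def __add__(self, other):
        merged = dict(self._resources)
        for rid, amount in other._resources.items():
            merged[rid] = merged.get(rid, 0) + amount
        return ResourceSetSketch(merged)

    def __le__(self, other):
        # A request fits if every demanded resource is available.
        return all(
            amount <= other._resources.get(rid, 0)
            for rid, amount in self._resources.items()
        )

request = ResourceSetSketch({"CPU": 2, "GPU": 1})
available = ResourceSetSketch({"CPU": 8, "GPU": 1})
assert request <= available
```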

Other planned changes that are not included in this PR:
- Rename ResourceRequest to ResourceSet, and move it to its own file.
- Remove the predefined vectors and always use maps.

Co-authored-by: Chong-Li <lc300133@antgroup.com>
2022-03-29 19:44:59 +08:00
Artur Niederfahrenhorst
32ad6c6ef1
[RLlib] Replay Buffer capacity check (#23523) 2022-03-29 12:06:27 +02:00
Eric Liang
990b0ec934
Move linkcheck into a separate CI build
Why are these changes needed?
Linkcheck is inherently flaky, so separate it from the normal LINT build, which is never flaky. This also separates the verbose linkcheck logs, making it easier to read the LINT output.
2022-03-29 01:08:53 -07:00
Matti Picus
0cb2847e2f
WINDOWS: make default node startup timeout longer (#23462)
Timeouts when starting nodes rank high on https://flakey-tests.ray.io/#owner=core. The default timeout should be longer on windows. For instance, [here](https://buildkite.com/ray-project/ray-builders-branch/builds/6720#d4cf497e-13d5-4b6b-9354-2dd8828bd0e7/2835-3259) is one such error.
2022-03-29 01:01:43 -07:00
Matti Picus
84026ef55d
WINDOWS: make timeout longer in test_metrics (#23461)
`test_metrics` ranks quite high on https://flakey-tests.ray.io/#owner=core. This test often hits the timeout limit; making it larger should help the test pass.
2022-03-29 00:59:13 -07:00
Matti Picus
77c4c1e48e
WINDOWS: enable and fix failures in test_runtime_env_complicated (#22449) 2022-03-29 00:56:42 -07:00
Chen Shen
44114c8422
[CI] pin click version to fix broken test. #23544 2022-03-29 00:44:48 -07:00
Chen Shen
1d0fe1e1c3
[doc/linter] fix broken deepmind link #23542 2022-03-28 22:35:53 -07:00
Yi Cheng
7de751dbab
[1][core][cleanup] remove enable gcs bootstrap in cpp. (#23518)
This PR removes the enable_gcs_bootstrap flag in C++.
2022-03-28 21:37:24 -07:00
Jian Xiao
cc0db8b92a
Fix Dataset zip for pandas (#23532)
Dataset zip does not work for Pandas.
2022-03-28 17:58:31 -07:00
Kai Fricke
62414525f9
[tune] Optuna should ignore additional results after trial termination (#23495)
In rare cases (#19274) (and possibly in old versions of Ray), buffered results can lead to on_trial_complete being called multiple times with the same trial ID. Optuna should handle this gracefully and discard the duplicate results.
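The guard can be sketched as follows (illustrative only, not the actual Tune/Optuna integration code):

```python
# Wrap a searcher so repeated completions for the same trial ID are discarded.
class DedupingSearcher:
    def __init__(self, searcher):
        self._searcher = searcher
        self._finished = set()

    def on_trial_complete(self, trial_id, result=None, error=False):
        if trial_id in self._finished:
            return  # duplicate buffered result: ignore
        self._finished.add(trial_id)
        self._searcher.on_trial_complete(trial_id, result=result, error=error)
```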
2022-03-28 20:07:41 +01:00
Kai Fricke
262d6121bb
[rllib] Fix error messages and example for dataset writer (#23419)
Currently, the error message and example refer to a field `type` that is actually called `format`.
2022-03-28 19:53:12 +01:00
Chen Shen
51bdefc2c8
[scheduler][monitoring] dump detailed spilling metrics (#23321)
Dump detailed spilling metrics in the scheduler.
2022-03-28 10:49:04 -07:00
shrekris-anyscale
aae144d7f9
[serve] Make Serve CLI and serve.build() non-public (#23504)
This change makes `serve.build()` non-public and hides the following Serve CLI commands:
* `deploy`
* `config`
* `delete`
* `build`
2022-03-28 10:40:57 -07:00
Kai Fricke
1465eaa306
[tune] Use new Checkpoint interface internally (#22801)
Follow up from #22741, also use the new checkpoint interface internally. This PR is low friction and just replaces some internal bookkeeping methods.

With the new Checkpoint interface, there is no need to revamp the save/restore APIs completely. Instead, we will focus on the bookkeeping part, which takes place in the Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
2022-03-28 18:33:40 +01:00
Chen Shen
c3e04ab275
[nighly-test] try out spot instances for chaos test #23507 2022-03-27 20:10:21 -07:00
mwtian
d1ef498638
[Python Worker] load actor dependency without importer thread (#23383)
Import an actor dependency when it is not found, so actor dependencies can be imported without the importer thread.

Remaining blockers to removing the importer thread are supporting running a function on all workers (`run_function_on_all_workers()`) and raising a warning when the same function / class is exported too many times.
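The on-demand import can be sketched like this (a hypothetical helper, not the actual worker code):

```python
import importlib
import pickle

def load_actor_class(serialized_cls: bytes, module_name: str):
    """Deserialize an actor class, importing its module on demand instead
    of relying on a background importer thread to have pre-imported it."""
    try:
        return pickle.loads(serialized_cls)
    except ImportError:
        # Dependency not loaded yet: import it in the current thread, retry.
        importlib.import_module(module_name)
        return pickle.loads(serialized_cls)
```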
2022-03-27 15:09:08 -07:00
Siyuan (Ryans) Zhuang
6b1b25168f
[workflow][doc] Doc for workflow checkpointing (#23510) 2022-03-27 12:18:14 -07:00
shrekris-anyscale
65d72dbd91
[serve] Make serve.shutdown() shut down remote Serve applications (#23476) 2022-03-25 18:27:34 -05:00
Amog Kamsetty
7fd7efc8d9
[AIR] Do not deepcopy RunConfig (#23499)
RunConfig is not a tunable hyperparameter, so we do not need to deep copy it when merging parameters with Ray Tune's param_space.
2022-03-25 13:12:17 -07:00
Edward Oakes
cf7b4e65c2
[serve] Implement serve.build (#23232)
The Serve REST API relies on YAML config files to specify and deploy deployments. This change introduces `serve.build()` and `serve build`, which translate Pipelines to YAML files.

Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
2022-03-25 13:36:59 -05:00
shrekris-anyscale
be216a0e8c
[serve] Raise error in test_local_store_recovery (#23444) 2022-03-25 13:36:51 -05:00
shrekris-anyscale
891301ff54
[serve] [docs] Add tip about serve status (#23481)
The `serve status` command allows users to get their deployments' status info through the CLI. This change adds a tip to the health-checking documentation to inform users about `serve status`.
2022-03-25 13:36:15 -05:00
dependabot[bot]
e69f7f33ee
[tune](deps): Bump optuna in /python/requirements/ml (#19669)
Bumps [optuna](https://github.com/optuna/optuna) from 2.9.1 to 2.10.0.
- [Release notes](https://github.com/optuna/optuna/releases)
- [Commits](https://github.com/optuna/optuna/compare/v2.9.1...v2.10.0)

---
updated-dependencies:
- dependency-name: optuna
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-03-25 17:58:22 +00:00
Sven Mika
7cb86acce2
[RLlib] trainer_template.py: hard deprecation (error when used). (#23488) 2022-03-25 18:25:51 +01:00