Commit graph

13328 commits

Author SHA1 Message Date
Tao Wang
5a0ca8da10
Revert "[Test]Disable java call cpp actor case for now (#26288)" (#26462)
The hang was caused by hidden symbols (see https://github.com/ray-project/ray/issues/26435); let's enable this test again.
2022-07-13 10:42:48 +08:00
Jiajun Yao
75cdbc4d5c
Disable stack trace logging tests for windows (#26488)
Getting a stack trace doesn't work on Windows yet.
2022-07-12 18:54:09 -07:00
Chen Shen
6d6bf20be9
[Core][Data] Fix resend protocol (#26349)
When Ray is under memory pressure, the pull manager might cancel an ongoing pull request and retry it later. There is a race condition where a pull request is initiated and canceled, and a pull request for the same object is retried by the pull manager shortly after. When this happens, the pusher (where the object is being pulled from) ignores the second pull request if it is still sending the object for the first pull request; instead it continues sending only the remaining chunks. This leaves the puller with incomplete data (some chunks had already been received and then canceled), so the puller has to wait out a 10-second timeout and retry the pull request.

To fix the problem, we simply always resend all chunks when a pull request is received. Since chunks are always sent in order, we implement the resend logic by resetting the remaining number of chunks to send and treating the chunks as a ring buffer.
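A minimal sketch of that idea (illustrative Python only; the actual change is in the C++ object manager, and the names here are hypothetical):

```python
from typing import Optional


class ChunkSender:
    """Sketch: resend every chunk whenever a new pull request arrives."""

    def __init__(self, num_chunks: int):
        self.num_chunks = num_chunks
        self.next_chunk = 0   # index of the next chunk to send
        self.remaining = 0    # chunks left to send for the current request

    def on_pull_request(self) -> None:
        # Always resend the full object, even if a previous send is still in flight.
        self.remaining = self.num_chunks

    def send_next_chunk(self) -> Optional[int]:
        if self.remaining == 0:
            return None
        chunk = self.next_chunk
        # Chunks go out in order and wrap around, i.e. a ring buffer.
        self.next_chunk = (self.next_chunk + 1) % self.num_chunks
        self.remaining -= 1
        return chunk
```

If a retried pull request arrives mid-send, `on_pull_request` resets `remaining` and the sender keeps emitting chunks from its current position until it has wrapped around once, so the puller ends up with every chunk.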
2022-07-12 18:46:02 -07:00
Clark Zinzow
12ea100527
Revert "Object GC for block splitting inside the dataset splitting (#26196)" (#26495)
This reverts commit 45ba0e3cac.

Failures involving lost references started popping up in the Train GPU job around the time this PR was merged. There was another ongoing failure that was already reverted and that overlaps this PR, but this PR is the most likely culprit for this particular lost-reference issue, so we should try reverting it.

- Flakey test tracker: https://flakey-tests.ray.io/
- Example failure: https://buildkite.com/ray-project/ray-builders-branch/builds/8585#0181f423-0fe2-42b5-9dd8-47d2c7f9efa7
2022-07-12 18:44:51 -07:00
brucez-anyscale
57258335bd
[Serve] Fix test_cli flakiness (#26471) 2022-07-12 17:57:08 -07:00
Amog Kamsetty
e6c04031fd
Revert "[Train] Add support for handling multiple batch data types for prepare_data_loader (#26386)" (#26483)
This reverts commit 36229d1234.
2022-07-12 17:18:46 -07:00
truelegion47
980a59477d
[Serve] [AIR] Adding reconfigure method to model deployment (#26026) 2022-07-12 17:06:33 -07:00
Pamphile Roy
53ecc28f9f
[docs] Install ray from conda-forge instead of PyPi when using conda (#25296) 2022-07-12 16:59:44 -07:00
Alan Guo
7ad3a247bf
[Dashboard] [Frontend] Add workers to the main node tab in the New Dashboard UI (#26274)
The old dashboard UI made it much easier to see all the work across all workers because workers were shown alongside nodes on the main nodes page. This change brings the same functionality to the new dashboard UI.

Some changes in this PR:

- Factor out the NodeRow into its own component and its own file.
- Introduce a WorkerRow which shows information about a worker.
- Update the heading of the table column because the column shows different data depending on whether it's a node row or a worker row.
- Make sure we're rounding percentages to a single decimal place.
- The Logs button for a worker row goes to the logs page and filters to just the log files related to that worker.
- Update the API for fetching nodes to fetch nodes + workers.
- Fix a bug where object store memory showed the remaining size instead of the total size.
2022-07-12 16:28:08 -07:00
Dmitri Gekhtman
0c1b6df368
Fix redis dependency (#26459)
Fix the specification of the Redis dependency for the Ray image.
2022-07-12 16:07:09 -07:00
Eric Liang
4c04c8d92c
[doc] Rename toc entry for libraries back to "Ray Libraries" (#26485) 2022-07-12 14:23:36 -07:00
Jiajun Yao
53d878804a
[Core] Set c++ terminate handler to print stack trace (#26444) 2022-07-12 13:54:20 -07:00
Jian Xiao
45ba0e3cac
Object GC for block splitting inside the dataset splitting (#26196)
The pipeline will spill objects when splitting the dataset into multiple equal parts.

Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
2022-07-12 11:34:52 -07:00
Philipp Moritz
b155bc4a54
Add ray/widgets/templates/ files to wheel (fix #26452) (#26457)
Add the html template files in `ray/widgets/templates/` to the wheel to make sure the Jupyter widget that is displayed in `ray.init()` works for the Ray wheels.
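For reference, a typical way to include such template files in a wheel is via setuptools `package_data`; this is only an illustrative sketch, not necessarily how Ray's own `setup.py` handles it:

```python
# setup.py (illustrative sketch only; package names are placeholders)
from setuptools import find_packages, setup

setup(
    name="mypackage",
    packages=find_packages(),
    # Ship the HTML templates inside the wheel so they are available at
    # runtime, e.g. for rendering a Jupyter widget.
    package_data={"mypackage.widgets": ["templates/*.html"]},
    include_package_data=True,
)
```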
2022-07-12 11:23:57 -07:00
Rohan Potdar
09ce4711fd
[RLlib]: Move OPE to evaluation config (#25911) 2022-07-12 11:04:34 -07:00
xwjiang2010
03671c961e
[CI] run air related doc/example tests as part of pre-submit CI. (#26466)
Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
2022-07-12 18:30:37 +01:00
Kai Fricke
ae7e30ddc8
[air/lightgbm] Hotfix lightgbm predictor for categoricals (#26467)
#26442 didn't trigger doc tests (fixed with #26466). That PR broke LightGBM prediction with categorical variables; this PR fixes it.

In a follow-up we should specifically test prediction with categorical variables.
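A rough sketch of such a test against plain LightGBM (the real follow-up would exercise the AIR predictor itself):

```python
import lightgbm as lgb
import numpy as np
import pandas as pd


def test_predict_with_categorical_variables():
    # Tiny model with one numeric and one categorical feature.
    df = pd.DataFrame(
        {
            "num": np.random.rand(100),
            "cat": pd.Categorical(np.random.choice(["a", "b", "c"], size=100)),
        }
    )
    labels = np.random.randint(0, 2, size=100)
    booster = lgb.train(
        {"objective": "binary", "verbose": -1},
        lgb.Dataset(df, label=labels, categorical_feature=["cat"]),
    )
    # Prediction must work on a DataFrame that still carries categorical dtypes.
    preds = booster.predict(df)
    assert len(preds) == len(df)
```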

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-12 18:19:58 +01:00
Vishnu Deva
36229d1234
[Train] Add support for handling multiple batch data types for prepare_data_loader (#26386)
When working with Ray Train, using the `ray.train.torch.prepare_data_loader` method with a dataset whose `__getitem__` returns a dictionary instead of a tuple causes issues.
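A small sketch of the kind of dataset this is about; the only assumption is that `prepare_data_loader` is called inside a Train worker as usual:

```python
import torch
from torch.utils.data import DataLoader, Dataset

from ray.train.torch import prepare_data_loader


class DictDataset(Dataset):
    """Each sample is a dict rather than an (x, y) tuple."""

    def __len__(self):
        return 32

    def __getitem__(self, idx):
        return {"x": torch.randn(8), "y": torch.tensor(1.0)}


def train_loop_per_worker(config):
    # prepare_data_loader should now also handle dict batches when it wraps
    # the loader (device placement, distributed sampling).
    loader = prepare_data_loader(DataLoader(DictDataset(), batch_size=4))
    for batch in loader:
        x, y = batch["x"], batch["y"]
        ...
```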

Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2022-07-12 10:16:09 -07:00
Antoni Baum
8bb67427c1
[AIR] Discard returns of train loops in Trainers (#26448)
Discards the return values of user-defined train loop functions to prevent deserialization issues with e.g. torch models. Those return values are not used anywhere in AIR, so there is no loss of functionality.
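In practice that means a train loop like this hedged sketch can safely end with a return statement; the returned model is simply ignored rather than shipped back to the driver:

```python
import torch


def train_loop_per_worker(config):
    model = torch.nn.Linear(4, 1)
    # ... training ...
    # This return value is now discarded by the Trainer instead of being
    # deserialized on the driver (which could fail for torch models).
    return model
```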
2022-07-12 10:14:06 -07:00
Guyang Song
781c2a7834
[runtime env] plugin refactor[3/n]: support strong type by @dataclass (#26296) 2022-07-13 00:40:42 +08:00
Antoni Baum
b3878e26d7
[AIR] Fix ResourceChangingScheduler not working with AIR (#26307)
This PR ensures that the new trial resources set by `ResourceChangingScheduler` are respected by the train loop logic by modifying the scaling config to match. Previously, even though trials had their resources updated, the scaling config was not modified, which led to e.g. new workers not being spawned in the `DataParallelTrainer` even though resources were available.

In order to accomplish this, `ScalingConfigDataClass` is updated to allow equality comparisons with other `ScalingConfigDataClass`es (using the underlying PGF) and to create a `ScalingConfigDataClass` from a PGF.

Please note that this is an internal-only change intended to actually make `ResourceChangingScheduler` work. In the future, `ResourceChangingScheduler` should be updated to operate on `ScalingConfigDataClass`es instead of PGFs as it does now. That will require a deprecation cycle.
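A heavily simplified, hypothetical sketch of the comparison idea: two scaling configs count as equal when they resolve to the same placement-group bundles (the names below are illustrative, not Ray's internal API):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ScalingConfigSketch:
    num_workers: int = 1
    trainer_resources: Dict[str, float] = field(default_factory=lambda: {"CPU": 1})
    resources_per_worker: Dict[str, float] = field(default_factory=lambda: {"CPU": 1})

    def as_bundles(self) -> List[Dict[str, float]]:
        # Same shape a placement group factory would request: one bundle for
        # the trainer plus one bundle per worker.
        return [self.trainer_resources] + [self.resources_per_worker] * self.num_workers

    def __eq__(self, other) -> bool:
        # Equality is defined on the underlying bundles, mirroring the
        # "compare via the PGF" approach described above.
        return isinstance(other, ScalingConfigSketch) and self.as_bundles() == other.as_bundles()
```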
2022-07-12 17:36:42 +01:00
Sihan Wang
f5c5215fe6
[Serve] Add deprecated warnings (#26374) 2022-07-12 09:35:16 -07:00
Guyang Song
22dfd1f1f3
Revert "Revert "[runtime env] plugin refactor[2/n]: support json sche… (#26255) 2022-07-12 23:58:18 +08:00
Kai Fricke
adfdc26dd3
[air] Test predictors with all data batch types (#26442)
This adds a parameterized `test_predict` test for all predictors to test prediction with all data batch types.
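A rough sketch of what such a parameterized test looks like (with a stand-in predictor; the real test runs against the actual AIR predictors):

```python
import numpy as np
import pandas as pd
import pytest


class DummyPredictor:
    """Stand-in predictor that just counts rows, whatever the batch type."""

    def predict(self, batch):
        if isinstance(batch, pd.DataFrame):
            return np.zeros(len(batch))
        if isinstance(batch, dict):
            return np.zeros(len(next(iter(batch.values()))))
        return np.zeros(len(batch))


@pytest.mark.parametrize(
    "batch",
    [
        pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]}),
        np.array([[1.0, 3.0], [2.0, 4.0]]),
        {"a": np.array([1.0, 2.0]), "b": np.array([3.0, 4.0])},
    ],
)
def test_predict(batch):
    predictions = DummyPredictor().predict(batch)
    assert len(predictions) == 2
```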

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-12 13:58:36 +01:00
Tao Wang
f4a602a290
[core][c++ worker]store log dir of driver in internal config (#26354)
Move the update logic for the session dir and log dir into config_internal, making it more concise and consistent with Python/Java.
Store the driver's log dir in config_internal so it can be used later.
2022-07-12 18:44:04 +08:00
Tao Wang
bb6c805bd7
[Java worker][Cpp worker]Support Java call Cpp Task (#26182) 2022-07-12 17:49:22 +08:00
Dmitri Gekhtman
8f8f036957
[autoscaler][kuberay] Deflake KubeRay autoscaling test (#26411)
Improves stability of KubeRay autoscaling test.
2022-07-12 00:56:36 -07:00
Archit Kulkarni
0914e5602d
[Serve] [runtime_env] [CI] Skip flaky Ray Client test (#26400) 2022-07-12 14:39:48 +08:00
Richard Liaw
92efc85b3b
[air/docs] checkpoints (#25901) 2022-07-11 20:40:23 -07:00
Richard Liaw
1abe908c22
[air/docs] improve consistency of getting started (#26247) 2022-07-11 20:16:37 -07:00
Larry
009c65ecb8
Fix C++ hidden symbols causing UT failures and compile errors on Mac (#26438) 2022-07-12 11:00:17 +08:00
Richard Liaw
191921f4ec
[docs] Fix pytest and add stacklevel (#26340) 2022-07-11 19:43:37 -07:00
Tao Wang
1de0d35cda
[core][c++ worker]Add namespace support for c++ worker (#26327) 2022-07-12 09:58:26 +08:00
kourosh hakhamaneshi
be6e4c644f
[RLlib] Feature importance evaluation for offline RL (#26412) 2022-07-11 18:12:50 -07:00
Dmitri Gekhtman
aa182b1941
Add Redis dependency to ray-deps 2022-07-11 17:56:02 -07:00
Antoni Baum
65ea710e30
[Docs] Update Train user guide to use the new APIs (#26091) 2022-07-11 15:10:10 -07:00
Chen Shen
2c5c0f6cee
[Core] ensure uniqueness in spilled file name (#26420)
There are cases where the same object is spilled twice due to failures. This made two spill workers overwrite the same file, causing corruption. The fix is as simple as ensuring the uniqueness of the spilled file name.

Closes #26395
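The gist of the fix, as a hedged sketch (the real change is in the C++ spill workers; the suffixing scheme here is illustrative):

```python
import uuid


def spilled_file_name(object_id: str) -> str:
    # Append a random suffix so two concurrent spills of the same object can
    # never write to the same file and corrupt each other.
    return f"{object_id}-{uuid.uuid4().hex}"
```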
2022-07-11 14:39:44 -07:00
Jiao
d95dc2f2e5
[AIR][GPU Batch Prediction] Add basic support for GPU batch prediction (#26251)
This PR adds GPU support for the PyTorch and TensorFlow predictors, as well as automatically setting the `use_gpu` flag in `BatchPredictor`.

Notable changes:
- Added a `use_gpu` flag in the constructors of `TorchPredictor` and `TensorflowPredictor` (note this is slightly different from our latest design doc, which puts the flag at the `predict()` call)
- Added a `use_gpu` flag to `SklearnPredictor` so its interface is compatible with `BatchPredictor`
- Added code to move both the model weights and the input tensor to the default visible GPU at index 0 if the flag is set
- Parametrized existing predictor tests to use GPU for both CPU & GPU coverage
- Changed BUILD CI tests with an added `gpu` tag (I'm not 100% sure that's the right way, though)

Follow ups:

https://github.com/ray-project/ray/issues/26249 was created for the case where a host has multiple GPU devices. It's a bit out of scope for this PR, but for GPU batch inference we should ideally be able to use all GPU devices on a host evenly while the CPU & DRAM are busy with pre-fetching and data movement to the GPU. We might approximate this by scheduling the same number of Predictor instances on the host, but that's worth verifying once benchmarks are set up.
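A hedged sketch of what the `use_gpu` flag amounts to for the torch predictor (an illustrative class, not the actual AIR implementation):

```python
import torch


class TorchPredictorSketch:
    """Illustrative only: use_gpu moves weights and inputs to GPU index 0."""

    def __init__(self, model: torch.nn.Module, use_gpu: bool = False):
        self.device = torch.device("cuda:0" if use_gpu else "cpu")
        self.model = model.to(self.device).eval()

    def predict(self, batch: torch.Tensor) -> torch.Tensor:
        # Both the model weights and the input tensor live on the default
        # visible GPU (index 0) when the flag is set.
        with torch.no_grad():
            return self.model(batch.to(self.device)).cpu()
```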
2022-07-11 13:04:15 -07:00
Kai Fricke
753f5feaf4
[tune] Remove TrialCheckpoint class (#25406)
The old user-facing TrialCheckpoint class has been deprecated in favor of `ray.ml.Checkpoint` and will be removed with this PR.

The main change in this PR is to delete the old `TrialCheckpoint` class and replace remaining API calls (e.g. `checkpoint.local_path`) with the correct AIR equivalents.

One issue that comes up is that with Ray Client usage, checkpoint directories are not available on the local node (the client). Thus, we can't construct `Checkpoint` objects easily. (Previously, the TrialCheckpoint object held a reference to the location, even if it was not locally available.) There are ongoing discussions on how to resolve this in the future. For now, we print an error when such a checkpoint is requested.
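A hedged before/after sketch of the replacement, assuming the AIR checkpoint's `as_directory()` helper (the import path shown is the later `ray.air` location):

```python
from ray.air.checkpoint import Checkpoint  # `ray.ml.Checkpoint` at the time of this PR


def print_checkpoint_dir(checkpoint: Checkpoint) -> None:
    # Before: trial_checkpoint.local_path returned a path directly, even when
    # the directory wasn't actually present on the client.
    # After: materialize the AIR Checkpoint to a local directory on demand.
    with checkpoint.as_directory() as checkpoint_dir:
        print("checkpoint files are in", checkpoint_dir)
```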

Depends on #25805

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-11 20:08:10 +01:00
Jian Xiao
923209895d
Pipelined training test: change num of windows; log the ingestion perf (#26429)
Why are these changes needed?
- Improve test perf
- Log the perf stats

With 2 windows there is a lot of spilling, slowing down the throughput.
2022-07-11 11:03:35 -07:00
xwjiang2010
c97d65e64f
[tune] fix hebo_example. (#26439)
Fixes a bug in the IPython notebook.
2022-07-11 17:12:10 +01:00
Amog Kamsetty
b01e11d721
[Docker] Add support for Cuda 11.3 (#26233)
Start building Ray Docker images with CUDA 11.3.
2022-07-10 21:50:42 -07:00
Philipp Moritz
dae4ec2f23
Fix dashboard link in HTML reprs for ClientContext and WorkerContext (#26431)
This fixes the dashboard link in https://github.com/ray-project/ray/pull/25730 -- without this I'm getting

<img width="1378" alt="Screen Shot 2022-07-09 at 8 08 06 PM" src="https://user-images.githubusercontent.com/113316/178129698-7ef19ee3-d577-4fd9-a4d5-0cee1ca35f5f.png">

because Jupyter is interpreting the URL as relative to the notebook URL.
2022-07-09 23:30:40 -07:00
Richard Liaw
5892a76a44
[air/tune] Documentation testing fixes (#26409) 2022-07-09 19:47:21 -07:00
Yi Cheng
a68c02a15d
[dashboard][2/2] Add endpoints to dashboard and dashboard_agent for liveness check of raylet and gcs (#26408)
## Why are these changes needed?
In https://github.com/ray-project/ray/pull/26405 we added health checks for GCS and raylets.

This PR exposes them as endpoints on the dashboard and the dashboard agent.

For the dashboard, we added `http://host:port/api/gcs_healthz`, which sends an RPC directly to GCS to check whether GCS is alive.

For the agent, we added `http://host:port/api/local_raylet_healthz`, which sends an RPC to GCS to check whether the local raylet is alive.

We consider the raylet alive if:
- GCS is dead (we can't check in that case), or
- GCS is alive and does not think the raylet is dead.

If GCS is dead for more than X seconds (60 by default), the raylet will crash itself, so KubeRay can still catch it.
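A small sketch of how an external controller such as KubeRay could use these endpoints as liveness probes (host/port values and status handling are assumptions):

```python
import requests


def gcs_healthy(dashboard_host: str, dashboard_port: int = 8265) -> bool:
    # Dashboard endpoint: forwards an RPC to GCS and reports its liveness.
    url = f"http://{dashboard_host}:{dashboard_port}/api/gcs_healthz"
    return requests.get(url, timeout=5).ok


def local_raylet_healthy(agent_host: str, agent_port: int) -> bool:
    # Dashboard-agent endpoint: asks GCS whether the local raylet is alive.
    url = f"http://{agent_host}:{agent_port}/api/local_raylet_healthz"
    return requests.get(url, timeout=5).ok
```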
2022-07-09 13:09:48 -07:00
Philipp Moritz
2f28d05f29
[Doc] Fix docs feedback button (#26402) 2022-07-09 09:35:06 -07:00
Yi Cheng
39cb1e5f97
[core][1/2] Improve liveness check in GCS (#26405)
CheckAlive in GCS currently only checks GCS's own liveness, but we also need to check the liveness of raylets.

In KubeRay, we could check liveness directly by monitoring the raylet process. But that's not good enough, because the raylet process being alive is not the same as the raylet being alive.

For example, during a network partition the raylet is unable to connect to GCS, and GCS marks the raylet as dead. Although the raylet process is still alive, the cluster can't treat it as alive, because GCS has told all the other nodes that it's dead.

So KubeRay also needs to talk to GCS to check whether the raylet is alive.

This PR extends the CheckAlive API to include the raylet address so that we can query GCS to get the cluster status directly.
2022-07-09 16:32:31 +00:00
Jun Gong
0c469e490e
[RLlib] Checkpoint and restore connectors. (#26253) 2022-07-09 01:06:24 -07:00
Siyuan (Ryans) Zhuang
7fcf0adebb
[Workflow] Minor refactoring of workflow exceptions (#26398)
* minor refactoring
2022-07-09 00:46:43 -07:00
Siyuan (Ryans) Zhuang
b0e913fd07
[workflow] Workflow queue (#24697)
* implement workflow queue
2022-07-08 17:24:45 -07:00