Commit graph

6729 commits

Siyuan (Ryans) Zhuang
bf6b7f4395
[Workflow] Simplify recovery algorithm (#24594)
* simplify recovery algorithm
2022-05-09 22:03:31 -07:00
Kai Yang
4a999777fa
[Core] Allow accepting gRPC HTTP proxy via env variable (#23526) 2022-05-10 11:30:46 +08:00
Siyuan (Ryans) Zhuang
6e17c4a2b7
[core] More tests for setting options for Ray libraries (#24591)
* test

* update
2022-05-09 13:18:43 -07:00
Matti Picus
ffb67203e9
debug call_ray_start failure (#24252)
Exploring #24251. The call to the `call_ray_start` fixture seems to be timing out in `test_ray_init`.
2022-05-09 13:28:28 -05:00
Simon Mo
07986349c6
[Serve] Run health check in separate thread (#24560) 2022-05-09 09:59:37 -07:00
Kai Fricke
76da2255d9
[air/rllib] Add RL serving example (#24215)
This PR includes the changes from #24172

This PR adds an end-to-end training and serving example for the RLTrainer/RLPredictor. It also adds an `RLServeEnv` that can be used as an external env for rllib inference, querying the served policy from the RLPredictor.

This draft PR runs end to end, but I'd like to gather some initial feedback before promoting it to a full PR.
2022-05-09 16:44:49 +02:00
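To make the flow concrete, here is a minimal sketch of end-to-end RL training and serving with RLTrainer/RLPredictor; the import paths, argument names, and environment choice are assumptions based on the Ray AIR API of this era, not a verbatim copy of the example in #24215.

```
import numpy as np
from ray.train.rl import RLTrainer, RLPredictor
from ray.air.config import ScalingConfig, RunConfig

# Train a PPO policy on a toy environment (environment choice is illustrative).
trainer = RLTrainer(
    algorithm="PPO",
    config={"env": "CartPole-v1"},
    scaling_config=ScalingConfig(num_workers=2),
    run_config=RunConfig(stop={"training_iteration": 5}),
)
result = trainer.fit()

# Restore a predictor from the checkpoint and query the served policy.
predictor = RLPredictor.from_checkpoint(result.checkpoint)
action = predictor.predict(np.array([[0.0, 0.0, 0.0, 0.0]]))  # one CartPole observation
```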
Artur Niederfahrenhorst
bc8742792c
[Tune] Logging of bad results dict keys (#23954)
A [user complains](https://discuss.ray.io/t/which-attributes-can-be-used-in-checkpoint-score-attr-when-using-tune-run/5826) that the log message emitted when `checkpoint_score_attr` cannot be located in the results dict is not informative.
I propose that we log the actual results dict keys, and extend the stopping criteria logging likewise; imho we should not log the whole result dict, as it might contain tensors.

There may be other similar cases in the Tune library, but I don't know my way around it well enough to find them.
2022-05-09 11:54:11 +02:00
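For context, a minimal sketch of the setting involved: `checkpoint_score_attr` must name a key that actually appears in the reported results dict, and the improved logging would list the available keys when it does not (the trainable and metric below are illustrative only).

```
from ray import tune

def trainable(config):
    for step in range(10):
        tune.report(mean_accuracy=step / 10)  # keys reported here form the results dict

tune.run(
    trainable,
    keep_checkpoints_num=2,
    checkpoint_score_attr="mean_accuracy",  # must match a reported results key
)
```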
Dmitri Gekhtman
e3db45eb86
[hotfix][kuberay][docs] Match up Ray versions in example config (#24580)
This PR fixes a typo in the KubeRay example config in Ray's docs.

Specifics:
Ray versions in the Ray repo's example KubeRay CR were recently updated from 1.11.0 to 1.12.0.
However, the worker group's Ray version was accidentally left at 1.11.0. This leads to alarming crash-looping when deploying the example in the docs.

This PR matches up the Ray images by setting the worker group to rayproject/ray:1.12.0.
2022-05-08 16:01:34 -07:00
Linsong Chu
5964a58d84
[Workflow] Enable auto-config for persistent storage when connecting to existing cluster (#24490)
* fix init() requires hardcoded storage path when connecting to existing cluster

* update tests with new init(storage) behavior

* update tests with latest api behavior
2022-05-08 15:42:29 -07:00
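A hedged sketch of the intended behavior, assuming the `workflow.init()` API of this era: when the driver connects to an existing cluster, persistent storage is auto-configured rather than requiring a hardcoded path.

```
import ray
from ray import workflow

ray.init(address="auto")  # connect to an existing cluster
workflow.init()           # storage auto-configured from the cluster, no hardcoded path
```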
Jiajun Yao
d462172be7
Add doc for actor spread scheduling (#24552)
grant_or_reject for raylet-based actor scheduling was implemented as part of #23829, so spread scheduling now works for actors just like it does for tasks.
2022-05-06 21:36:47 -07:00
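As the doc now describes, actors accept the same "SPREAD" strategy as tasks; a minimal sketch (the node-IP helper is used only for illustration):

```
import ray

@ray.remote(scheduling_strategy="SPREAD")
class Worker:
    def node_ip(self):
        return ray.util.get_node_ip_address()

ray.init()
workers = [Worker.remote() for _ in range(4)]  # best-effort spread across nodes
print(ray.get([w.node_ip.remote() for w in workers]))
```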
Jian Xiao
78cab9f0f1
Test the CSV read with column types specified (#24398)
Make sure users can read CSV files with column types specified.
Users may want to do this because sometimes PyArrow's type inference doesn't work as intended, in which case users can step in and work around the type inference.
2022-05-06 21:29:11 -07:00
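A sketch of what this looks like in user code, assuming a hypothetical file path: PyArrow's `ConvertOptions` is forwarded through `ray.data.read_csv`'s Arrow CSV arguments.

```
import pyarrow as pa
from pyarrow import csv
import ray

ds = ray.data.read_csv(
    "example.csv",  # hypothetical path
    convert_options=csv.ConvertOptions(
        column_types={"id": pa.int64(), "score": pa.float64()}  # override type inference
    ),
)
```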
Simon Mo
95c11c97ef
[Serve] Ensure SimpleSchemaIngress uses FastAPI custom serializers (#24549) 2022-05-06 14:17:36 -07:00
Antoni Baum
668049492c
[Datasets] Add from_huggingface for Hugging Face datasets integration (#24464)
Adds a from_huggingface method to Datasets, which allows the conversion of a Hugging Face Dataset to a Ray Dataset. As a Hugging Face Dataset is backed by an Arrow table, the conversion is trivial.
2022-05-06 13:09:28 -07:00
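A minimal sketch of the new integration; the dataset name is illustrative, and the `datasets` package is required:

```
import datasets
import ray

hf_ds = datasets.load_dataset("imdb", split="train")  # illustrative dataset
ray_ds = ray.data.from_huggingface(hf_ds)             # Arrow-backed, so conversion is cheap
print(ray_ds.count())
```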
Siyuan (Ryans) Zhuang
84ccab2d5f
[workflow] Defining and updating workflow options (#24498)
* implement "options" for workflow

* update tests
2022-05-06 13:08:22 -07:00
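A hedged sketch of the options mechanism, assuming the `@workflow.step` API of this era; the option names shown (`name`, `metadata`) are assumptions:

```
from ray import workflow

@workflow.step
def add(a: int, b: int) -> int:
    return a + b

# Set options on the step, then run the workflow.
result = add.options(name="add_step", metadata={"owner": "demo"}).step(1, 2).run()
```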
Charles Greer
189f7a469b
change docs for ray.remote num_gpus (#24551)
The documentation says that @ray.remote can take fractional num_gpus, which is true, but it lists the argument as an integer. I think this is strictly a problem in the docs.
2022-05-06 11:04:11 -07:00
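For reference, the fractional usage the docs should reflect: `num_gpus` accepts floats, so multiple tasks or actors can share one GPU.

```
import ray

ray.init(num_gpus=1)

@ray.remote(num_gpus=0.25)
def quarter_gpu_task():
    return "reserved a quarter of a GPU"

# Four of these tasks can run concurrently on the single GPU.
print(ray.get([quarter_gpu_task.remote() for _ in range(4)]))
```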
Kai Fricke
5d9bf4234a
[air] Example to track runs with Weights & Biases (#24459)
This PR 
- adds an example of how to run Ray Train and log results to Weights & Biases
- adds functionality to the W&B plugin to store checkpoints
- fixes a bug introduced in #24017
- Adds a CI utility script to setup credentials
- Adds a CI utility script to remove test state from external services cc @simon-mo
2022-05-06 15:52:37 +01:00
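A minimal sketch of logging a run to Weights & Biases with the Tune callback this example builds on; the project name is hypothetical, and W&B credentials are assumed to be configured:

```
from ray import tune
from ray.tune.integration.wandb import WandbLoggerCallback

def train_fn(config):
    for step in range(10):
        tune.report(loss=1.0 / (step + 1))

tune.run(
    train_fn,
    callbacks=[WandbLoggerCallback(project="my-demo-project")],  # hypothetical project
)
```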
Antoni Baum
c5e1851ab9
[Tune] Improve JupyterNotebookReporter (#24444)
Improves Tune Jupyter notebook experience by modifying the `JupyterNotebookReporter` in two ways:
* Previously, the `overwrite` flag controlled whether the entire cell would be overwritten with the updated table. This caused all the other logs to be cleared. Now, we use IPython display handle functionality to create a table at the top of the cell and update only that, preserving the rest of the output. The `overwrite` flag now controls whether the cell output *prior* to the initialization of `JupyterNotebookReporter` is overwritten or not.
* The Ray Client detection was not working unless the user specifically passed a `JupyterNotebookReporter` as the `progress_reporter`. Now, the default value allows for correct detection of the environment while running Ray Client.

Furthermore, the progress reporter detection logic in `rllib/train.py` has been replaced to make use of the `detect_reporter` function for consistency with Tune (the sign in the overwrite condition was similarly flipped).
2022-05-06 11:52:47 +01:00
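For illustration, a sketch of opting into the notebook reporter explicitly; with this change the results table updates in place at the top of the cell instead of clearing all output:

```
from ray import tune
from ray.tune.progress_reporter import JupyterNotebookReporter

def train_fn(config):
    for step in range(10):
        tune.report(score=step)

tune.run(
    train_fn,
    # overwrite=True clears only output produced before the reporter initialized
    progress_reporter=JupyterNotebookReporter(overwrite=True),
)
```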
Siyuan (Ryans) Zhuang
417b72efdc
[workflow] Update workflow docs (#24249)
* update workflow docs

* rename "step" to "task"
2022-05-05 22:22:51 -07:00
Chris K. W
5a7c5ab79c
[client] fix OOM caused by debug log (#24477)
When this line tries to format the task into the string, it also attempts to format all of the serialized arguments passed to the task, which can be memory intensive, even if the debug log never gets displayed. Switch to only logging the task name, type and payload_id.

Repro script if you want to see how big a difference commenting out the debug log makes (takes up about 8GiB swap on my machine):
```
import ray
import numpy as np

ray.init("ray://localhost:10001")

@ray.remote
def run_ray_remote(np_array):
    return np_array.shape

a = np.random.random((1024, 1024, 128))  # approx 1GiB
b = run_ray_remote.remote(a)
c = ray.get(b)
print(c)
```
2022-05-05 16:37:39 -07:00
Simon Mo
a424e91aba
[Serve] Support serializing numpy scalar (#24512) 2022-05-05 10:46:01 -07:00
Siyuan (Ryans) Zhuang
b3c93b91b0
[Serve] Reuse existing validation functions for Ray Serve config & bug fix (#24265)
* set default cpus in ray_actor_options

* remove unnecessary tests

* update message
2022-05-04 23:17:44 -07:00
Siyuan (Ryans) Zhuang
7a48d708d5
[core] Update metadata in options properly (#24458)
* implement proper updating of metadata in options
2022-05-04 23:11:36 -07:00
Frank Luan
af1684af51
[Storage] Fix spill/restore error when using Arrow S3FS (#24196) 2022-05-04 19:06:36 -07:00
mwtian
b02029b29f
[Core] allow using grpcio > 1.44.0 (#23722) 2022-05-04 19:06:11 -07:00
Kai Fricke
b05531177c
[tune/ci] Fix GRPC resource exhausted test for tune trainables (#24467)
#24421 increased the default maximum GRPC limit to 250MB, which broke a Tune test that catches too large training functions.

This PR fixes this test by increasing the size of the experiment. However, please note that this leads to an inconsistency: for training functions of size 100 MB < fn < 250 MB, an error will be raised only at runtime, when trying to start the actor:

```
ValueError: The actor ImplicitFunc is too large (125 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
```

But it will successfully pass the registration stage `self._run_identifier = Experiment.register_if_needed(run)`.

cc @ericl should we set the default limit back to 100 MB (or maybe set the FUNCTION_SIZE_ERROR_THRESHOLD to 250 or whatever the GRPC limit is?)
2022-05-04 18:32:13 +01:00
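A sketch of the failure mode this test exercises and of the tip from the error message: a function that implicitly captures a large array ships that array with its definition, while `ray.put()` keeps the function payload small (array sizes are illustrative).

```
import numpy as np
import ray

ray.init()
big_array = np.zeros((200, 1024, 1024))  # ~1.6 GB if captured by the closure

@ray.remote
def bad_task():
    return big_array.sum()  # implicit capture: the array ships with the function

@ray.remote
def good_task(arr):
    return arr.sum()  # array arrives via the object store instead

ref = ray.put(big_array)  # store once, pass by reference
print(ray.get(good_task.remote(ref)))
```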
Jiajun Yao
6bd65ceb1c
Fix flaky test_locality_aware_leasing_borrowed_objects (#24452)
The test is flaky because we schedule the `g` task without waiting for the `f` task to complete (because `f_obj` is embedded inside a list), so we may not have the locality information for `f_obj` from the owner when the `g` task is scheduled.

Related issue number

Closes #23964
2022-05-04 10:12:31 -07:00
Archit Kulkarni
b79b8340f0
Don't redefine runtime_env in actor, to skip bug (#24448)
test_usage_stats was very flaky due to a runtime env setup error.

The test defined the runtime env {pip: "ray[serve]"} simultaneously in ray.init() and in ray.remote() for the actor. This is redundant but should nevertheless be supported by runtime_env; it turns out, however, to reveal a bug in runtime_env: the env appears to be installed twice concurrently in this situation, causing flakiness.

I'll make a follow-up issue for the runtime env bug with more details and a simpler repro, and link it here. Until then, we should merge this PR to deflake CI. This PR only defines the runtime_env in ray.init(), and removes the redefinition in ray.remote(). The actor will still inherit the correct runtime environment.

I tested manually by inspecting dashboard_agent.log locally. The virtualenv install commands were duplicated about 75% of the time before this PR, indicating the concurrent install. But with this change, the commands were never duplicated in the 7-8 times that I ran it. So this PR should deflake the test.
2022-05-04 09:51:53 -07:00
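A sketch of the pattern the PR standardizes on, with a hypothetical dependency: define `runtime_env` once in `ray.init()` and let the actor inherit it, rather than repeating it in `ray.remote()`.

```
import ray

ray.init(runtime_env={"pip": ["requests"]})  # hypothetical dependency

@ray.remote  # no runtime_env here; the actor inherits the job-level env
class Fetcher:
    def ping(self):
        import requests
        return requests.__name__

f = Fetcher.remote()
print(ray.get(f.ping.remote()))
```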
Simon Mo
21d76c4ca4
[Serve] Add short-hand for pydantic http adapter (#24404) 2022-05-04 09:43:18 -05:00
Antoni Baum
7ea00b282a
[AIR] Allow users to configure verbosity (#24443)
Makes verbosity a configurable parameter in RunConfig.
2022-05-04 15:43:01 +01:00
Kai Fricke
21f8c68c8d
[ci] Try/except pytest makereport (#24462)
kai.fricke@mailbox.org
2022-05-04 14:02:27 +01:00
Alyetama
5f43906b5d
Fix broken documentation URL (#24437)
The link to the documentation for troubleshooting `TypeError: Could not serialize the function ...` is broken. This PR fixes it by replacing it with the correct URL.
2022-05-03 21:44:31 -07:00
Simon Mo
dccea240e8
[Serve] Unify Starlette and FastAPI JSON serialization stack (#24417) 2022-05-03 15:17:42 -07:00
Eric Liang
5bdd9e4be5
[minor] Make the max runtime_env size configurable (#24421) 2022-05-03 11:13:04 -07:00
Kai Fricke
4cec228657
[ci] Print pip environment in failed test annotations (#24427)
For better debugging, we should print the installed pip packages in the buildkite annotations. Additionally, shorten the summary message to make the output less cluttered.
2022-05-03 17:47:02 +01:00
Kai Fricke
c339f19b0f
[tune] Always sync down trial after completion (#24389)
As a follow-up from #12590, we should also always sync down after a trial has terminated, and clean up the trial syncer object after closing.
2022-05-03 15:32:44 +01:00
Linsong Chu
e8fc66af34
[Workflow] Make workflow logs publish to the correct driver. (#24089)
All workflow tasks are executed as remote functions submitted from the WorkflowManagementActor. The WorkflowManagementActor is a detached long-running actor whose owner is the first driver in the cluster to run a workflow execution. Therefore, for new drivers that run workflows, the logs won't be properly published back to the driver, because logs are saved and published based on job_id, and the job_id is always the first driver's job_id, since ownership goes: first_driver -> WorkflowManagementActor -> workflow executions using remote functions.

To solve this, during workflow execution we pass the actual driver's job_id along with the execution and re-configure the logging files on each worker that runs the remote functions. Note that we need to do this in multiple places, as a workflow task is executed by more than one remote function running in different workers.
2022-05-02 19:53:57 -07:00
Antoni Baum
292dcad7dd
[AIR] Improve reporting in HuggingFaceTrainer (#24397)
The previous implementation of the reporting logic in HuggingFaceTrainer had a few edge cases that caused the training iterations and measured epochs to diverge. This new implementation should ensure that reporting is consistent.
2022-05-02 19:46:15 -07:00
Siyuan (Ryans) Zhuang
1282ae15d9
[workflow] Enable workflow storage test with cluster (#24401)
* update
2022-05-02 16:19:50 -07:00
xwjiang2010
3c9e704e83
[tuner] Integrate with serialize_lineage. (#24229)
Also add back the test to tune dataset.
2022-05-02 23:01:49 +01:00
SangBin Cho
2bce07d4ce
[State API] List runtime env API (#24126)
This PR supports the list runtime env API.
2022-05-02 14:01:00 -07:00
Sihan Wang
59debac670
[Serve] Move deployment clean up under serve.run() api (#24306)
At the ServeHead level, we currently talk to the Serve API and the controller to do deployment and cleanup. With this PR, the deployment cleanup logic is hidden inside serve.run() for code cleanliness and easier refactoring in the future.
2022-05-02 12:10:11 -05:00
Dmitri Gekhtman
2aee537f92
[kuberay] Add a test of the Ray Job Submission API to the KubeRay e2e tests. (#24319)
This PR modifies the KubeRay e2e autoscaling test so that one of its scaling commands is sent via the Ray Job Submission API.

This validates that the Ray Job Submission API works with KubeRay and, in particular, that the Ray Dashboard is correctly exposed.
2022-05-02 10:04:16 -07:00
Sven Mika
d4a906e177
Issue 24143: Some f-strings missing f. (#24383) 2022-05-02 17:12:38 +02:00
Adrish Dey
d02b4cb2d6
Adding support to wandb service (#24017)
Updating the W&B Ray Tune integration to new standards. Adds support for the wandb service, soon to be the default way of combining multiprocessing with wandb run logging.

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-05-02 15:47:08 +01:00
Edward Oakes
11954e6798
Issue 24143: Fix a few f-strings missing the f. (#24232) 2022-05-02 16:11:33 +02:00
Chong-Li
f3767131cb
[Enable gcs actor scheduler 1/n] Raylet and GCS schedulers share cluster_task_manager (#23829) 2022-05-02 21:45:23 +08:00
SangBin Cho
6f192b6e17
[Metrics] Allow to completely disable metrics collection (#24333)
This PR allows Ray to disable metrics collection entirely. This was previously possible with RAY_enable_metrics_collection, but it didn't fully disable collection, because metrics collection from the agent wasn't properly disabled. This PR also adds tests.
2022-05-02 05:33:03 -07:00
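A hedged sketch of turning collection off with the variable named above; it must be set before any Ray components start so the agent inherits it.

```
import os

# Must be set before ray.init() so launched components inherit it.
os.environ["RAY_enable_metrics_collection"] = "0"

import ray
ray.init()
```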
Eric Liang
38a46b71de
Add a hook that runs at the beginning of ray start (#24368) 2022-05-01 11:32:33 -07:00
Chris K. W
29ecffe805
[client] set log level to debug for actor errors (#24308)
Users get error messages from client/server on actor failures, even if they already try-except'd the error. For example:

```
import ray
ray.init("ray://localhost:10001")
try:
    ray.get_actor("doesnotexist")
except ValueError:
    pass
```

will still generate the logs `Caught schedule exception` and `Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.` This PR reduces the level of these logs to debug by default.
2022-04-30 21:30:54 -07:00
Philipp Moritz
27917f570d
[runtime_env] Extend runtime_env hook to also cover jobs (#24328)
This extends https://github.com/ray-project/ray/pull/24036 to also cover job submission.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-04-30 09:15:51 -07:00