Commit graph

7218 commits

Author SHA1 Message Date
Nikita Vemuri
3a3e6bb60b
[tune] Add external hooks in WandbLoggerCallback (#26617)
This is an experimental feature, so the following changes are added only to the WandbLoggerCallback. We plan to collect feedback on usage and accordingly update or add these changes to the other W&B integration interfaces.

    Allow reading the W&B project name and group name from environment variable if not already passed to callback
    Add external hooks to fetch W&B API key, and to process any information about W&B run
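
As a rough usage sketch of the first change above (the environment variable names below are assumptions for illustration, not necessarily the ones the PR reads):

```
import os
from ray import tune
from ray.tune.integration.wandb import WandbLoggerCallback

# Assumed variable names, for illustration only.
os.environ["WANDB_PROJECT_NAME"] = "my-project"
os.environ["WANDB_GROUP_NAME"] = "my-group"

def objective(config):
    tune.report(loss=config["lr"])

# With the environment variables set, project/group no longer need to be
# passed to the callback explicitly.
tune.run(
    objective,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    callbacks=[WandbLoggerCallback(api_key_file="~/.wandb_key")],
)
```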


Signed-off-by: Nikita Vemuri <nikitavemuri@gmail.com>
2022-07-16 22:35:53 +01:00
truelegion47
5bd8d121b2
Add type validation for ray.autoscaler.sdk.request_resources() (#26626)
Adds type validation to ray.autoscaler.sdk.request_resources().
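
For reference, a short sketch of the call shapes the validation guards (the exact exception type raised on bad input is an assumption):

```
from ray.autoscaler.sdk import request_resources

# Valid: ask the autoscaler to scale to fit 8 CPUs worth of work.
request_resources(num_cpus=8)

# Valid: ask for capacity to fit three 2-CPU bundles.
request_resources(bundles=[{"CPU": 2}] * 3)

# Invalid: bundle values must be numbers. With this change, a call like the
# commented-out one below fails fast with a validation error (assumed to be a
# TypeError) instead of sending a malformed request to the autoscaler.
# request_resources(bundles=[{"CPU": "two"}])
```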
2022-07-16 12:40:05 -07:00
Eric Liang
605bc29f11
[air/predictors] Allow creating Predictor directly from a UDF (#26603) 2022-07-16 10:48:09 -07:00
Clark Zinzow
fb54679a23
[Datasets] Refactor split_at_indices() to minimize number of split tasks and data movement. (#26363)
The current Dataset.split_at_indices() implementation suffers from O(n^2) memory usage in the small-split case (see issue) due to recursive splitting of the same blocks. This PR implements a split_at_indices() algorithm that minimizes the number of split tasks and data movement while ensuring that at most one block is used in each split task, for the sake of memory stability.
Co-authored-by: scv119 <scv119@gmail.com>
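
For reference, a small usage sketch of the API whose implementation this PR rewrites:

```
import ray

ds = ray.data.range(10)
# Splits into [0, 2), [2, 5), and [5, 10). With this refactor, each split task
# touches at most one block, keeping memory usage stable for small splits.
left, middle, right = ds.split_at_indices([2, 5])
print(left.take(), middle.take(), right.take())
```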
2022-07-16 04:48:44 -07:00
SangBin Cho
0f0102666a
[Core] Support max cpu allocation per node for placement group scheduling (#26397)
The PR adds a new experimental flag to the placement group API to prevent a placement group from taking all CPUs on each node. It is used internally by AIR to prevent the placement group created by Tune from using all CPU resources, which are needed for Datasets.
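
A hedged sketch of how such a flag might be used; the keyword name `_max_cpu_fraction_per_node` below is an assumption about the experimental argument, not confirmed by the PR text:

```
import ray
from ray.util.placement_group import placement_group

ray.init(num_cpus=8)

# Reserve four 1-CPU bundles, but (with the assumed experimental flag) never
# occupy more than 80% of any node's CPUs, leaving headroom for Dataset tasks
# that run outside the placement group.
pg = placement_group(
    [{"CPU": 1}] * 4,
    strategy="SPREAD",
    _max_cpu_fraction_per_node=0.8,  # assumed flag name
)
ray.get(pg.ready())
```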
2022-07-16 01:47:30 -07:00
Balaji Veeramani
34cf1f17ea
[Datasets] Add ImageFolderDatasource (#24641)
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-15 22:43:23 -07:00
matthewdeng
9256668b90
Revert "[Datasets] Explicitly define Dataset-like APIs in DatasetPipeline class (#26394)" (#26625) 2022-07-15 21:10:59 -07:00
Eric Liang
cf980c3020
[data] Refactor all to all op implementations into a separate file (#26585) 2022-07-15 18:17:48 -07:00
Cheng Su
fea94dc976
[Datasets] Explicitly define Dataset-like APIs in DatasetPipeline class (#26394)
This PR is to resolve #20888, where users have concerns about the Dataset-like methods used in DatasetPipeline (such as map_batches, random_shuffle_each_window, etc.). Currently we define those Dataset-like methods implicitly through Python setattr/getattr, delegating the real work from the dataset pipeline to the dataset. This does not work well for external developers/users who want to navigate to a method's definition or determine its return type.

So this PR explicitly defines every Dataset-like API in the DatasetPipeline class. This gives us an upper bound on how much code we need to duplicate. If we go with this direction, whenever we update or add a method in Dataset, we need to update or add the same method in DatasetPipeline.
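
A toy sketch (not Ray's actual code) of the difference between the old implicit delegation and the explicit definitions added here:

```
class Dataset:
    def map_batches(self, fn):
        ...

# Before: Dataset-like methods were generated implicitly via getattr, so IDEs
# and type checkers could not see their signatures or return types.
class ImplicitDatasetPipeline:
    def __getattr__(self, name):
        def delegate(*args, **kwargs):
            return getattr(self._current_dataset(), name)(*args, **kwargs)
        return delegate

# After: each Dataset-like method is spelled out with a real signature and
# return type, at the cost of duplicating one stub per Dataset method.
class ExplicitDatasetPipeline:
    def map_batches(self, fn) -> "ExplicitDatasetPipeline":
        """Same semantics as Dataset.map_batches, applied per window."""
        return self._apply_per_window(lambda ds: ds.map_batches(fn))
```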
2022-07-15 16:12:27 -07:00
Sihan Wang
09a6e5336a
[Serve][Part2] Migrate the tests to use deployment graph api (#26507) 2022-07-15 15:48:43 -07:00
Chen Shen
fe9a12aa92
Revert "[KubeRay][Autoscaler][Core] Add a flag to disable ray status version check (#26584)" (#26597)
Reverts #26584

It seems to be breaking test_advanced_4.
2022-07-15 15:47:28 -07:00
Siyuan (Ryans) Zhuang
964bc90e09
[Workflow] Remove workflow execution module (#26504)
* remove workflow execution module

Signed-off-by: Siyuan <suquark@gmail.com>
2022-07-15 14:52:03 -07:00
Edward Oakes
0ecc7dad74
Revert "Revert "[serve] Use soft constraint for pinning controller on head node (#25091)" (#25857)" (#25858) 2022-07-15 14:07:24 -05:00
Cheng Su
5a95e11e1e
[Datasets] Improve read_xxx experience of HTTP file (#26454) 2022-07-15 10:39:39 -07:00
michalsustr
ca3d272c3e
Print newest_ckpt_path when resuming trial. (#26561)
When a trial is resumed, it is useful for the user to know which checkpoint it resumed from.

Signed-off-by: sustr-equi <sustr@equilibretechnologies.com>
Co-authored-by: sustr-equi <sustr@equilibretechnologies.com>
2022-07-15 10:52:50 +01:00
Hao Chen
8fd0d39f06
Fix test_serialization_error_message for pytest 6.x (#26591) 2022-07-15 17:37:14 +08:00
Guyang Song
1949f35901
[runtime env] plugin refactor[4/n]: remove runtime env protobuf (#26522) 2022-07-15 13:56:12 +08:00
Clark Zinzow
5a81871820
Improve streaming read performance for default configuration. (#26587)
Signed-off-by: Clark Zinzow <clarkzinzow@gmail.com>
2022-07-14 21:25:21 -07:00
Simon Mo
df9f891416
[Serve] Use custom class name for replica class (#26574) 2022-07-14 20:10:56 -07:00
Dmitri Gekhtman
a304d1c145
[KubeRay][Autoscaler][Core] Add a flag to disable ray status version check (#26584)
Adds a flag that disables the version check in ray health-check.
2022-07-14 19:56:16 -07:00
Tao Wang
6ddbdaa81a
[CI]Split C++, Java tests in MacOS from the big one (#26434) 2022-07-14 18:33:47 -07:00
Simon Mo
ef1d5c9a97
[Serve][AIR] Fix pandas_read_json compatibility issue (#26494) 2022-07-14 15:29:14 -07:00
Antoni Baum
7cc6542205
[AIR/Train] HuggingFacePredictor improvements (#26531)
Co-authored-by: Amog Kamsetty <amogkam@users.noreply.github.com>
2022-07-14 13:20:31 -07:00
Antoni Baum
4273d2235e
[AIR] Improve to_air_checkpoint with path (#26532) 2022-07-14 13:20:21 -07:00
Eric Liang
40be6904a5
[data] Avoid under-parallelization regressions and add better testing for parallelism detection (#26543)
In the previous PR #25883, a subtle regression was introduced in the case where data sizes blow up significantly.

For example, suppose you're reading jpeg-image files from a Dataset, which increase in size substantially on decompression. On a small-core cluster (e.g., 4 cores), you end up with 4-8 blocks of ~200MiB each when reading a 1GiB dataset. This can blow up to OOM the node when decompressed (e.g., 25x size increase).

Previously, the heuristic of using parallelism=200 avoided this small-node problem. This PR fixes the regression by (1) raising the minimum parallelism back to 200. As an optimization, we also introduce a minimum block size threshold, which allows using fewer blocks if the data size is really small (<100KiB per block).
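
A rough sketch of the adjusted heuristic described here (constants from the text; function and variable names are illustrative, not Ray's internals):

```
MIN_PARALLELISM = 200
TARGET_MAX_BLOCK_SIZE = 512 * 1024 * 1024  # 512MiB
MIN_BLOCK_SIZE = 100 * 1024                # 100KiB

def choose_read_parallelism(in_memory_bytes, estimated_cpus):
    # Never go below the floor that protected small clusters from huge
    # decompressed blocks.
    parallelism = max(MIN_PARALLELISM, estimated_cpus * 2)
    # Optimization: tiny datasets may use fewer blocks, as long as each block
    # stays above the minimum block size.
    if in_memory_bytes / parallelism < MIN_BLOCK_SIZE:
        parallelism = max(1, in_memory_bytes // MIN_BLOCK_SIZE)
    # Still split further if any block would exceed the target block size.
    while in_memory_bytes / parallelism > TARGET_MAX_BLOCK_SIZE:
        parallelism *= 2
    return parallelism
```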
2022-07-14 13:02:52 -07:00
Tim Gates
e42dc7943e
docs: Fix a few typos (#26556)
There are small typos in:
- doc/source/data/faq.rst
- python/ray/serve/replica.py

Fixes:
- Should read `successfully` rather than `succssifully`.
- Should read `pseudo` rather than `psuedo`.
2022-07-14 12:38:33 -07:00
Jiajun Yao
60dd77a2d3
Enable usage stats collection for ray.init iff nightly wheels (#26461)
For nightly wheels, we want to collect usage stats for local clusters started via ray.init() as well.
2022-07-14 12:14:01 -07:00
Amog Kamsetty
6595bd6e2d
[AIR] Introduce better scoring API for BatchPredictor (#26451)
Signed-off-by: Amog Kamsetty <amogkamsetty@yahoo.com>

As discussed offline, allow configurability for feature columns and keep columns in BatchPredictor for better scoring UX on test datasets.
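
A short usage sketch of the resulting scoring flow (import paths follow Ray 2.x and may differ here; the checkpoint, dataset path, and column names are placeholders):

```
import ray
from ray.train.batch_predictor import BatchPredictor
from ray.train.xgboost import XGBoostPredictor

test_ds = ray.data.read_parquet("s3://my-bucket/test.parquet")  # placeholder

# `checkpoint` is assumed to come from a previously completed Trainer run.
predictor = BatchPredictor.from_checkpoint(checkpoint, XGBoostPredictor)

# Only the feature columns are fed to the model; the label column is carried
# through so predictions can be scored against ground truth afterwards.
predictions = predictor.predict(
    test_ds,
    feature_columns=["f1", "f2", "f3"],
    keep_columns=["label"],
)
```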
2022-07-14 11:26:12 -07:00
Richard Liaw
a0ce3c111b
[air/data] Concatenator preprocessor (#26526) 2022-07-14 10:26:14 -07:00
Antoni Baum
c168c09281
[Tune] Restore old max concurrent logic in BOHB (#26529)
As discussed on Ray Slack (https://ray-distributed.slack.com/archives/CNECXMW22/p1657051287814569), the changes introduced in #18770 and #20822 have caused the concurrency limiting logic in BOHB to work incorrectly. This PR restores the old logic while making use of the set_max_concurrency API (as e.g. HEBO does), maintaining backwards compatibility.

It should be noted that the old logic this PR reintroduces is essentially a hack and should be refactored in the future. This PR is intended to rapidly fix a bug causing search performance to be suboptimal.
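
For context, a minimal sketch of how a concurrency limit reaches the searcher: `ConcurrencyLimiter` calls `set_max_concurrency` on searchers that support it (module paths follow the Ray of this era and may differ):

```
from ray import tune
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.bohb import TuneBOHB

def objective(config):
    tune.report(loss=config["lr"])

searcher = ConcurrencyLimiter(TuneBOHB(), max_concurrent=4)
scheduler = HyperBandForBOHB(max_t=100)

tune.run(
    objective,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    search_alg=searcher,
    scheduler=scheduler,
    metric="loss",
    mode="min",
    num_samples=16,
)
```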

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-07-14 15:40:51 +01:00
Ricky Xu
c54916bc0f
[Core | State Observability] Add feedback prompt for dogfooding alpha (#26450)
Embed console print to gather dogfooding feedback. 

With CLIs: 
```
(dev) ➜  ray git:(ricky/obs-feedback) ray list --help
Usage: ray list [OPTIONS] {actors|jobs|placement-
                groups|nodes|workers|tasks|objects|runtime-envs}

      List RESOURCE used by Ray.

      RESOURCE is the name of the possible resources from `StateResource`,
      i.e. 'jobs', 'actors', 'nodes', ...



  ==========ALPHA PREVIEW, FEEDBACK NEEDED ===============

  State Observability APIs is currently in Alpha-Preview.

  If you have any feedback, you could do so at either way as below:

    1. Comment on API specification: https://tinyurl.com/api-spec

    2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46

    3. Follow up in #proj-state-obs-dogfooding slack channel.

  ==========================================================
```


With running SDK python api:
```
In [3]: from ray.experimental.state.api import list_nodes
In [6]: list_nodes()
2022-07-11 19:45:18,973 INFO api.py:69 -- 
==========ALPHA PREVIEW, FEEDBACK NEEDED ===============
State Observability APIs is currently in Alpha-Preview. 
If you have any feedback, you could do so at either way as below:
  1. Comment on API specification: https://tinyurl.com/api-spec
  2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46
  3. Follow up in #proj-state-obs-dogfooding slack channel.
==========================================================
Out[6]: 
[{'node_name': '172.31.47.143',
  'node_ip': '172.31.47.143',
  'resources_total': {'CPU': 8.0,
   'object_store_memory': 9149783654.0,
   'memory': 18299567310.0,
   'node:172.31.47.143': 1.0},
  'node_id': '513a3ca212403d234f6dfbe1f7523052637a06e0ee9e4502144f2da3',
  'state': 'ALIVE'}]

```
2022-07-14 06:45:07 -07:00
SangBin Cho
e9f6ffc5a5
[Core][State Observability] Use address arg + print warning if API responds slowly (#26008)
This PR is doing 2 things.

(1) Use `address` instead of `api_server_url`, which is consistent with other submission APIs.
(2) When the API does not respond in a timely manner, print a warning every 5 seconds. Below is an example. This is useful when the API responds slowly (e.g., when there are partial failures). Without this, users would see the API hang for 30 seconds, which is a pretty bad UX.

(0.12 / 10 seconds) Waiting for the response from the API server address http://127.0.0.1:8265/api/v0/delay/5.
2022-07-14 06:44:07 -07:00
Antoni Baum
8f74e1f3ae
[AIR] Use cls in from_checkpoint (#26534)
Uses `cls` in `from_checkpoint` classmethods for better subclass development experience.
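
A toy sketch of the pattern (not Ray's actual classes): constructing via `cls` instead of a hard-coded class means subclasses get a working `from_checkpoint` for free.

```
class Predictor:
    def __init__(self, model):
        self.model = model

    @classmethod
    def from_checkpoint(cls, checkpoint):
        # Use `cls` rather than `Predictor` so subclasses return instances of
        # themselves without overriding this method.
        return cls(model=checkpoint["model"])

class MyPredictor(Predictor):
    pass

pred = MyPredictor.from_checkpoint({"model": "weights"})
assert isinstance(pred, MyPredictor)
```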
2022-07-14 00:15:48 -07:00
Cheng Su
f852ec82bf
[Datasets] Fix Parquet in-memory file size estimation (#26516) 2022-07-13 23:32:13 -07:00
Eric Liang
f2401a14d9
[air] Explicitly list out the args for BatchPredictor.predict_pipelined (#26551)
Signed-off-by: Eric Liang <ekhliang@gmail.com>
2022-07-13 22:30:32 -07:00
Scott Cheng
1bc44c13fb
Update Python3.10 in docs (#26463)
Make it clear to users that Ray supports Python 3.10.
2022-07-13 20:08:56 -07:00
Stephanie Wang
6ef26cd8ff
[core] Cancel pending dependency resolution before failing a task (#26267)
Actor tasks are sometimes failed while their dependencies are still being resolved. This can cause hanging or crashes when we resolve the dependencies for a task that has already been canceled. It can lead to a crash from the ref counter when, for the same actor, actor task 1 depends on actor task 2. The sequence is:

    Actor tasks 1 and 2 queued, 1 depends on 2.
    Fail actor task 1. We clear its refs, including its dependency on 2.
    Fail actor task 2. We store an error as its return value. Since task 1 depends on it, we inline the dependency and try to clear task 1's refs again, causing a ref counting error because we already cleared them in step 2.

This PR fixes the issue by canceling dependency resolution for tasks before failing them. This involves some refactoring of the LocalDependencyResolver. Most of the changes are for testing (split out the unit tests for LocalDependencyResolver into their own suite).
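
A minimal Python sketch of the ordering change (Ray's actual implementation is C++ core code; the names here are illustrative only):

```
def fail_actor_task(task, resolver, reference_counter):
    # Fix: cancel any pending dependency resolution first, so the resolver can
    # never run later and try to clear references that were already released.
    resolver.cancel_dependency_resolution(task)
    # Now the task's argument references are released exactly once...
    reference_counter.clear_task_refs(task)
    # ...and the error is stored as the task's return value.
    task.store_error_return()
```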
Related issue number

Closes #18908.
2022-07-13 14:39:11 -07:00
Sihan Wang
b606169cb5
[Serve] Promote autoscaling feature (#26393)
1. get rid of the private attribute
2. fix unit test
3. docs and workflows
2022-07-13 14:38:38 -05:00
Sven Mika
ab10890e90
Revert "Bump pytest from 5.4.3 to 7.0.1" (breaks lots of RLlib tests for unknown reasons) (#26517) 2022-07-13 11:19:30 -07:00
Antoni Baum
cc7115f6a2
[Tune/CI] Fix tune-sklearn notebook example (#26470)
Fixes the tune-sklearn notebook example as found in #26410

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2022-07-13 18:14:36 +01:00
Sihan Wang
e2cac0b324
[Serve][Part1] Update the tests to use graph deploy (#26310) 2022-07-13 09:53:51 -07:00
Ricky Xu
365ffe21e5
[Core | State Observability] Implement API Server (Dashboard) HTTP Requests Throttling (#26257)
This limits the number of concurrent HTTP requests the dashboard (API server) will accept; additional requests are rejected.
This makes sure observability requests do not overload the downstream systems (raylet/GCS) when too many concurrent state observability requests are delegated to the cluster.
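
A minimal sketch of the idea with an aiohttp middleware and an in-flight counter (the actual limit and rejection behavior in the dashboard may differ):

```
from aiohttp import web

MAX_IN_FLIGHT = 100  # illustrative cap, not the dashboard's actual default
_in_flight = 0

@web.middleware
async def throttle(request, handler):
    global _in_flight
    if _in_flight >= MAX_IN_FLIGHT:
        # Reject immediately instead of queueing, so raylet/GCS are not flooded
        # with delegated state-observability requests.
        raise web.HTTPServiceUnavailable(text="Too many in-flight requests.")
    _in_flight += 1
    try:
        return await handler(request)
    finally:
        _in_flight -= 1

app = web.Application(middlewares=[throttle])
```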
2022-07-13 09:05:26 -07:00
Antoni Baum
ddb5572040
[Tune/CI] Fix Hyperopt notebook example (#26469)
Fixes the failing Hyperopt notebook in CI (as found in #26410). The cause was a mismatch between keys in `points_to_evaluate` and the search space; now, an informative exception will be raised.
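
A short sketch of the constraint the new exception enforces (the search space and values are placeholders):

```
from ray import tune
from ray.tune.suggest.hyperopt import HyperOptSearch

def objective(config):
    tune.report(loss=config["lr"] * config["momentum"])

search_space = {
    "lr": tune.loguniform(1e-4, 1e-1),
    "momentum": tune.uniform(0.1, 0.9),
}

# Keys in points_to_evaluate must match the search space exactly; with this
# fix, a mismatch (e.g. "learning_rate" instead of "lr") raises an informative
# exception instead of failing obscurely.
searcher = HyperOptSearch(points_to_evaluate=[{"lr": 1e-2, "momentum": 0.9}])

tune.run(
    objective,
    config=search_space,
    search_alg=searcher,
    metric="loss",
    mode="min",
    num_samples=10,
)
```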

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2022-07-13 16:50:11 +01:00
Amog Kamsetty
8ca5584b9f
Annotate more api (#26501) 2022-07-12 22:29:14 -07:00
clarng
b2cdd45e7c
Update import sorting blacklist, enable sorting for experimental dir (#26101)
Why are these changes needed?
There are directories that we don't lint / format. Ensure the same is true for the import sorting tool.

Enable sorting for python/experimental to showcase how to enable sorting for a directory, as we convert more directories to be automatically sorted by the tool.
2022-07-12 21:25:58 -07:00
Riatre
2cdb76789e
Bump pytest from 5.4.3 to 7.0.1 (#26334)
See #23676 for context. This is another attempt at that as I figured out what's going wrong in `bazel test`. Supersedes #24828.

Now that there are Python 3.10 wheels for Ray 1.13 and this is no longer a blocker for supporting Python 3.10, I still want to make `bazel test //python/ray/tests/...` work for developing in a 3.10 env, and make it easier to add Python 3.10 tests to CI in future.

The change contains three commits with rather descriptive commit messages, which I repeat here:

Pass deps to py_test in py_test_module_list

    Bazel macro py_test_module_list takes a `deps` argument, but completely
    ignores it instead of passing it to `native.py_test`. Fixing that, as we
    are going to use the deps of py_test_module_list in BUILD files in later changes.

    cpp/BUILD.bazel depends on the broken behaviour: it takes a cc_library as a
    dep of a py_test, which doesn't work; see the upstream issue:
    https://github.com/bazelbuild/bazel/issues/701.
    This is fixed by simply removing the (non-working) deps.

Depend on conftest and data files in Python tests BUILD files

    Bazel requires that all the files used in a test run be represented in the
    transitive dependencies specified for the test target. For py_test, this
    means srcs, deps and data.

    Bazel enforces this constraint by creating a "runfiles" directory,
    symlinking the files in the dependency closure into it, and running the
    test inside the "runfiles" directory, so that the test cannot see files
    outside the dependency graph.

    Unfortunately, the constraint does not apply to a large number of
    Python tests, due to pytest (>=3.9.0, <6.0) resolving these symbolic
    links during test collection and effectively "breaking out" of the
    runfiles tree.

    pytest >= 6.0 introduced a breaking change and removed the symbolic-link
    resolving behaviour; see pytest pull request
    https://github.com/pytest-dev/pytest/pull/6523 for more context.

    Currently, we underspecify dependencies in a lot of BUILD files, which
    blocks us from updating to a newer pytest (for Python 3.10 support). This
    change hopefully fixes all of them, and at least those in CI, by adding
    data or source dependencies (mostly for conftest.py-s) where needed.

Bump pytest version from 5.4.3 to 7.0.1

    We want at least pytest 6.2.5 for Python 3.10 support, but not past
    7.1.0 since it drops Python 3.6 support (which Ray still supports), thus
    the version constraint is set to <7.1.

    Updating pytest, combined with the earlier BUILD fixes, changed the ground
    truth of a few error-message-based unit tests; these tests are updated to
    reflect the change.

    There are also two small drive-by changes for making test_traceback and
    test_cli pass under Python 3.10. These were discovered while debugging CI
    failures (on earlier Python versions) with a local Python 3.10 install.
    Expect more such issues when adding Python 3.10 to CI.
2022-07-12 21:14:35 -07:00
Eric Liang
9de1add073
[Datasets] Autodetect dataset parallelism based on available resources and data size (#25883)
This PR defaults the parallelism of Dataset reads to `-1`. The parallelism is determined according to the following rule in this case:
- The number of available CPUs is estimated. If in a placement group, the number of CPUs in the cluster is scaled by the size of the placement group compared to the cluster size. If not in a placement group, this is the number of CPUs in the cluster. If the estimated CPUs is less than 8, it is set to 8.
- The parallelism is set to the estimated number of CPUs multiplied by 2.
- The in-memory data size is estimated. If the parallelism would create in-memory blocks larger than the target block size (512MiB), the parallelism is increased until the blocks are < 512MiB in size.

These rules fix two common user problems:
1. Insufficient parallelism in a large cluster, or too much parallelism on a small cluster.
2. Overly large block sizes leading to OOMs when processing a single block.
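
A rough sketch of the detection rule above (the placement-group scaling is simplified; names are illustrative, not Ray's internals):

```
from typing import Optional

TARGET_MAX_BLOCK_SIZE = 512 * 1024 * 1024  # 512MiB

def detect_parallelism(cluster_cpus: int, pg_cpus: Optional[int],
                       in_memory_bytes: int) -> int:
    # Estimate available CPUs: inside a placement group, use the group's share
    # of the cluster; otherwise the whole cluster. Never estimate below 8.
    estimated_cpus = max(pg_cpus if pg_cpus is not None else cluster_cpus, 8)
    # Base parallelism is 2x the estimated CPU count.
    parallelism = estimated_cpus * 2
    # Increase parallelism until no block would exceed the target block size.
    while in_memory_bytes / parallelism > TARGET_MAX_BLOCK_SIZE:
        parallelism *= 2
    return parallelism
```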

TODO:
- [x] Unit tests
- [x] Docs update

Supersedes part of: https://github.com/ray-project/ray/pull/25708

Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
2022-07-12 21:08:49 -07:00
Clark Zinzow
12ea100527
Revert "Object GC for block splitting inside the dataset splitting (#26196)" (#26495)
This reverts commit 45ba0e3cac.

Failures involving lost references started popping up in the Train GPU job around the time this PR was merged; there was an ongoing failure (since reverted) that overlaps with this PR, but this PR is the most likely culprit for this particular lost-reference issue, so we should try reverting it.

- Flakey test tracker: https://flakey-tests.ray.io/
- Example failure: https://buildkite.com/ray-project/ray-builders-branch/builds/8585#0181f423-0fe2-42b5-9dd8-47d2c7f9efa7
2022-07-12 18:44:51 -07:00
brucez-anyscale
57258335bd
[Serve] Fix test_cli flakiness (#26471) 2022-07-12 17:57:08 -07:00
Amog Kamsetty
e6c04031fd
Revert "[Train] Add support for handling multiple batch data types for prepare_data_loader (#26386)" (#26483)
This reverts commit 36229d1234.
2022-07-12 17:18:46 -07:00