Commit graph

6862 commits

Author SHA1 Message Date
Kai Fricke
9a8c8f4889
[air/docs] Move some examples from ml/examples to docs (#24959)
This moves the basic LightGBM, Sklearn, and XGBoost examples from the examples/ folder to the docs. We keep a symlink in the examples folder.
2022-05-19 14:01:49 +01:00
mwtian
502c3e132d
Revert "[Core] allow using grpcio > 1.44.0 (#23722)" (#24935)
This reverts commit b02029b29f.
2022-05-18 18:16:39 -07:00
Balaji Veeramani
ebe2929d4c
[Datasets] Add example of using map_batches to filter (#24202)
The documentation says 

> Consider using .map_batches() for better performance (you can implement filter by dropping records).

but there aren't any examples of how to do so.
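A minimal sketch of the pattern this example documents, assuming pandas-format batches and the single `value` column produced by `range_table`:

```
import ray

ds = ray.data.range_table(1000)  # single column named "value"

# Implement filter by dropping records: each map task returns a smaller batch
# containing only the rows we want to keep.
filtered = ds.map_batches(
    lambda batch: batch[batch["value"] % 2 == 0],
    batch_format="pandas",
)
print(filtered.count())  # 500
```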
2022-05-18 16:28:15 -07:00
mwtian
b17cbd825f
[Core] fix prometheus export error and move GCS client heartbeat to a dedicated thread. (#24867)
I have seen errors where the Prometheus view data dictionary is changed while we iterate over it, so make a copy (which is atomic) before iterating.

Also, use a separate thread for GCS client heartbeat RPC. This avoids the issue of missing heartbeat when raylet is too busy.
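A rough Python sketch of the copy-before-iterate pattern described above (not the actual Ray code; the dictionary and its contents are hypothetical):

```
# A shared dict that another thread may mutate while we export it.
view_data = {"metric_a": 1, "metric_b": 2}

snapshot = dict(view_data)  # single copy taken up front
for name, value in snapshot.items():
    # Iterating over the snapshot avoids "dictionary changed size during
    # iteration" errors if view_data is updated concurrently.
    print(name, value)
```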
2022-05-18 15:04:04 -07:00
SangBin Cho
2fa7a00588
[State Observability] Support output formatting (#24847)
This PR supports various output formats. By default, we support the YAML format, but this can change depending on the UX research we will conduct in the future.

1cb0f4b51a5799e0360a66db01000000:
  actor_id: 1cb0f4b51a5799e0360a66db01000000
  class_name: A
  state: ALIVE
f90ba34fa27f79a808b4b5aa01000000:
  actor_id: f90ba34fa27f79a808b4b5aa01000000
  class_name: A
  state: ALIVE
Table format is not supported yet. We will support it once we enhance the API output (for which I will create an initial API review soon).
2022-05-18 15:00:40 -07:00
Dmitri Gekhtman
448f5273bc
[KubeRay][Autoscaler] Enable launching nodes in the main thread. (#24718)
This PR adds a flag to enable creating nodes in the main thread in the autoscaler. The flag is turned on for the KubeRay node provider. KubeRay already uses a flag to disable node updater threads -- with the changes from this PR, the autoscaler becomes single-threaded when launching on KubeRay.
2022-05-18 14:37:19 -07:00
Jiajun Yao
5128029644
Fix pull manager deadlock due to object reconstruction (#24791)
When an object is under reconstruction, the pull manager keeps the bundle request active with no timeout, which may block the next bundle request that's needed for the object reconstruction. As a result, we get a deadlock.

For example, task 1 takes object A as argument and returns object B, task 2 takes object B as argument. When we run task 2, pull manager will add B to the queue and then B is lost. In this case, task 1 is re-submitted and A is added to the pull manager queue after B (assuming both tasks are scheduled to the same node). Due to limited available object store memory, A cannot be activated until B is pulled but B cannot be pulled until A is pulled and B is reconstructed.

The solution is that if an active pull request has pending-creation objects, the pull manager deactivates it until creation is done. This frees the object store memory occupied by the currently active pull request so that the next requests can proceed and potentially unblock the object creation.
2022-05-18 13:43:23 -07:00
SangBin Cho
f228245520
[Placement group] Update the old placement group API usage to the new scheduling_strategy based API (#24544)
Documentation should use the new API, not the old one, which will be deprecated.
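For reference, a hedged sketch of the scheduling_strategy-based API the docs are being migrated to (resource shapes here are illustrative):

```
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = placement_group([{"CPU": 1}], strategy="PACK")
ray.get(pg.ready())

@ray.remote(num_cpus=1)
def task():
    return "scheduled inside the placement group"

# Old style (being deprecated): task.options(placement_group=pg).remote()
# New style:
ref = task.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=pg)
).remote()
print(ray.get(ref))
```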
2022-05-18 09:41:51 -07:00
Qing Wang
eb29895dbb
[Core] Remove multiple core workers in one process 1/n. (#24147)
This is the 1st PR to remove the code path of multiple core workers in one process. This PR is aiming to remove the flags and APIs related to `num_workers`.
After this PR is checked in, we no longer need to consider multiple core workers in one process.

Follow-up PRs will cover the deeper logic refactor, such as eliminating the gap between the core worker and the core worker process, and removing the logic related to multiple workers from the worker pool, GCS, etc.

**BREAKING CHANGE**
This PR removes these APIs:
- Ray.wrapRunnable();
- Ray.wrapCallable();
- Ray.setAsyncContext();
- Ray.getAsyncContext();

And the following APIs may no longer be invoked from a user-created thread in local mode:
- Ray.getRuntimeContext().getCurrentActorId();
- Ray.getRuntimeContext().getCurrentTaskId()

Note that this PR shouldn't be merged to 1.x.
2022-05-19 00:36:22 +08:00
Antoni Baum
1d5e6d908d
[AIR] HuggingFace Text Classification example (#24402) 2022-05-18 09:35:12 -07:00
Kai Fricke
c3d54de9ee
[ci] Fix runtime env tests (#24912)
This fixes failing tests from #24894
2022-05-18 11:51:07 +01:00
Amog Kamsetty
d73572dde3
[AIR] Add trainer_resources as a valid scaling config key for all Trainers
trainer_resources should be a valid scaling config key for all Trainers.
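A minimal sketch of what such a config might look like, assuming the dict-based `scaling_config` accepted by Trainers on this branch (values are illustrative):

```
scaling_config = {
    "num_workers": 2,                 # training workers
    "use_gpu": False,
    "trainer_resources": {"CPU": 1},  # resources reserved for the Trainer itself
}
```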
2022-05-18 11:15:53 +01:00
Simon Mo
9b2086c726
[Serve] Alias ray.serve.dag.InputNode (#24630) 2022-05-17 22:35:51 -07:00
Simon Mo
c3ac6fcf3f
Bump Ray Version from 2.0.0.dev0 to 3.0.0.dev0 (#24894) 2022-05-17 19:31:05 -07:00
Clark Zinzow
68d4dd3a8b
[Datasets] Add explicit resource allocation option via a top-level scheduling strategy (#24438)
Instead of letting Datasets implicitly use cluster resources in the margins of explicit allocations made by other libraries, such as Tune, Datasets should provide an option for explicitly allocating resources for a Datasets workload, for users who want to box Datasets in. This PR adds such an option by exposing a top-level scheduling strategy on the DatasetContext, through which a placement group can be supplied.
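A hedged sketch of boxing a Datasets workload into a placement group via that option, assuming the context field is named `scheduling_strategy`:

```
import ray
from ray.data.context import DatasetContext
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = placement_group([{"CPU": 1}] * 2, strategy="SPREAD")
ray.get(pg.ready())

ctx = DatasetContext.get_current()
ctx.scheduling_strategy = PlacementGroupSchedulingStrategy(placement_group=pg)

# Subsequent Datasets tasks are scheduled within the placement group's resources.
ds = ray.data.range(1000).map(lambda x: x * 2)
```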
2022-05-17 15:01:15 -07:00
Jian Xiao
f86d5461c6
[Datasets] [CI] fix CI of dataset test (#24883)
CI test is broken by f61caa3. This PR fixes it.
2022-05-17 14:00:01 -07:00
SangBin Cho
6d978ab10e
[Core/Metrics] Accurately record used memory. (#24648)
Why are these changes needed?
used_memory_ includes the fallback allocation, so we should subtract it here to calculate the exact available memory.
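Hypothetical numbers illustrating the accounting fix (the values are made up):

```
capacity_bytes = 10 * 1024**3            # object store capacity
used_memory_bytes = 9 * 1024**3          # includes fallback allocations
fallback_allocated_bytes = 2 * 1024**3   # bytes living in fallback storage

# Fallback allocations don't consume shared memory, so subtract them.
available = capacity_bytes - (used_memory_bytes - fallback_allocated_bytes)
print(available / 1024**3, "GiB available")  # 3.0 GiB
```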
2022-05-17 13:21:24 -07:00
Clark Zinzow
b1acf69a8a
Map progress bar title; pretty repr for rows. (#24672) 2022-05-17 12:24:36 -07:00
Kai Fricke
01af9c8c77
[tune] Fix has_resources_for_trial, leading to trials stuck in PENDING mode (#24878)
Tune resource bookkeeping was broken. Specifically, this is what happened in the repro provided in #24259:

- Only one PG can be scheduled at a time
- We staged resources for trial 1
- We run trial 1
- We stage resources for trial 2
- We pause trial 1, caching the placement group
- This removes the staged PG for trial 2 (as we return the PG for trial 1)
- However, now we `reconcile_placement_groups`, which re-stages a PG
- both trial 1 and trial 2 are now not in `RayTrialExecutor._staged_trials`
- A staging future is still there because of the reconciliation

This PR fixes the problem in two ways: `has_resources_for_trial` will a) also check for staging futures for the specific trial, and b) also consider cached placement groups.

Generally, resource management in Tune is convoluted and hard to debug, and several classes share bookkeeping responsibilities (runner, executor, pg manager). We should refactor this.
2022-05-17 18:21:45 +01:00
Robert
f61caa349c
Implement random_sample() (#24492) 2022-05-17 09:47:53 -07:00
Amog Kamsetty
fb832c3b1f
[AIR] Don't use df.transform for BatchMapper (#24872)
`df.transform` has undefined behavior when the passed-in function mutates the dataframe, as mentioned in the pandas docs. This is because, I believe, the implementation iterates through slices of the dataframe and passes these slices to the provided function. This "gotcha" exposes an implementation detail to users of `BatchMapper`.

It's pretty common to have preprocessors that mutate the dataframe; for example, our own test does the following:
```
    def add_and_modify_udf(df: "pd.DataFrame"):
        df["new_col"] = df["old_column"] + 1
        df["to_be_modified"] *= 2
        return df
```

so instead of using `df.transform`, we instead do `self.fn(df)`. As the `df` is the output of Ray Datasets `iter_batches`, the provided function can safely mutate the dataset.
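A hedged usage sketch, assuming the `ray.ml.preprocessors` import path of this branch and that `BatchMapper` can transform without fitting:

```
import pandas as pd
import ray
from ray.ml.preprocessors import BatchMapper

def add_and_modify_udf(df: "pd.DataFrame"):
    df["new_col"] = df["old_column"] + 1
    df["to_be_modified"] *= 2
    return df

ds = ray.data.from_pandas(pd.DataFrame({"old_column": [1, 2], "to_be_modified": [3, 4]}))
preprocessor = BatchMapper(fn=add_and_modify_udf)
out = preprocessor.transform(ds)  # fn receives full batches, so in-place mutation is safe
```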
2022-05-17 16:49:17 +01:00
xwjiang2010
5c8361a92e
[air] Update to use more verbose default config for trainers. (#24850)
Internal user feedback shows that more detailed logging is preferred:
https://anyscaleteam.slack.com/archives/C030DEV6QLU/p1652481961472729
2022-05-17 16:21:11 +01:00
shrekris-anyscale
3a2bd16eca
[Serve] Add deployment graph import_path and runtime_env to ServeApplicationSchema (#24814)
A newly planned version of the Serve schema (used in the REST API and CLI) requires the user to pass in their deployment graph's `import_path` and optionally a runtime_env containing that graph. This new schema can then pick up any `init_args` and `init_kwargs` values directly from the graph, instead of requiring them to be serialized and passed explicitly into the REST request.

This change:
* Adds the `import_path` and `runtime_env` fields to the `ServeApplicationSchema`.
* Updates or disables outdated unit tests.

Follow-up changes should:
* Update the status schemas (i.e. `DeploymentStatusSchema` and `ServeApplicationStatusSchema`).
* Remove deployment-level `import_path`s.
* Process the new `import_path` and `runtime_env` fields instead of silently ignoring them.
    * Remove `init_args` and `init_kwargs` from `DeploymentSchema` afterwards.

Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
2022-05-17 09:51:06 -05:00
Clark Zinzow
ea635aecd2
[Datasets] Support tensor columns in to_tf and to_torch. (#24752)
This PR adds support for tensor columns in the to_tf() and to_torch() APIs.

For Torch, this involves an explicit extension array check and (zero-copy) conversion of the tensor column to a NumPy array before converting the column to a Torch tensor.

For TensorFlow, this involves bypassing df.values when converting tensor feature columns to NumPy arrays, instead manually creating a single NumPy array from the column Series.

In both cases, I think that the UX around heterogeneous feature columns and squeezing the column dimension could be improved, but I'm saving that for a future PR.
2022-05-17 01:11:00 -07:00
Clark Zinzow
ef870e936c
[Datasets] Change range_arrow() API to range_table() (#24704)
This PR changes the ray.data.range_arrow() to ray.data.range_table(), making the Arrow representation an implementation detail.
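Minimal usage of the renamed API:

```
import ray

ds = ray.data.range_table(10)   # previously ray.data.range_arrow(10)
print(ds.take(3))               # e.g. [{'value': 0}, {'value': 1}, {'value': 2}]
```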
2022-05-17 01:09:45 -07:00
Yi Cheng
379fa634ac
[core][2/2] Worker resubscribe when GCS failed (#24813)
A follow-up PR from this one: https://github.com/ray-project/ray/pull/24628

The previous PR fixed the resubscribing issue for the raylet, but the core worker also needs to resubscribe.

There are two ways of doing resubscribe:
1. When the client-side detects any failure, it'll do resubscribing.
2. Server side will ask the client to do resubscribing.

1) is a cleaner and better solution. However, it's a little bit hard due to the following reasons:

- We are using long-polling, so for some extreme cases we won't be able to detect the failure. For example, the client side received the message, but before it sends another request, the server side restarts, and the client misses the opportunity to detect the failure. This could happen if we have a standby GCS that starts very fast while the client side has a lot of traffic and runs very slowly.
- The current gRPC framework doesn't give the user a way to handle failures; supporting this might require some refactoring.

We can switch to this approach once we have gRPC streaming.

This PR is implementing 2) which includes three parts:
- raylet: (https://github.com/ray-project/ray/pull/24628)
- core worker: (this pr)
- python

Correctness: whenever a worker starts, it registers with the raylet immediately (a sync call) before connecting to GCS. So we just need to send restart RPCs to all registered workers, and it works because:
- If the worker just started and hasn't registered with the raylet: it's OK, because the worker hasn't connected to GCS yet, so there's no need to resubscribe.
- If the worker has registered with the raylet: it's covered by the code path here.
2022-05-16 23:47:52 -07:00
Antoni Baum
7158aeda33
[Datasets] Add Dataset.split_proportionately and ray.ml.train_test_split (#24476)
Adds a Dataset.split_proportionately method that allows the user to split a dataset using proportions. This is a very common use-case for eg. train-test splitting. The implementation is a thin wrapper over Dataset.split_at_indices.

Additionally, this PR adds a ray.ml.train_test_split function intended to provide a familiar API to ML practitioners.
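A hedged sketch of the two new APIs, written against the `ray.ml` namespace of this branch (the exact import location is an assumption):

```
import ray
from ray.ml import train_test_split

ds = ray.data.range_table(100)

# split_proportionately returns len(proportions) + 1 datasets:
# here 60% / 20% / remaining 20%.
train, valid, test = ds.split_proportionately([0.6, 0.2])

# Familiar sklearn-style helper built on top of it.
train_ds, test_ds = train_test_split(ds, test_size=0.2)
```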
2022-05-16 20:47:29 -07:00
Qing Wang
40774ac219
Minor changes for Java runtime env. (#24840) 2022-05-17 11:33:59 +08:00
Siyuan (Ryans) Zhuang
2766284b14
[workflow] Update workflow doc and examples (#24804)
* Update the docs for workflow options

* Update the examples and make sure they work
2022-05-16 15:41:14 -07:00
Chen Shen
2e53f48188
[python3.10] build python310 wheels (#24829)
Build Python 3.10 wheels for Linux, Windows, and Mac.
2022-05-16 12:36:33 -07:00
Edward Oakes
f99aa5cb40
[serve][docs] Unify doc_code directories and add bazel target (#24736)
Split off from https://github.com/ray-project/ray/pull/24693/, unifying the redundant directories we had and making sure all `serve/doc_code` snippets are run in CI.
2022-05-16 09:49:42 -05:00
SangBin Cho
b9c30529d8
[Core/Observability 1/N] Add a "running" state to task status (#24651)
This PR adds 2 more states to TaskStatus:

enum TaskStatus {
  // The task is scheduled properly and waiting for execution.
  // It includes time to deliver the task to the remote worker + queueing time
  // from the execution side.
  WAITING_FOR_EXECUTION = 5;
  // The task that is running.
  RUNNING = 6;
}
2022-05-16 05:39:05 -07:00
Yi Cheng
6df45f0978
[core][1/2] Resubscribe when GCS restarts for raylet. (#24628)
## Why are these changes needed?

This PR fixes the path to resubscribe to GCS when GCS restarts.

When GCS restarts, it'll lose all subscription information since everything is stored in memory. Then in the runtime, we need to tell GCS what's currently being subscribed.

The previous method:
- We'll have a thread in core worker/raylet to check whether the GCS restarted or not.
- If it restarted, we'll send resubscribe request to GCS.

However, this is not working in these cases:
- GCS restarts so quickly that the checker in the raylet/core worker misses them.
- GCS doesn't restart but is just lagging due to network issues; in that case, resubscribing is not necessary.

Actually, GCS knows when a resubscribe is needed: when it restarts. So the PR here is to send a resubscribe request from GCS -> Raylet and Raylet will do the resubscription.

There are two parts for this to work:

- [x] raylet send resubscription
- [ ] raylet ask core worker to send resubscription
2022-05-15 18:25:58 -07:00
Eric Liang
3f5d870541
[minor] Use np.searchsorted to speed up random access dataset (#24825) 2022-05-15 18:10:17 -07:00
Eric Liang
9381dd174e
[docs] Fix broken code example in docstring for DataParallelTrainer args 2022-05-14 20:48:45 -07:00
Yi Cheng
684e395c5d
Revert "Revert "[core] Move reconnection to RPC layer for GCS client."" (#24764)
* Revert "Revert "[core] Move reconnection to RPC layer for GCS client. (#24330)" (#24762)"

This reverts commit 30f370bf1f.
2022-05-14 20:35:40 -07:00
Clark Zinzow
8e8deaeafd
[Datasets] Add example protocol for reading canned in-package example data. (#24800)
Providing easy-access datasets is table stakes for a good Getting Started UX, but even with good in-package data, it can be difficult to make these paths accessible to the user. This PR adds an "example://" protocol that will resolve passed paths directly to our canned in-package example data.
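A short sketch of what this looks like from the user's side; the `iris.csv` file name is an assumption based on the in-package example data added around this time:

```
import ray

# "example://" paths resolve to the canned example data shipped inside the package.
ds = ray.data.read_csv("example://iris.csv")
print(ds.schema())
```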
2022-05-14 11:11:24 -07:00
Kai Yang
f5c6c7d28f
[Core] Allow failing new tasks immediately while the actor is restarting (#22818)
Currently, when an actor has `max_restarts` > 0 and has crashed, the actor will enter RESTARTING state and then ALIVE. Imagine this scenario: an online service provides HTTP service and the proxy actor receives requests, forwards them to worker actors, and replies to clients with the execution results from worker actors.

```
                                                        -> Worker A (actor)
                                                       /
                                                      /
HTTP requests -------> Proxy (actor with HTTP server) ---> Worker B (actor)
                                                      \
                                                       \
                                                        -> ...
```

For each HTTP request, the proxy picks one worker (e.g. worker A) based on some algorithm, sends the request to it, and calls `ray.get()` to wait for the result. If for some reason the picked worker crashed, Ray will restart the actor, and `ray.get()` will throw an error. The proxy may pick another worker (e.g. worker B) and re-send the request to it. This is OK.

But new requests keep coming. The proxy may pick worker A again. But because worker A is still in RESTARTING state, it's not ready to serve requests. `ray.get()` on subsequent requests sent to worker A will hang until worker A is back online (ALIVE state). The proxy won't be able to reschedule these requests to another worker because currently there's no way to know if worker A is alive or not before sending a request. We can't say worker A is not alive just based on whether `ray.get()` hangs either.

To solve this issue, we change the semantics of `max_task_retries`.

* When max_task_retries is 0 (which is the default value), if the callee actor is in the RESTARTING state, subsequently submitted tasks will fail immediately with a RayActorError. Users can catch the RayActorError and implement their own fallback strategies to improve service availability and mitigate service outages.
* When max_task_retries is not 0, subsequently submitted tasks will be queued on the caller side and we only send them to the callee when the callee actor is back to the ALIVE state.
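A hedged sketch of the fail-fast behavior with `max_task_retries=0` and a caller-side fallback (class and method names are illustrative):

```
import ray
from ray.exceptions import RayActorError

@ray.remote(max_restarts=1, max_task_retries=0)
class Worker:
    def handle(self, request):
        return f"handled {request}"

ray.init()
workers = [Worker.remote() for _ in range(2)]

def call_with_fallback(request):
    for w in workers:
        try:
            return ray.get(w.handle.remote(request))
        except RayActorError:
            continue  # actor dead or RESTARTING: fail fast and try another worker
    raise RuntimeError("no worker available")
```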

TODO

- [x] Add test cases.
- [ ] Update docs.
- [x] API change review.
2022-05-14 10:48:47 +08:00
Clark Zinzow
761cfb9238
[Datasets] Add more example data. (#24795)
This PR adds more example data for ongoing feature guide work. In addition to adding the new datasets, this also puts all example data under examples/data in order to separate it from the example code.
2022-05-13 15:07:49 -07:00
Chen Shen
2be45fed5e
Revert "[dataset] Use polars for sorting (#24523)" (#24781)
This reverts commit c62e00e.

See if reverting this resolves the linux://python/ray/tests:test_actor_advanced failure.
2022-05-13 12:09:12 -07:00
Jian Xiao
030b99b544
Add a classic yet small-sized ML dataset for demo/documentation/testing (#24592)
To facilitate easy demos, documentation, and testing with realistic, small-sized yet ML-familiar data. Shipping it alongside the code as a source file makes it self-contained, i.e. after users `pip install` Ray, they are all set to run it.

IRIS is a great fit: super classic ML dataset, simple schema, only 150 rows.
2022-05-13 10:25:44 -07:00
Archit Kulkarni
b0f5073b31
[runtime env] Use asyncio lock to prevent concurrent virtualenv creation (#24564) 2022-05-13 10:59:32 -05:00
Jian Xiao
f02a469d36
Drop python 3.6 from Windows build (#24756)
Fix the wheel build failure
2022-05-13 08:50:10 -07:00
Kai Fricke
06ef672699
[ci/docs] Fix broken linkcheck URL (#24777)
The hyperband blog post URL is broken; link to another blog post instead.
2022-05-13 15:58:36 +01:00
Chen Shen
30f370bf1f
Revert "[core] Move reconnection to RPC layer for GCS client. (#24330)" (#24762)
This reverts commit c427bc54e7.
2022-05-13 00:07:21 -07:00
Qing Wang
2627c7b5bc
[Core] Use async post instead of PostBlocking for concurrency group executor. (#24293)
Aiming to:
1. Address the bug with concurrency groups (see #19593).
2. Improve the stability of Ray call latency in online applications.

we propose using an async post instead of `PostBlocking` in the thread pool.

Note that since we have already had back pressure in the caller side, I believe this change is safe to merge and it doesn't break any behavior.
2022-05-13 11:30:52 +08:00
Qing Wang
b7cc601024
[Ray Collective] Add prefixes for original key to isolate gloo info in different jobs and different groups. (#24290)
This PR uses the job ID and group name as the prefix for storing meta information, providing isolation between different jobs and different groups.

Before this PR, we couldn't use 2 groups in 1 Ray cluster, and we could not rerun a collective job once the previous one failed during initialization.
2022-05-13 10:06:16 +08:00
Stephanie Wang
c62e00ed6d
[dataset] Use polars for sorting (#24523)
Polars is significantly faster than the current pyarrow-based sort. This PR uses polars for the internal sort implementation if available. No API changes needed.

On my laptop, this makes sorting 1GB about 2x faster:

without polars

$ python release/nightly_tests/dataset/sort.py --partition-size=1e7 --num-partitions=100
Dataset size: 100 partitions, 0.01GB partition size, 1.0GB total
Finished in 50.23415923118591
...
Stage 2 sort: executed in 38.59s

        Substage 0 sort_map: 100/100 blocks executed
        * Remote wall time: 864.21ms min, 1.94s max, 1.4s mean, 140.39s total
        * Remote cpu time: 634.07ms min, 825.47ms max, 719.87ms mean, 71.99s total
        * Output num rows: 1250000 min, 1250000 max, 1250000 mean, 125000000 total
        * Output size bytes: 10000000 min, 10000000 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

        Substage 1 sort_reduce: 100/100 blocks executed
        * Remote wall time: 125.66ms min, 2.3s max, 1.09s mean, 109.26s total
        * Remote cpu time: 96.17ms min, 1.34s max, 725.43ms mean, 72.54s total
        * Output num rows: 178073 min, 2313038 max, 1250000 mean, 125000000 total
        * Output size bytes: 1446844 min, 18793434 max, 10156250 mean, 1015625046 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

with polars

$ python release/nightly_tests/dataset/sort.py --partition-size=1e7 --num-partitions=100
Dataset size: 100 partitions, 0.01GB partition size, 1.0GB total
Finished in 24.097432136535645
...
Stage 2 sort: executed in 14.02s

        Substage 0 sort_map: 100/100 blocks executed
        * Remote wall time: 165.15ms min, 595.46ms max, 398.01ms mean, 39.8s total
        * Remote cpu time: 349.75ms min, 423.81ms max, 383.29ms mean, 38.33s total
        * Output num rows: 1250000 min, 1250000 max, 1250000 mean, 125000000 total
        * Output size bytes: 10000000 min, 10000000 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

        Substage 1 sort_reduce: 100/100 blocks executed
        * Remote wall time: 21.21ms min, 472.34ms max, 232.1ms mean, 23.21s total
        * Remote cpu time: 29.81ms min, 460.67ms max, 238.1ms mean, 23.81s total
        * Output num rows: 114079 min, 2591410 max, 1250000 mean, 125000000 total
        * Output size bytes: 912632 min, 20731280 max, 10000000 mean, 1000000000 total
        * Tasks per node: 100 min, 100 max, 100 mean; 1 nodes used

Related issue number

Closes #23612.
2022-05-12 18:35:50 -07:00
Jiajun Yao
8f36e32438
Make sure ray.init() works after AutoscalingCluster.start() (#24613)
Some tests relying on AutoscalingCluster are flaky because ray.init() after AutoscalingCluster.start() is not guaranteed to work; sometimes it cannot find any running Ray instance.
2022-05-12 17:22:07 -07:00
Jian Xiao
c9f31af27f
[CI] Fix Windows wheel build (#24748) 2022-05-12 16:23:50 -07:00