Commit graph

6789 commits

Author SHA1 Message Date
Eric Liang
55d039af32
Annotate datasources and add API annotation check script (#24999)
Why are these changes needed?
Add API stability annotations for datasource classes, and add a linter to check all data classes have appropriate annotations.
2022-05-21 15:05:07 -07:00
Kai Fricke
d57ba750f5
[docs/air] Move upload example to docs (#25022) 2022-05-21 12:16:33 -07:00
Yi Cheng
e3f854e34d
[flakey] Disable redis tests for test_plugin_timeout shortly. (#25045)
This test is not running well in Redis mode. Given that the other tests are OK, I'd like to disable only this one instead of reverting the whole commit, to make sure the other tests don't regress.

`linux://python/ray/tests:test_runtime_env_plugin::test_plugin_timeout`
2022-05-20 17:31:46 -07:00
ZhuSenlin
bed660b085
[Core] Lazy subscribe to actor's state (#24600)
Currently, the Actor Manager subscribes to actors' states in eager mode: when worker A passes a List<ActorHandle> as an input parameter to another worker B, worker B immediately subscribes to the status of every actor in this list during construction, even if worker B has not yet used these actors.

Assuming that a graph job has 1000 actors, and each actor holds a list of all actors in the graph, then this job has nearly 1,000,000 subscription relationships. When the job goes offline and the 1000 actor processes are killed, the redis-server instantly receives disconnect events from the 1000 actor processes; each event triggers 1000 unsubscribe operations in freeClient, causing the redis-server to get stuck.

We suggest changing this eager mode to lazy mode, initiating the subscription only in `SubmitActorTask`, which eliminates many unnecessary subscription relationships.
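A minimal Python sketch of the two modes (the real logic lives in the C++ ActorManager; all names here are illustrative only):
```
class ActorManager:
    def __init__(self):
        self._subscribed = set()

    def _subscribe(self, actor_id):
        print(f"subscribing to state of actor {actor_id}")

    def add_actor_handle(self, actor_id):
        # Eager mode (old behavior): subscribe as soon as the handle is
        # deserialized, even if it is never used.
        # self._subscribe(actor_id)
        pass

    def submit_actor_task(self, actor_id, task):
        # Lazy mode (this PR): subscribe only on first use.
        if actor_id not in self._subscribed:
            self._subscribe(actor_id)
            self._subscribed.add(actor_id)
```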

The microbenchmark (left is this PR, right is master branch):
![image](https://user-images.githubusercontent.com/2016670/168011321-b64b06a2-20bd-4b35-aa69-0b84e7f4c12e.png)
2022-05-20 15:35:48 -07:00
Chen Shen
8960afa69c
[Core][Python 3.10] fix get_module in the interactive mode. #25032
Python 3.10 fixed a bug where an interactively defined class was tagged with the wrong type during inspection; inspection now throws OSError instead. See python/cpython#27171 for details.

We need to handle this case properly, otherwise Ray actor definitions will throw in interactive mode. Please refer to #25026 for a repro.
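For illustration, the failure mode in isolation (a hypothetical helper, not Ray's actual fix, which lives in its serialization path):
```
import inspect

def try_get_source(cls):
    # On Python 3.10+, inspecting an interactively defined class raises
    # OSError instead of returning a wrongly tagged result
    # (see python/cpython#27171), so the error must be caught.
    try:
        return inspect.getsource(cls)
    except (OSError, TypeError):
        return None
```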
2022-05-20 12:58:15 -07:00
Clark Zinzow
9ea5a8ec4b
Revert "Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets."" (#25031)
Fixes the check ingest utility to handle non-Pandas native batches.
2022-05-20 11:47:29 -07:00
Clark Zinzow
b52f225a4e
[Datasets] Skip flaky pipelining memory release test (#25009)
This pipelining memory release test is flaky; it was skipped in this Polars PR, which was then reverted.
2022-05-20 11:14:22 -07:00
mwtian
916c6796da
Revert "[core] Fix bugs in data locality (#24698)" (#25035)
This reverts commit eaec96d175.
2022-05-20 10:57:25 -07:00
Antoni Baum
a357b7cf95
[tune] File lock for syncing (#24978)
Adds file locking to prevent parallel file system operations in Tune/AIR syncing functions.
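A sketch of the pattern (using the third-party `filelock` package; Tune's actual lock paths and call sites may differ):
```
from filelock import FileLock

def locked_sync(local_dir, sync_fn):
    # Serialize concurrent file system operations on the same directory
    # by taking a lock file next to it before syncing.
    with FileLock(local_dir.rstrip("/") + ".lock"):
        sync_fn(local_dir)
```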

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-05-20 17:11:14 +01:00
Kai Fricke
f215c8c988
[tune] Move wandb logging directory into trial logdir (#25020)
Weights & Biases creates a wandb directory to collect intermediate logs and artifacts before uploading them. This directory should live in the respective trial directories. This also means we can re-enable auto resuming.
2022-05-20 17:02:42 +01:00
Kai Fricke
fbfb134b8c
Revert "[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets. (#24812)" (#25017)
This reverts commit 841f7c81ff.

Reverts #24812

Broke e.g. ML tests: https://buildkite.com/ray-project/ray-builders-branch/builds/7667#55e7473e-f6a8-4d72-a875-cd68acf8b0c4
2022-05-20 15:37:40 +01:00
Kai Fricke
e76efffec6
[air/docs] Move RL examples to docs (#24962)
Following #24959, this PR moves the RL examples (online/offline/serving) into the Ray ML docs. It also splits the online and offline parts.
2022-05-20 14:55:01 +01:00
Jim Thompson
a2c8fe2101
[tune] FIX: Failure in create_scheduler() with pb2 scheduler (#24897)
When `create_scheduler("pb2", ....)` is run, a `TuneError` exception is raised. See the referenced issue below for details.

In addition to the fix, this PR introduces a new test (`ray/tune/tests/test_api.py::ShimCreationTest.testCreateAllSchedulers`) to confirm that `tune.create_scheduler()` will work with all documented schedulers.

Note: `testCreateAllSchedulers` is a superset of what is covered in `testCreateScheduler`. It may be reasonable to retire the latter test.
2022-05-20 12:47:38 +01:00
Guyang Song
99d25d4d4e
[Doc] Fix ray core doc (#25006) 2022-05-20 14:51:59 +08:00
Clark Zinzow
841f7c81ff
[Datasets] [Tensor Story - 1/2] Automatically provide tensor views to UDFs and infer tensor blocks for pure-tensor datasets. (#24812)
This PR makes several improvements to the Datasets' tensor story. See the issues for each item for more details.

- Automatically infer tensor blocks (single-column tables representing a single tensor) when returning NumPy ndarrays from map_batches(), map(), and flat_map().
- Automatically infer tensor columns when building tabular blocks in general.
- Fixes shuffling and sorting for tensor columns

This should improve the UX/efficiency of the following:

- Working with pure-tensor datasets in general.
- Mapping tensor UDFs over pure-tensor datasets, providing a better foundation for tensor-native preprocessing for end users and AIR.
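A small sketch of the resulting UX (`ray.data.range` and `map_batches` are existing APIs; the automatic tensor-block inference on the ndarray return is what this PR adds):
```
import numpy as np
import ray

ds = ray.data.range(8)
# Returning a NumPy ndarray from map_batches() now yields a tensor
# dataset automatically, with no manual tensor-column wrapping.
tensor_ds = ds.map_batches(lambda batch: np.ones((len(batch), 2, 2)))
```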
2022-05-19 22:40:04 -07:00
Yi Cheng
8ec558dcb9
[core] Reenable GCS test with redis as backend. (#23506)
Since Ray supports Redis as a storage backend, we should ensure the code path with Redis as storage is still covered e2e.

These tests haven't run for a while, since we switched to memory mode by default. This PR fixes that and makes them run with every commit.

In the future, if we support more and more storage backends, this should be revised to be more efficient and selective. But now I think the cost should be ok.

This PR is part of GCS HA testing-related work.
2022-05-19 21:46:55 -07:00
Jian Xiao
401db466bb
Revamp the Datasets API docstrings (#24949) 2022-05-19 20:26:39 -07:00
Guyang Song
eb2692cb32
[runtime env] runtime env inheritance refactor (#24538)
* [runtime env] runtime env inheritance refactor (#22244)

Runtime Environments are already GA in Ray 1.6.0. The latest doc is [here](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#runtime-environments). We already support an [inheritance](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#inheritance) behavior, as follows (copied from the doc):
- The runtime_env["env_vars"] field will be merged with the runtime_env["env_vars"] field of the parent. This allows for environment variables set in the parent’s runtime environment to be automatically propagated to the child, even if new environment variables are set in the child’s runtime environment.
- Every other field in the runtime_env will be overridden by the child, not merged. For example, if runtime_env["py_modules"] is specified, it will replace the runtime_env["py_modules"] field of the parent.

We think this runtime env merging logic is complex and confusing to users, because they can't know the final runtime env before the job is run.

This PR refactors and changes the behavior of Runtime Environments inheritance. Here is the new behavior:
- **If there is no runtime env option when we create actor, inherit the parent runtime env.**
- **Otherwise, use the optional runtime env directly and don't do the merging.**

Add a new API named `ray.runtime_env.get_current_runtime_env()` to get the parent runtime env, so you can modify this dict yourself, e.g.:
```Actor.options(runtime_env={**ray.runtime_env.get_current_runtime_env(), "X": "Y"})```
This new API can also be used in the Ray client.
2022-05-20 10:53:54 +08:00
SangBin Cho
d89c8aa9f9
[Core] Add more accurate worker exit (#24468)
This PR adds precise reason details for worker failures. All information is available either via:
- `ray list workers`
- exceptions from actor failures.

Here's an example when the actor is killed by a SIGKILL (e.g., OOM killer)
```
RayActorError: The actor died unexpectedly before finishing this task.
	class_name: G
	actor_id: e818d2f0521a334daf03540701000000
	pid: 61251
	namespace: 674a49b2-5b9b-4fcc-b6e1-5a1d4b9400d2
	ip: 127.0.0.1
The actor is dead because its worker process has died. Worker exit type: UNEXPECTED_SYSTEM_EXIT Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
```

## Design
Worker failures are reported by the raylet via 2 paths:
(1) When the core worker calls `Disconnect`.
(2) When the worker is unexpectedly killed and the socket is closed, the raylet reports the worker failure.

The PR ensures all worker failures are reported through `Disconnect`, now carrying more detailed information in its metadata.

## Exit types
Previously, the worker exit types were arbitrary and not correctly categorized. This PR reduces the number of worker exit types while including details for each exit type, so that users can easily figure out the root cause of worker crashes.

### Status quo
- SYSTEM ERROR EXIT
    - Failure from the connection (core worker dead)
    - Unexpected exception or exit with exit_code !=0 on core worker
    - Direct call failure
- INTENDED EXIT
    - Shutdown driver
    - Exit_actor
    - exit(0)
    - Actor kill request
    - Task cancel request
- UNUSED_RESOURCE_REMOVED
    - Upon GCS restart, it kills bundles that are not registered to GCS to synchronize the state
- PG_REMOVED
    - When pg is removed, all workers fate share
- CREATION_TASK (INIT ERROR)
    - When actor init has an error
- IDLE
    - When the worker is idle and the number of workers > the soft limit (by default, the number of CPUs)
- NODE DIED
    - Can only be detected when the owner's node is dead (needs improvement)

### New proposal
Remove unnecessary states and add a “details” field. We can categorize failures into 4 types:

- UNEXPECTED_SYSTEM_ERROR_EXIT
    - When the worker crashes unexpectedly
    - Failure from the connection (core worker dead)
    - Unexpected exception or exit with exit_code != 0 on the core worker
    - Node died
    - Direct call failure
- INTENDED_USER_EXIT
    - When the worker is requested to be killed by the user. No workflow required; just correctly store the state.
    - Shutdown driver
    - Exit_actor
    - exit(0)
    - Actor kill request
    - Task cancel request
- INTENDED_SYSTEM_EXIT
    - When the worker is requested to be killed by the system (without an explicit user request)
    - Unused resource removed
    - PG removed
    - Idle
- ACTOR_INIT_FAILURE (CREATION_TASK_FAILED)
    - When actor init fails, the process fate-shares with the actor.
    - Actor init failed

## Limitation (Follow up)
Worker failures are not reported under the following circumstances:
- The worker fails before it registers its information with the GCS (this usually stems from a critical system bug and is extremely uncommon).
- The node fails. In this case, we should track the Node ID -> Worker ID mapping in the GCS and record worker metadata when the node fails.

I will create issues to track these problems.
2022-05-19 19:48:52 -07:00
Yi Cheng
0212f5e095
[flakey] Deflakey test_gcs_fault_tolerance.py (#24981)
There is a race condition where the failure message is sent but resubscription hasn't been established. In this case, we'll lose this message.

This PR fixes the test case only by making sure resubscription is done before killing the worker.

We are not going to fix this issue for the first version of GCS HA because:
1. Right now, only failed workers are reported, and this information is used to eagerly kill unnecessary workers. From a correctness standpoint, it's OK.
2. Worker/task support is not P0 for GCS HA (Ray Serve). Actors don't have this issue, since the subscription for an actor is by actor ID and the raylet fetches the actor status when resubscribing.

What is not yet working:
- We won't kill workers which are no longer useful. So in a very extreme case (a hung worker), there will be a resource leak.
2022-05-19 18:09:39 -07:00
Stephanie Wang
eaec96d175
[core] Fix bugs in data locality (#24698)
This fixes two bugs in data locality:

- When a dependent task is already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so it would appear as if the dependency was not available anywhere.
- The locality policy did not take into account spilled locations.

Added C++ unit tests and Python tests for the above.
Related issue number

Fixes #24267.
2022-05-19 17:36:11 -07:00
Eric Liang
995309f9a3
[docs] Add AIR data ingest docs (part 1-- bulk loading only) (#24799) 2022-05-19 14:25:47 -07:00
Edward Oakes
737d16328c
[serve] Use soft constraint for placing controller on the head node (#24934)
Previously, we pinned the controller to the head node because the head node was the SPOF. With the GCS HA work, that will no longer be the case, so the controller should be able to run on any node. We still prefer the head node in non-HA cases.
2022-05-19 16:00:27 -05:00
mwtian
e142bb3874
[Pubsub] raise exception when publishing fails (#24951)
It seems `_InactiveRpcError` is not saved as the last exception. Raise an explicit error when publishing fails after retries.

For log and error publishing, dropping messages should be tolerable, so log the exception instead.
2022-05-19 12:18:06 -07:00
Amog Kamsetty
f7d75c7a07
[AIR] Support array output from Torch model (#24902)
The previous implementation of TorchPredictor failed when the model outputted an array for each sample (for example, outputting logits). This is because Pandas DataFrames cannot directly hold numpy arrays of more than 1 element per column. We have to convert the outermost numpy array returned by the model to a list so that it can be stored in a Pandas DataFrame.

The newly added test fails under the old implementation.
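The conversion in isolation (plain pandas/NumPy, independent of Ray):
```
import numpy as np
import pandas as pd

logits = np.random.rand(4, 3)  # 4 samples, 3 classes

# A 2-D array cannot be placed into a single DataFrame column directly,
# but a list of 1-D arrays can: one ndarray per cell.
df = pd.DataFrame({"predictions": list(logits)})
```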
2022-05-19 12:00:19 -07:00
Jiajun Yao
c3045e5295
Better message for connection to the head node when it's listening on localhost. (#24945)
For Mac and Windows, we default to starting the head node with 127.0.0.1 to avoid security pop-ups, assuming people won't use cluster mode much. However, when people do want to create a Mac/Windows cluster, our connection message is confusing, since we cannot connect to a head node that's listening on 127.0.0.1. This PR tells people what to do in this case (e.g., use node-ip-address).
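For reference, a sketch of the suggested workaround (the IP address is hypothetical; `--node-ip-address` and `--address` are existing `ray start` options):
```
ray start --head --node-ip-address=192.168.1.10
# on worker machines:
ray start --address=192.168.1.10:6379
```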
2022-05-19 09:46:34 -07:00
Antoni Baum
a25235a2c2
[tune] Fast path for sync_dir_between_nodes (#24958)
This PR adds a fast path for `sync_dir_between_nodes` that gets triggered if both source IP and target IP are the same. It uses simple `shutil` operations instead of packing and unpacking to improve performance.
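A minimal sketch of the branching (function and argument names hypothetical):
```
import shutil

def _pack_and_send(source_path, target_ip, target_path):
    raise NotImplementedError("slow path: pack, transfer over the network, unpack")

def sync_dir(source_ip, target_ip, source_path, target_path):
    if source_ip == target_ip:
        # Fast path: same node, so a plain file system copy suffices.
        shutil.copytree(source_path, target_path, dirs_exist_ok=True)
    else:
        _pack_and_send(source_path, target_ip, target_path)
```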
2022-05-19 15:38:55 +01:00
Kai Fricke
9a8c8f4889
[air/docs] Move some examples from ml/examples to docs (#24959)
This moves the basic LightGBM, Sklearn, and XGBoost examples from the examples/ folder to the docs. We keep a symlink in the examples folder.
2022-05-19 14:01:49 +01:00
mwtian
502c3e132d
Revert "[Core] allow using grpcio > 1.44.0 (#23722)" (#24935)
This reverts commit b02029b29f.
2022-05-18 18:16:39 -07:00
Balaji Veeramani
ebe2929d4c
[Datasets] Add example of using map_batches to filter (#24202)
The documentation says 

> Consider using .map_batches() for better performance (you can implement filter by dropping records).

but there aren't any examples of how to do so.
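One way such an example could look (a sketch; the `value` column name produced by `range()` is an assumption here):
```
import ray

ds = ray.data.range(1000)

# Implement filter by dropping records inside the batch UDF.
even = ds.map_batches(lambda df: df[df["value"] % 2 == 0],
                      batch_format="pandas")
```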
2022-05-18 16:28:15 -07:00
mwtian
b17cbd825f
[Core] fix prometheus export error and move GCS client heartbeat to a dedicated thread. (#24867)
I have seen errors where the Prometheus view data dictionary is changed while it is being iterated over, so make a copy (which is atomic) before iterating.
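The copy-before-iterate pattern in isolation (`view_data` stands in for the shared dictionary):
```
view_data = {"metric_a": 1, "metric_b": 2}

# dict(view_data) snapshots the dict atomically under the GIL, so iterating
# the copy is safe even if another thread mutates view_data concurrently.
for key, value in dict(view_data).items():
    print(key, value)
```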

Also, use a separate thread for GCS client heartbeat RPC. This avoids the issue of missing heartbeat when raylet is too busy.
2022-05-18 15:04:04 -07:00
SangBin Cho
2fa7a00588
[State Observability] Support output formatting (#24847)
This PR supports various output formats. By default, we support the YAML format, but this can change depending on the UX research we will conduct in the future. Example output:

```
1cb0f4b51a5799e0360a66db01000000:
  actor_id: 1cb0f4b51a5799e0360a66db01000000
  class_name: A
  state: ALIVE
f90ba34fa27f79a808b4b5aa01000000:
  actor_id: f90ba34fa27f79a808b4b5aa01000000
  class_name: A
  state: ALIVE
```
Table format is not supported yet. We will support it once we enhance the API output (I will create an initial API review soon).
2022-05-18 15:00:40 -07:00
Dmitri Gekhtman
448f5273bc
[KubeRay][Autoscaler] Enable launching nodes in the main thread. (#24718)
This PR adds a flag to enable creating nodes in the main thread in the autoscaler. The flag is turned on for the KubeRay node provider. KubeRay already uses a flag to disable node updater threads -- with the changes from this PR, the autoscaler becomes single-threaded when launching on KubeRay.
2022-05-18 14:37:19 -07:00
Jiajun Yao
5128029644
Fix pull manager deadlock due to object reconstruction (#24791)
When an object is under reconstruction, the pull manager keeps the bundle request active with no timeout, which may block the next bundle request that's needed for the object reconstruction. As a result, we have a deadlock.

For example, task 1 takes object A as an argument and returns object B, and task 2 takes object B as an argument. When we run task 2, the pull manager adds B to the queue, and then B is lost. In this case, task 1 is re-submitted and A is added to the pull manager queue after B (assuming both tasks are scheduled to the same node). Due to limited available object store memory, A cannot be activated until B is pulled, but B cannot be pulled until A is pulled and B is reconstructed.

The solution: if an active pull request has pending-creation objects, the pull manager deactivates it until creation is done. This frees the object store memory occupied by the currently active pull request, so that subsequent requests can proceed and potentially unblock the object creation.
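A hypothetical sketch of that rule (the real logic lives in the C++ pull manager; attribute names are illustrative):
```
def reconcile(active_requests):
    for req in list(active_requests):
        if any(obj.pending_creation for obj in req.objects):
            # Deactivate to free pinned object store memory so that other
            # bundle requests (e.g., the reconstruction's inputs) can proceed.
            req.deactivate()
            active_requests.remove(req)
```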
2022-05-18 13:43:23 -07:00
SangBin Cho
f228245520
[Placement group] Update the old placement group API usage to the new scheduling_strategy based API (#24544)
Documentation should use the new API, not the old one that will be deprecated.
2022-05-18 09:41:51 -07:00
Qing Wang
eb29895dbb
[Core] Remove multiple core workers in one process 1/n. (#24147)
This is the 1st PR to remove the code path of multiple core workers in one process. It aims to remove the flags and APIs related to `num_workers`.
After this PR is checked in, we no longer need to consider multiple core workers.

The following PRs will cover the deeper logic refactor, like eliminating the gap between the core worker and the core worker process, and removing the logic related to multiple workers from the worker pool, GCS, etc.

**BREAKING CHANGE**
This PR removes these APIs:
- Ray.wrapRunnable();
- Ray.wrapCallable();
- Ray.setAsyncContext();
- Ray.getAsyncContext();

And the following APIs are no longer allowed to be invoked from a user-created thread in local mode:
- Ray.getRuntimeContext().getCurrentActorId();
- Ray.getRuntimeContext().getCurrentTaskId()

Note that this PR shouldn't be merged to 1.x.
2022-05-19 00:36:22 +08:00
Antoni Baum
1d5e6d908d
[AIR] HuggingFace Text Classification example (#24402) 2022-05-18 09:35:12 -07:00
Kai Fricke
c3d54de9ee
[ci] Fix runtime env tests (#24912)
This fixes failing tests from #24894
2022-05-18 11:51:07 +01:00
Amog Kamsetty
d73572dde3
[AIR] Add trainer_resources as a valid scaling config key for all Trainers
trainer_resources should be a valid scaling config key for all Trainers.
2022-05-18 11:15:53 +01:00
Simon Mo
9b2086c726
[Serve] Alias ray.serve.dag.InputNode (#24630) 2022-05-17 22:35:51 -07:00
Simon Mo
c3ac6fcf3f
Bump Ray Version from 2.0.0.dev0 to 3.0.0.dev0 (#24894) 2022-05-17 19:31:05 -07:00
Clark Zinzow
68d4dd3a8b
[Datasets] Add explicit resource allocation option via a top-level scheduling strategy (#24438)
Instead of letting Datasets implicitly use cluster resources in the margins of explicit allocations of other libraries, such as Tune, Datasets should provide an option for explicitly allocating resources for a Datasets workload for users that want to box Datasets in. This PR adds such an explicit resource allocation option, via exposing a top-level scheduling strategy on the DatasetContext with which a placement group can be given.
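A sketch of using the option (`DatasetContext.scheduling_strategy` is the field this PR exposes; the placement group shape is illustrative):
```
import ray
from ray.data.context import DatasetContext
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

pg = ray.util.placement_group([{"CPU": 2}])
ray.get(pg.ready())

# Box all Datasets tasks into the placement group.
ctx = DatasetContext.get_current()
ctx.scheduling_strategy = PlacementGroupSchedulingStrategy(pg)
```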
2022-05-17 15:01:15 -07:00
Jian Xiao
f86d5461c6
[Datasets] [CI] fix CI of dataset test (#24883)
The CI test was broken by f61caa3. This PR fixes it.
2022-05-17 14:00:01 -07:00
SangBin Cho
6d978ab10e
[Core/Metrics] Accurately record used memory. (#24648)
Why are these changes needed?
`used_memory_` includes the fallback allocation, so we should subtract it here to calculate the exact available memory.
2022-05-17 13:21:24 -07:00
Clark Zinzow
b1acf69a8a
Map progress bar title; pretty repr for rows. (#24672) 2022-05-17 12:24:36 -07:00
Kai Fricke
01af9c8c77
[tune] Fix has_resources_for_trial, leading to trials stuck in PENDING mode (#24878)
Tune resource bookkeeping was broken. Specifically, this is what happened in the repro provided in #24259:

- Only one PG can be scheduled at a time
- We staged resources for trial 1
- We run trial 1
- We stage resources for trial 2
- We pause trial 1, caching the placement group
- This removes the staged PG for trial 2 (as we return the PG for trial 1)
- However, now we `reconcile_placement_groups`, which re-stages a PG
- Both trial 1 and trial 2 are now missing from `RayTrialExecutor._staged_trials`
- A staging future is still there because of the reconciliation

This PR fixes this problem in two ways: `has_resources_for_trial` will a) also check for staging futures for the specific trial, and b) also consider cached placement groups.

Generally, resource management in Tune is convoluted and hard to debug, and several classes share bookkeeping responsibilities (runner, executor, pg manager). We should refactor this.
2022-05-17 18:21:45 +01:00
Robert
f61caa349c
Implement random_sample() (#24492) 2022-05-17 09:47:53 -07:00
Amog Kamsetty
fb832c3b1f
[AIR] Don't use df.transform for BatchMapper (#24872)
`df.transform` has undefined behavior when the passed-in function mutates the dataframe, as mentioned in the pandas docs. This is because, I believe, the implementation iterates through slices of the dataframe and passes these slices to the provided function. This "gotcha" exposes an implementation detail to users who are using `BatchMapper`.

It's pretty common to have preprocessors that mutate the dataframe; for example, our own test does the following:
```
    def add_and_modify_udf(df: "pd.DataFrame"):
        df["new_col"] = df["old_column"] + 1
        df["to_be_modified"] *= 2
        return df
```

So instead of using `df.transform`, we do `self.fn(df)`. As the `df` is the output of Ray Datasets' `iter_batches`, the provided function can safely mutate the dataset.
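The difference, shown outside of Ray with plain pandas (using the UDF above):
```
import pandas as pd

def add_and_modify_udf(df):
    df["new_col"] = df["old_column"] + 1
    df["to_be_modified"] *= 2
    return df

df = pd.DataFrame({"old_column": [1, 2], "to_be_modified": [3, 4]})

out = add_and_modify_udf(df)      # direct call: safe, mutates the batch
# df.transform(add_and_modify_udf)  # undefined behavior for mutating UDFs
```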
2022-05-17 16:49:17 +01:00
xwjiang2010
5c8361a92e
[air] Update to use more verbose default config for trainers. (#24850)
Internal user feedback shows that more detailed logging is preferred:
https://anyscaleteam.slack.com/archives/C030DEV6QLU/p1652481961472729
2022-05-17 16:21:11 +01:00
shrekris-anyscale
3a2bd16eca
[Serve] Add deployment graph import_path and runtime_env to ServeApplicationSchema (#24814)
A newly planned version of the Serve schema (used in the REST API and CLI) requires the user to pass in their deployment graph's `import_path` and, optionally, a runtime_env containing that graph. This new schema can then pick up any `init_args` and `init_kwargs` values directly from the graph, instead of requiring them to be serialized and passed explicitly into the REST request.

This change:
* Adds the `import_path` and `runtime_env` fields to the `ServeApplicationSchema`.
* Updates or disables outdated unit tests.
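For illustration, a hypothetical request body using the new fields (all values made up; only the `import_path` and `runtime_env` field names come from this PR):
```
app_config = {
    "import_path": "my_project.graph:deployment_graph",
    "runtime_env": {"working_dir": "https://example.com/my_project.zip"},
}
```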

Follow-up changes should:
* Update the status schemas (i.e. `DeploymentStatusSchema` and `ServeApplicationStatusSchema`).
* Remove deployment-level `import_path`s.
* Process the new `import_path` and `runtime_env` fields instead of silently ignoring them.
    * Remove `init_args` and `init_kwargs` from `DeploymentSchema` afterwards.

Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
2022-05-17 09:51:06 -05:00