Commit graph

11633 commits

Author SHA1 Message Date
Qing Wang
96924ecfc0
[Java] Add javax.activation dependency for Java worker. (#22538)
This PR adds `javax.activation` as a Java worker dependency to address the issue that some users need `JAXB` on JDK 9+.
2022-02-23 16:24:47 +08:00
Lingxuan Zuo
46cb246d75
[Symbols] Export OpenCensus symbols for external streaming use (#22526)
OpenCensus symbols had been exported in the Linux version of libcore_worker_library_java.so but were deleted from ray_exported_symbols.lds, which made the streaming macOS test case fail.
This PR adds the export record back and renames the *ray*streaming* entries to *ray*internal*, which is a unified entry point to Ray C++.

Co-authored-by: 林濯 <lingxuan.zlx@antgroup.com>
2022-02-23 16:24:16 +08:00
Xuehai Pan
018ebbf4cb
[RLlib] Issue #21671: Handle callbacks and model metrics for TorchPolicy while using multi-GPU optimizers (#21697) 2022-02-23 08:30:38 +01:00
Jiajun Yao
82443aec63
Remove DEFAULT_SCHEDULING_STRATEGY and SPREAD_SCHEDULING_STRATEGY (#22558) 2022-02-22 21:34:21 -08:00
Stephanie Wang
abf2a70a29
[core] Add task and object reconstruction status to ray memory (#22317)
Improve observability for general objects and lineage reconstruction by adding a "Status" field to `ray memory`. The value of the field can be:
```
  // The task is waiting for its dependencies to be created.
  WAITING_FOR_DEPENDENCIES = 1;
  // All dependencies have been created and the task is scheduled to execute.
  SCHEDULED = 2;
  // The task finished successfully.
  FINISHED = 3;
```

In addition, tasks that failed or that needed to be re-executed due to lineage reconstruction will have a field listing the attempt number. Example output:
```
IP Address    | PID      | Type    | Call Site | Status    | Size     | Reference Type | Object Ref
192.168.4.22  | 279475   | Driver  | (task call) ... | Attempt #2: FINISHED | 10000254.0 B | LOCAL_REFERENCE | c2668a65bda616c1ffffffffffffffffffffffff0100000001000000
```
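A minimal sketch of producing an entry like the one above, assuming a local Ray cluster and the `ray memory` CLI (the task body and payload size are illustrative):
```python
import ray

ray.init()

@ray.remote
def make_blob():
    # A payload large enough to show up clearly in `ray memory`.
    return b"x" * 10_000_000

# Holding on to the ObjectRef keeps a LOCAL_REFERENCE alive in the driver.
ref = make_blob.remote()
ray.get(ref)
# Running `ray memory` from a shell at this point should list the reference,
# with the new Status column showing FINISHED for the task that created it.
```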
2022-02-22 21:26:21 -08:00
Eric Liang
9261428004
Drop level of spammy log message (#22576) 2022-02-22 21:23:34 -08:00
shrekris-anyscale
40fa56f40c
[serve] Add JSON schemas for REST API (#22547) 2022-02-22 21:36:42 -06:00
mwtian
9a157dfe82
[GCS-Ray] update doc and error message for GCS-Ray (#22528)
Update documentation to reflect that Ray no longer starts Redis by default.
2022-02-22 17:56:30 -08:00
Eric Liang
12dcec8b38
Fix [Datasets] iter_epochs not iterating using native format 2022-02-22 15:47:16 -08:00
SangBin Cho
36a31cb6fd
[Usage Stats] Implement usage stats report "Turned off by default". (#22249)
This is the second PR to implement usage stats on Ray. Please refer to the file usage_lib.py for more details.

The full specification is here https://docs.google.com/document/d/1ZT-l9YbGHh-iWRUC91jS-ssQ5Qe2UQ43Lsoc1edCalc/edit#heading=h.17dss3b9evbj.

This adds a dashboard module to enable usage stats. **Usage stats report is turned off by default** after this PR. The report (enablement, report period, and URL; note that the URL is strictly for testing) can be controlled via environment variables.

## NOTE
This requires us to add `requests` to the default dependencies. `requests` should be okay to include because
1. It is extremely lightweight and implemented only with built-in libs.
2. It is really stable. The project basically describes itself as "deprecated", meaning no new features will be added.

cc @edoakes @richardliaw for the approval

For the HTTP request, I also considered httpx as an alternative, but it is not as lightweight as `requests`, so I decided to implement async requests using a thread pool.
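As a rough sketch of the kind of control described above, assuming the environment variables are named as below (the exact names are defined in usage_lib.py and are an assumption here):
```python
import os

# Assumed variable names; see usage_lib.py for the authoritative ones.
os.environ["RAY_USAGE_STATS_ENABLED"] = "1"               # opt in (off by default)
os.environ["RAY_USAGE_STATS_REPORT_INTERVAL_S"] = "3600"  # report period, seconds

import ray

ray.init()  # usage stats reporting would now be enabled for this session
```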
2022-02-22 15:32:02 -08:00
Antoni Baum
a1230b9291
[tune] Note TPESampler performance issues in docs (#22545) 2022-02-22 15:29:12 -08:00
Edward Oakes
58e5f0140d
[jobs] Rename JobData -> JobInfo (#22499)
`JobData` could be confused with the actual output data of a job; `JobInfo` makes it clearer that this is status information + metadata.
2022-02-22 16:18:16 -06:00
Yi Cheng
e3051ebf67
[ci] Fix grpcio 1.44 breaking test_output (#22494)
This PR limits grpcio to <= 1.42. This will fix test_output.
2022-02-22 13:59:25 -08:00
Dmitri Gekhtman
a402e956a4
[KubeRay] Format autoscaling config based on RayCluster CR (#22348)
Closes #21655. At the start of each autoscaler iteration, we read the Ray Cluster CR from K8s and use it to extract the autoscaling config.
2022-02-22 11:06:37 -08:00
Antoni Baum
4a15c6f8f3
[tune] Preparation for deadline schedulers (#22006) 2022-02-22 11:05:28 -08:00
Matti Picus
dfe4706d73
re-remove unused opencv-python-headless (#22470)
PR #16929 removed opencv-python-headless.
PR #17158 added it back but did not use it. This was noted by [a reviewer](https://github.com/ray-project/ray/pull/17158#issuecomment-982976429) since it breaks Python 3.9 (no wheel is available for installation).
2022-02-22 09:45:30 -08:00
Gagandeep Singh
4de1886ad5
Unskipped tests in test_object_spilling, test_object_spilling_2, test_get_locations (#22208)
This PR mostly enables cluster tests in the above-mentioned files; some non-cluster tests are also enabled. All of them pass on my machine without issues.
2022-02-22 09:41:26 -08:00
Sven Mika
6522935291
[RLlib] Slate-Q tf implementation and tests/benchmarks. (#22389) 2022-02-22 09:36:44 +01:00
Jun Gong
2b6a0c71d7
[RLlib] Add a callback for when trainer finishes initialization: on_trainer_init. (#22493) 2022-02-22 08:18:32 +01:00
Steven Morad
d4571741aa
[RLlib] seq_lens should always be torch tensors. (#22398) 2022-02-22 08:15:43 +01:00
JYX
49d7ba3738
[RLlib] Fix typo in vector_env docstring (#22534) 2022-02-22 08:13:50 +01:00
Daniel
308ccfe25c
[RLlib] DD-PPO move train_batch_size==-1 check to __init__ (#22521) 2022-02-21 11:44:12 +01:00
Guyang Song
902243fb03
[runtime env] support raylet sharing fate with agent (#22382)
- Remove the agent restart feature. 
- The raylet shares fate with the agent to make the failover logic simpler.
Refer to issue https://github.com/ray-project/ray/issues/21695#issuecomment-1032161528
2022-02-21 18:16:21 +08:00
Guyang Song
5783cdb254
[runtime env] runtime env inheritance refactor (#22244)
Runtime Environments have been GA since Ray 1.6.0. The latest doc is [here](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#runtime-environments). We already support an [inheritance](https://docs.ray.io/en/master/ray-core/handling-dependencies.html#inheritance) behavior as follows (copied from the doc):
- The runtime_env["env_vars"] field will be merged with the runtime_env["env_vars"] field of the parent. This allows for environment variables set in the parent’s runtime environment to be automatically propagated to the child, even if new environment variables are set in the child’s runtime environment.
- Every other field in the runtime_env will be overridden by the child, not merged. For example, if runtime_env["py_modules"] is specified, it will replace the runtime_env["py_modules"] field of the parent.

We think this runtime env merging logic is too complex and confusing to users, because they can't know the final runtime env before their jobs are run.

This PR refactors and changes the behavior of Runtime Environment inheritance. Here is the new behavior:
- **If there is no runtime env option when we create an actor, inherit the parent runtime env.**
- **Otherwise, use the given runtime env directly and don't do any merging.**

Add a new API, `ray.runtime_env.get_current_runtime_env()`, to get the parent runtime env so that you can modify the dict yourself, e.g.:
```Actor.options(runtime_env={**ray.runtime_env.get_current_runtime_env(), "X": "Y"})```
This new API can also be used in Ray Client.
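A minimal sketch of the new inheritance behavior, using the `get_current_runtime_env()` API named above (the actor and env-var values are illustrative):
```python
import os
import ray

ray.init(runtime_env={"env_vars": {"A": "1"}})

@ray.remote
class Worker:
    def get_env(self, name):
        return os.environ.get(name)

# No runtime_env option: the actor inherits the parent (job) runtime env.
inheriting = Worker.remote()

# Explicit runtime_env option: used as-is, with no merging against the parent.
overriding = Worker.options(runtime_env={"env_vars": {"B": "2"}}).remote()

# To merge manually, fetch the parent env and update the dict yourself.
runtime_env = ray.runtime_env.get_current_runtime_env()
runtime_env["env_vars"] = {**runtime_env.get("env_vars", {}), "X": "Y"}
merged = Worker.options(runtime_env=runtime_env).remote()
```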
2022-02-21 18:13:22 +08:00
Gagandeep Singh
3cb85859cd
Unskipped tests for Windows (#21702)
This set of tests passes without issues on Windows for me, so unskipping them here.
2022-02-20 11:48:59 -08:00
Max Pumperla
29d94a2211
[docs] sphinx gallery removal, migrate to ipynb (#22467) 2022-02-19 01:19:07 -08:00
Clark Zinzow
76e8247d4d
[Datasets] Force local metadata resolution when unserializable Partitioning object provided. (#22477) 2022-02-18 21:21:34 -08:00
Amog Kamsetty
04feea4afe
[rllib] Upper bound gym version (#22510)
gym released 0.22 today, which breaks a lot of the RLlib tests and examples. This temporarily pins the gym version for now.
2022-02-18 17:39:22 -08:00
Jiajun Yao
6a17653ba7
API stability annotations for ray commands (#22420)
Annotate ray commands that are intended to be public.
2022-02-18 17:13:36 -08:00
Guyang Song
57a94aae12
[runtime env][bugfix] Fix runtime env retry (#22495)
- Bug: `error_message` is not cleared when the retry succeeds. This bug led to runtime env creation failing.
- Add test case for this.
2022-02-18 17:09:06 -08:00
Archit Kulkarni
8c12e30f11
[Doc] Add actor max restarts default value to fault tolerance doc (#22481) 2022-02-18 17:48:22 -06:00
Jiajun Yao
baa14d695a
Round robin during spread scheduling (#21303)
- Separate spread scheduling and the default hybrid scheduling (i.e. SpreadScheduling != HybridScheduling(threshold=0)): they are already separated in the API layer and they have different end goals, so it makes sense to separate their implementations and evolve them independently (see the usage sketch after this list).
- Simple round robin for spread scheduling: this is just a starting implementation and can be optimized later.
- Prefer not to spill back tasks that are waiting for args since the pull is already in progress.
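A small usage sketch of spread scheduling from the API side, assuming the public `scheduling_strategy="SPREAD"` option (the task body is illustrative):
```python
import ray

ray.init()

# "SPREAD" asks the scheduler to round-robin tasks across nodes
# instead of packing them with the default hybrid policy.
@ray.remote(scheduling_strategy="SPREAD")
def probe():
    return ray.get_runtime_context().node_id

node_ids = ray.get([probe.remote() for _ in range(10)])
print(set(node_ids))  # on a multi-node cluster, tasks land on different nodes
```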
2022-02-18 15:05:35 -08:00
mwtian
5a4c6d2e88
[Core] release GIL when running parallel_memcopy() / memcpy() during serializations (#22492)
While investigating #22161, it was observed that the GIL is held for an extended amount of time (up to 1000s) with stack trace [1]. It is possible that either there are many iterations within `Pickle5Writer.write_to()` calling `ray::parallel_memcopy()`, or a few `ray::parallel_memcopy()` calls take a long time (less likely). Either way, `ray::parallel_memcopy()` / `std::memcpy()` should not hold the GIL.
2022-02-18 14:11:12 -08:00
Yi Cheng
95256181dd
[1][resource reporting] Remove Redis-based resource broadcasting. (#22463)
This flag has been turned on by default for almost 4 months. Delete the old code so that we don't need to take care of the legacy code path when refactoring.
2022-02-18 14:09:37 -08:00
Stephanie Wang
03a5589591
[core] Enable lineage reconstruction in CI (#21519)
Enables lineage reconstruction in all CI and release tests.
2022-02-18 11:04:20 -08:00
Max Pumperla
9482f03134
[docs] RLlib concepts consolidation, user guide, RL conf prep (#22496) 2022-02-18 09:35:20 -08:00
Jun Gong
04effca29c
[RLlib; docs] Update README.rst to fix the broken RLlib logo (#22489) 2022-02-18 18:33:07 +01:00
Archit Kulkarni
df581c584a
[Job] [Dashboard] Add Job Submission data to cluster snapshot (#22225)
The existing Job info in the cluster snapshot uses the old definition of Job, which is a single Ray driver (a single `ray.init()` connection).  

In the new Job Submission protocol, a Job just specifies an entrypoint which can be any shell command.  As such a Job can have zero or multiple Ray drivers.  This means we should add a new snapshot entry corresponding to new jobs.  We'll leave the old snapshot in place for legacy jobs.

- Also fixes `get_all_jobs` by using the appropriate KV namespace, and stripping the job key KV prefix from the job ID.  It wasn't working before.

- This PR also unifies the datatype used by the GET jobs/ endpoint to be the same as the one used by the new jobs cluster snapshot.  For backwards compatibility, the `status` and `message` fields are preserved.
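For context, a rough sketch of submitting such a Job, assuming the `JobSubmissionClient` SDK and a dashboard on the default address (the entrypoint and working dir are illustrative):
```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")

# The entrypoint is an arbitrary shell command; it may start zero or more Ray drivers.
job_id = client.submit_job(
    entrypoint="python my_script.py",
    runtime_env={"working_dir": "./"},
)
print(client.get_job_status(job_id))
```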
2022-02-18 09:54:37 -06:00
Archit Kulkarni
1f160114a0
[serve] [CI] change serve:test_runtime_env from medium to large (#22474)
This test was timing out occasionally.
2022-02-18 08:50:47 -06:00
ZhuSenlin
3341fae573
[Core] remove unused method GcsResourceManager::UpdateResourceCapacity (#22462)
In the implementation of `GcsResourceManager::UpdateResourceCapacity`, `cluster_scheduling_resources_` is modified, but this method is only used in a C++ unit test, which easily causes confusion when reading the code. Since this method can be completely replaced by `GcsResourceManager::OnNodeAdd`, just remove it.

Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
2022-02-18 13:35:47 +08:00
Archit Kulkarni
df85d31095
[Serve] Make handle serializable (#22473) 2022-02-17 17:29:44 -08:00
ZhuSenlin
15cccd0286
[Core] Fix null pointer crash when GcsResourceManager::SetAvailableResources (#22459)
* Fix a null pointer crash in GcsResourceManager::SetAvailableResources

* Add a warning log when the node does not exist

* Add a unit test

Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
2022-02-17 17:18:30 -08:00
Simon Mo
3e7511e84f
[CI] Disable privileged test (#22484) 2022-02-17 15:34:02 -08:00
Chen Shen
17f589a05d
[Dataset][nightly-test] use 2 instead of 15 windows for 1.5TB data ingestion #22479 2022-02-17 15:20:39 -08:00
Ian Rodney
c9a4b17f99
[YAMLs] Fix comments about autoscaler round-robining (#22002) 2022-02-17 13:59:05 -08:00
Sven Mika
c58cd90619
[RLlib] Enable Bandits to work in batches mode(s) (vector envs + multiple workers + train_batch_sizes > 1). (#22465) 2022-02-17 22:32:26 +01:00
SangBin Cho
4ecb2afc2c
[State] Add pid to the actor table data. (#22434)
Users have requested a way to get the pid of actors using ray.state.actors. This PR addresses that.
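A hedged sketch of reading the new field, assuming `ray.state.actors()` returns per-actor dicts and the pid lands under a key such as `"Pid"` (the exact key name is an assumption here):
```python
import ray

ray.init()

@ray.remote
class Counter:
    def ping(self):
        return "pong"

c = Counter.remote()
ray.get(c.ping.remote())

for actor_id, info in ray.state.actors().items():
    # "Pid" is assumed to be the key added by this change.
    print(actor_id, info.get("Pid"))
```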
2022-02-17 06:22:29 -08:00
Avnish Narayan
740def0a13
[RLlib] Put env-checker on critical path. (#22191) 2022-02-17 14:06:14 +01:00
Sven Mika
e03606f0b3
[RLlib] Bandit documentation enhancements. (#22427) 2022-02-17 13:25:50 +01:00
Chen Shen
ab53848dfc
[refactor cluster-task-manage 4/n] refactor cluster_task_manager into distributed and local parts (#21660)
This is a work-in-progress PR that splits cluster_task_manager into local and distributed parts.

For the distributed scheduler (cluster_task_manager):
/// Schedules a task onto one node of the cluster. The logic is as follows:
/// 1. Queue tasks for scheduling.
/// 2. Pick a node on the cluster which has the available resources to run a
/// task.
/// * Step 2 should occur any time the state of the cluster is
/// changed, or a new task is queued.
/// 3. For tasks that are infeasible, put them into the infeasible queue and report
/// them to the GCS, where the autoscaler will be notified and start a new node
/// to accommodate the requirement.

For the local task manager:

/// It manages the lifetime of a task on the local node. It receives requests from
/// the cluster_task_manager (the distributed scheduler) and performs the following
/// steps:
/// 1. Pull task dependencies and add the task to the to_dispatch queue.
/// 2. Once a task's dependencies are all pulled locally, the task becomes ready
/// to dispatch.
/// 3. For all tasks that are dispatch-ready, we schedule them by acquiring
/// local resources (including pinning the objects in memory and deducting
/// CPU/GPU and other resources from the local resource manager), and starting
/// a worker.
/// 4. If a task fails to acquire resources in step 3, we will try to
/// spill it to a different remote node.
/// 5. When a worker finishes executing its task(s), the requester will return
/// it and we should release the resources in our view of the node's state.
/// 6. If a task has been waiting for its arguments for too long, it will also be
/// spilled back to a different node.
///
2022-02-17 01:14:33 -08:00