Commit graph

360 commits

Author SHA1 Message Date
Jiajun Yao
d7dcb1f938
Replace boost::filesystem with std::filesystem (#27522)
This redoes #27319.

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-08-04 21:33:51 -07:00
SangBin Cho
afd6597056
Revert "Replace boost::filesystem with std::filesystem (#27338)" (#27483)
This reverts commit c50faa126c.
2022-08-04 02:18:59 -07:00
Jiajun Yao
c50faa126c
Replace boost::filesystem with std::filesystem (#27338)
Redo #27319

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-08-01 17:12:23 -07:00
Jiajun Yao
36d5e5f99d
Revert "Replace boost::filesystem with std::filesystem (#27319)" (#27337)
This reverts commit 8e5c51d7d7.
2022-08-01 13:46:45 -07:00
Jiajun Yao
8e5c51d7d7
Replace boost::filesystem with std::filesystem (#27319)
std::filesystem ships with C++17, so there is no need to depend on Boost for this.

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
2022-08-01 11:44:39 -07:00
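A minimal standalone illustration of the substitution this migration makes (not code from the PR itself; the session-directory path is made up):

```cpp
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;  // previously: namespace fs = boost::filesystem;

int main() {
  // Path composition and queries carry over almost unchanged from boost::filesystem.
  fs::path session_dir = fs::temp_directory_path() / "ray" / "session_latest";
  fs::create_directories(session_dir);
  std::cout << "exists: " << fs::exists(session_dir) << ", absolute: "
            << fs::absolute(session_dir) << '\n';
  fs::remove_all(session_dir);
  return 0;
}
```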
clarng
57adde3f7d
memory monitor (#27017)
Signed-off-by: Clarence Ng <clarence.wyng@gmail.com>

## Why are these changes needed?
This PR adds a memory monitor in C++ that runs periodically to check whether the node's memory usage is above a certain threshold. The caller may provide a callback for the monitor to execute at each interval to determine whether an action should be taken.

This PR is a no-op since the monitor is disabled by default. Another PR based on this will implement the monitor so that it takes action when memory is running low.
2022-08-01 10:40:46 -07:00
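A standalone sketch of the periodic threshold-plus-callback pattern the commit above describes; the class name, fields, and the usage-sampling stub are illustrative, not the PR's actual interface.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical monitor: periodically samples node memory usage and invokes a
// caller-supplied callback with a flag saying whether usage exceeds the threshold.
class MemoryMonitor {
 public:
  MemoryMonitor(double usage_threshold,
                std::chrono::milliseconds interval,
                std::function<void(bool above_threshold)> callback)
      : threshold_(usage_threshold),
        interval_(interval),
        callback_(std::move(callback)),
        worker_([this] {
          while (running_) {
            callback_(SampleUsageFraction() > threshold_);
            std::this_thread::sleep_for(interval_);
          }
        }) {}

  ~MemoryMonitor() {
    running_ = false;
    worker_.join();
  }

 private:
  // Placeholder: a real implementation would read /proc/meminfo or cgroup stats.
  static double SampleUsageFraction() { return 0.0; }

  double threshold_;
  std::chrono::milliseconds interval_;
  std::function<void(bool)> callback_;
  std::atomic<bool> running_{true};
  std::thread worker_;
};
```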
Yi Cheng
33997da299
[core] Introduce a flag which allows a longer timeout for raylet when GCS restarts. (#26919)
## Why are these changes needed?
When GCS restarts, the raylet sometimes needs a while to reconnect to it; for example, in a k8s environment, it takes a while for the service to route to the new GCS. This PR tries to fix this by allowing a longer timeout for the first ping when GCS restarts.

Once GCS gets the first ping, it uses the regular timeout again.
2022-07-25 16:57:19 -07:00
Guyang Song
1949f35901
[runtime env] plugin refactor[4/n]: remove runtime env protobuf (#26522) 2022-07-15 13:56:12 +08:00
Stephanie Wang
6ef26cd8ff
[core] Cancel pending dependency resolution before failing a task (#26267)
Actor tasks are sometimes failed while their dependencies are still being resolved. This can cause hanging or crashes when we resolve the dependencies for a task that has already been canceled. It can lead to a crash from the ref counter when, for the same actor, actor task 1 depends on actor task 2. The sequence is:

1. Actor tasks 1 and 2 queued, 1 depends on 2.
2. Fail actor task 1. We clear its refs, including its dependency on 2.
3. Fail actor task 2. We store an error as its return value. Since task 1 depends on it, we inline the dependency and try to clear task 1's refs again, causing a ref counting error because we already cleared them in step 2.

This PR fixes the issue by canceling dependency resolution for tasks before failing them. This involves some refactoring of the LocalDependencyResolver. Most of the changes are for testing (split out the unit tests for LocalDependencyResolver into their own suite).
Related issue number

Closes #18908.
2022-07-13 14:39:11 -07:00
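A self-contained sketch of the ordering fix described above; `TaskID`, the resolver methods, and `FailPendingTask` are simplified stand-ins for the real CoreWorker/LocalDependencyResolver code.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

using TaskID = std::string;  // stand-in for Ray's real TaskID type

class LocalDependencyResolver {
 public:
  void ResolveDependencies(const TaskID &task_id, std::function<void()> on_resolved) {
    pending_[task_id] = std::move(on_resolved);
  }
  // Cancel resolution so a later inlined dependency cannot re-trigger the
  // callback and clear the task's references a second time.
  void CancelDependencyResolution(const TaskID &task_id) { pending_.erase(task_id); }

 private:
  std::unordered_map<TaskID, std::function<void()>> pending_;
};

void FailPendingTask(LocalDependencyResolver &resolver, const TaskID &task_id) {
  // Order matters: cancel dependency resolution first, then clear references
  // and store the error, so the refs cannot be cleared twice.
  resolver.CancelDependencyResolution(task_id);
  // ... clear the task's references and store an error as its return value ...
}
```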
Jiajun Yao
53d878804a
[Core] Set c++ terminate handler to print stack trace (#26444) 2022-07-12 13:54:20 -07:00
Yi Cheng
39cb1e5f97
[core][1/2] Improve liveness check in GCS (#26405)
CheckAlive in GCS currently only checks GCS's own liveness, but we also need to check the liveness of raylets.

In KubeRay, we could check liveness directly by monitoring the raylet process, but that's not good enough, because the liveness of the raylet process is not the same as the raylet's liveness from the cluster's point of view.

For example, during a network partition the raylet cannot connect to GCS, and GCS marks the raylet as dead. Although the raylet process is still alive, it can't be treated as alive because GCS has told all the nodes that it's dead.

So KubeRay also needs to talk to GCS to check whether a raylet is alive.

This PR extends the CheckAlive API to include raylet addresses so that we can query GCS to get the cluster status directly.
2022-07-09 16:32:31 +00:00
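A rough shape of the API extension described above, using plain structs instead of the actual protobuf/gRPC definitions; field and function names are illustrative.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Illustrative stand-ins: the request now carries raylet addresses and the
// reply reports liveness for each of them in the same order.
struct CheckAliveRequest {
  std::vector<std::string> raylet_addresses;  // e.g. "10.0.0.3:6379"
};

struct CheckAliveReply {
  bool gcs_alive = true;
  std::vector<bool> raylet_alive;
};

// On the GCS side, a raylet counts as alive only if GCS has not already marked
// its node dead, regardless of whether the raylet process is still running.
CheckAliveReply HandleCheckAlive(const CheckAliveRequest &request,
                                 const std::unordered_set<std::string> &dead_node_addresses) {
  CheckAliveReply reply;
  for (const auto &addr : request.raylet_addresses) {
    reply.raylet_alive.push_back(dead_node_addresses.count(addr) == 0);
  }
  return reply;
}
```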
Yi Cheng
096c0cd668
[core][gcs] Add storage namespace to redis storage in GCS. (#25994)
To allow one storage backend to be shared by multiple Ray clusters, a special prefix is added to isolate the data between clusters: "<EXTERNAL_STORAGE_NAMESPACE>@"

The namespace is given by an OS environment variable, `RAY_external_storage_namespace`, when starting the head: `RAY_external_storage_namespace=1234 ray start --head`

This flag is important in an HA GCS environment. For example, when the Ray Serve operator tries to bring up a new one, it's hard to just start a new DB, but it's relatively easy to generate a new cluster id.
Another example: the user might only be able to maintain one HA Redis DB, and the namespace enables the user to start multiple Ray clusters that share the same DB.

This config should be moved to storage config in the future once we build that.
2022-07-03 11:16:37 -07:00
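A tiny sketch of the key-isolation scheme described above: every key written to the shared storage gets the namespace prefix. The helper name is made up.

```cpp
#include <iostream>
#include <string>

// Prefix every storage key with "<EXTERNAL_STORAGE_NAMESPACE>@" so that
// multiple Ray clusters can share one Redis instance without colliding.
std::string ToStorageKey(const std::string &external_storage_namespace,
                         const std::string &key) {
  return external_storage_namespace + "@" + key;
}

int main() {
  // e.g. the head was started with: RAY_external_storage_namespace=1234 ray start --head
  const std::string ns = "1234";
  std::cout << ToStorageKey(ns, "ActorTable:abc") << '\n';  // prints "1234@ActorTable:abc"
  return 0;
}
```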
SangBin Cho
8837a4593f
[State Observability] Truncate data when there are too many entries to return (#26124)
## Why are these changes needed?

This PR adds data truncation when there are more than N entries. The policy is as follows:

By default, we return at most 100 entries. Users can adjust this value, but we won't allow it to be increased beyond 10K.

By default, all internal RPCs truncate data if it's > 10K.

For distributed sources, we query each source with a 10K limit and apply the limit again at the end.

## Related issue number

Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
2022-06-28 18:33:57 -07:00
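The limit handling amounts to something like the following standalone sketch; the constants mirror the defaults described above, and the function name is illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Defaults described in the PR: 100 entries returned unless the caller asks
// for more, hard-capped at 10K; internal RPCs also truncate at 10K per source.
constexpr std::size_t kDefaultLimit = 100;
constexpr std::size_t kMaxLimit = 10000;

template <typename T>
std::vector<T> Truncate(std::vector<T> entries, std::size_t requested_limit) {
  const std::size_t limit =
      std::min(requested_limit > 0 ? requested_limit : kDefaultLimit, kMaxLimit);
  if (entries.size() > limit) {
    entries.resize(limit);  // truncated; the caller is told the original total separately
  }
  return entries;
}
```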
Yi Cheng
a1f02f68b7
[core][gcs] Make GCS client working with timeout_ms. (#25975)
In [PR](https://github.com/ray-project/ray/pull/24764) we moved reconnection to GcsRPCClient. In case of a GCS failure, we queue the requests and resend them once GCS is back.
This actually breaks requests with a timeout, because now the request is queued and never gets a response. This PR fixes that.

Each request is stored along with the time at which it's supposed to time out. When GCS is down, we check the queued requests, and if one has timed out, we reply immediately with a Timeout error message.
2022-06-22 18:02:29 -07:00
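A simplified sketch of the deadline bookkeeping this adds: each queued request remembers when it should time out, and a periodic check replies with a Timeout error for anything past its deadline. Class and method names are hypothetical.

```cpp
#include <chrono>
#include <deque>
#include <functional>

using Clock = std::chrono::steady_clock;

struct PendingRequest {
  Clock::time_point deadline;                 // when this request should time out
  std::function<void(bool timed_out)> reply;  // stand-in for the RPC callback
};

class PendingRequestQueue {
 public:
  void Queue(std::chrono::milliseconds timeout, std::function<void(bool)> reply) {
    pending_.push_back({Clock::now() + timeout, std::move(reply)});
  }

  // Called periodically while GCS is down: anything past its deadline gets an
  // immediate Timeout reply instead of waiting forever for the reconnection.
  void FailTimedOutRequests() {
    const auto now = Clock::now();
    for (auto it = pending_.begin(); it != pending_.end();) {
      if (it->deadline <= now) {
        it->reply(/*timed_out=*/true);
        it = pending_.erase(it);
      } else {
        ++it;
      }
    }
  }

 private:
  std::deque<PendingRequest> pending_;
};
```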
Chen Shen
afb092a03a
[Core] Out of Disk prevention (#25370)
Ray (on K8s) fails silently when running out of disk space.
Today, when running a script that has a large amount of object spilling, if the disk runs out of space then Kubernetes will silently terminate the node. Autoscaling will kick in and replace the dead node. There is no indication that there was a failure due to disk space.
Instead, we should fail tasks with a good error message when the disk is full.

We monitor the disk usage: when node disk usage grows over the predefined capacity (e.g. 90%), we fail new tasks/actors/object puts that allocate new objects.
2022-06-22 12:25:32 -07:00
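The usage check itself can be expressed with std::filesystem::space; a rough sketch with a hard-coded 90% threshold (in the PR the threshold and the action taken are configurable):

```cpp
#include <filesystem>
#include <iostream>

// Returns true if the filesystem containing `path` is more than `threshold` full.
bool DiskUsageAboveThreshold(const std::filesystem::path &path, double threshold) {
  const std::filesystem::space_info info = std::filesystem::space(path);
  const double used_fraction =
      1.0 - static_cast<double>(info.available) / static_cast<double>(info.capacity);
  return used_fraction > threshold;
}

int main() {
  if (DiskUsageAboveThreshold("/tmp", 0.9)) {
    // In the PR, new tasks/actors/object puts that allocate objects are failed
    // with a clear out-of-disk error instead of letting the node die silently.
    std::cout << "disk usage above 90%, rejecting new object allocations\n";
  }
  return 0;
}
```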
Larry
679f66eeee
[Core/PG/Schedule 1/2]Optimize the scheduling performance of actors/tasks with PG specified only for gcs schedule (#24677)
## Why are these changes needed?

When scheduling actors on a placement group, instead of iterating over all nodes in the cluster resources, this optimization directly queries the corresponding nodes by looking at the placement group location index.
This reduces the complexity of the algorithm from O(N) to O(1), where N is the number of nodes. The more nodes there are in large-scale clusters, the bigger the benefit.

**This PR only optimizes scheduling by GCS; I will submit a PR for raylet scheduling later.**

At Ant Group, we have applied this optimization in the GCS scheduling mode and obtained the following performance test results:
1. The average time to select nodes is reduced from 330us to 30us, roughly an 11x improvement.
2. The total time to create and execute 12,000 actors drops from 271s to 225s on average, a 17% reduction.

More detailed solution information is in the issue.

## Related issue number

[Core/PG/Schedule]Optimize the scheduling performance of actors/tasks with PG specified #23881
2022-06-13 15:31:00 -07:00
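The heart of the O(N)-to-O(1) change is an index from placement group to the nodes hosting its bundles, so the scheduler looks up candidate nodes directly instead of scanning the whole cluster. The types below are illustrative stand-ins.

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using PlacementGroupID = std::string;  // stand-ins for Ray's real ID types
using NodeID = std::string;

// Hypothetical location index, maintained as bundles are placed and removed.
class PgLocationIndex {
 public:
  void AddBundleLocation(const PlacementGroupID &pg, const NodeID &node) {
    index_[pg].insert(node);
  }

  // O(1) average lookup of candidate nodes for a PG-constrained actor/task,
  // instead of iterating over every node in the cluster (O(N)).
  std::vector<NodeID> GetCandidateNodes(const PlacementGroupID &pg) const {
    auto it = index_.find(pg);
    if (it == index_.end()) return {};
    return std::vector<NodeID>(it->second.begin(), it->second.end());
  }

 private:
  std::unordered_map<PlacementGroupID, std::unordered_set<NodeID>> index_;
};
```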
Yi Cheng
6b38b071e9
Revert "Revert "[core] Remove gcs addr updater in core worker. (#24747)" (#25375)" (#25391)
This reverts commit 49efcab4fe.
2022-06-03 12:26:27 -07:00
SangBin Cho
49efcab4fe
Revert "[core] Remove gcs addr updater in core worker. (#24747)" (#25375)
Turns out https://github.com/ray-project/ray/pull/25342 wasn't the root cause of the ray shutdown flakiness. I realized there's another PR that could affect this test suite. Let's try reverting it and see if things get better.
2022-06-01 15:12:33 -07:00
Yi Cheng
0bc04f263e
[core] Remove gcs addr updater in core worker. (#24747)
Since we are using domain name resolution to get the new address of GCS, the GCS address updater is no longer necessary. This PR removes it.
2022-05-26 23:38:19 -07:00
mwtian
b17cbd825f
[Core] fix prometheus export error and move GCS client heartbeat to a dedicated thread. (#24867)
I have seen errors where the prometheus view data dictionary is changed when iterating over it. So make a copy (which is atomic) before iterating.

Also, use a separate thread for GCS client heartbeat RPC. This avoids the issue of missing heartbeat when raylet is too busy.
2022-05-18 15:04:04 -07:00
Yi Cheng
684e395c5d
Revert "Revert "[core] Move reconnection to RPC layer for GCS client."" (#24764)
* Revert "Revert "[core] Move reconnection to RPC layer for GCS client. (#24330)" (#24762)"

This reverts commit 30f370bf1f.
2022-05-14 20:35:40 -07:00
Chen Shen
30f370bf1f
Revert "[core] Move reconnection to RPC layer for GCS client. (#24330)" (#24762)
This reverts commit c427bc54e7.
2022-05-13 00:07:21 -07:00
Yi Cheng
c427bc54e7
[core] Move reconnection to RPC layer for GCS client. (#24330)
This PR supports reconnection of the GCS client in the gRPC channel layer. Previously this was implemented in the application layer:

- Health check is in the application layer by starting a new channel.
- Monitor the GCS address change and do resubscribe.
- Always retry the failed request and do reconnection in case of a failure.

However, there are several issues with this approach:

- We need service discovery for GCS address changes.
- The monitor is too heavy since it always creates a new channel.
- Reconnection is a blocking call that prevents other code from running.

This new approach moves the reconnection directly to the gRPC layer to fix these issues.

- DNS name resolution is done by gRPC so we don't need to write this.
- Health check is done by checking the channel state.
- Failed calls are queued and retried once GCS is up, so reconnection is no longer a blocking call.
2022-05-11 16:27:22 -07:00
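A condensed sketch of the queue-and-retry idea; the channel state is modeled as a plain enum here rather than gRPC's real connectivity API, and the class name is made up.

```cpp
#include <functional>
#include <queue>

// Stand-in for gRPC's connectivity state; the real code watches the channel
// state instead of probing GCS health over a separate channel.
enum class ChannelState { kReady, kConnecting, kTransientFailure };

class GcsRpcClientSketch {
 public:
  void Call(std::function<void()> send_request) {
    if (state_ == ChannelState::kReady) {
      send_request();
    } else {
      // GCS is unreachable: queue the call instead of blocking the caller.
      pending_.push(std::move(send_request));
    }
  }

  // Invoked when the channel transitions back to READY (GCS is up again).
  void OnChannelReady() {
    state_ = ChannelState::kReady;
    while (!pending_.empty()) {
      pending_.front()();  // retry the queued calls
      pending_.pop();
    }
  }

  void OnChannelDown() { state_ = ChannelState::kTransientFailure; }

 private:
  ChannelState state_ = ChannelState::kReady;
  std::queue<std::function<void()>> pending_;
};
```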
Chen Shen
dd527da58b
[GCS][Storage Unification 4/n] Monitor for Gcs storage interface. (#23577)
Add basic monitoring for GCS Storage Interface, including the qps and latency for each operation.
2022-05-09 16:50:39 -07:00
Chong-Li
f3767131cb
[Enable gcs actor scheduler 1/n] Raylet and GCS schedulers share cluster_task_manager (#23829) 2022-05-02 21:45:23 +08:00
mwtian
d5d2ef4249
[Core] Add a utility to check GCS / Ray cluster health (#23382)
* Provide a utility to ping a Ray cluster and verify it has the same Ray version. This is useful to check if a Ray cluster is available at a given address, without connecting to the cluster with the more heavyweight ray.init(). This utility is integrated with ray memory to provide a better error message when the Ray cluster is unavailable. There seems to be user demand for exposing this as an API as well.
* Improve the error message when the address provided to Ray does not contain port.
2022-04-18 09:58:45 -07:00
Kai Fricke
65d9a410f7
[ci] Clean up ci/ directory (refactor ci/travis) (#23866)
Clean up the ci/ directory. This means getting rid of the travis/ path completely and moving the files into sensible subdirectories.

Details:

- Moves everything under ci/travis into subdirectories, e.g. ci/build, ci/lint, etc.
- Minor adjustments to some scripts (variable renames)
- Removes the outdated (unused) asan tests
2022-04-13 18:11:30 +01:00
Jiajun Yao
95714cc281
Node affinity scheduling strategy (#23381)
Instead of relying on the node-ip custom resource for static task-to-node placement, this PR introduces an explicit NodeAffinitySchedulingStrategy with the following benefits:

1. Specify node using id instead of ip since ip may not be unique for each node.
2. Support soft constraint so the task can be tolerant to node failures.

After this PR, the node-ip custom resource can be deprecated.
2022-04-12 21:31:26 -07:00
Yi Cheng
87dc57df26
[3][cleanup][gcs] Remove redis based pubsub. (#23520)
This PR removes redis based pubsub.
2022-03-31 00:13:55 -07:00
ZhuSenlin
e79a63db64
[GCS] [3 / n] Refactor gcs_resource_scheduler to cluster_resource_scheduler #23371
As we (@scv119 @iycheng @raulchen @Chong-Li @WangTaoTheTonic ) discussed offline, the GcsResourceScheduler on the GCS side should be unified to ClusterResourceScheduler.

There is already a big PR (#23268) to do this, but in order to make review easy, I will split it into two or more small PRs.
This is [3/n]:

- Move the implementation of all policies from gcs_resource_scheduler to bundle_scheduling_policy
- Delete gcs_resource_scheduler
- Refactor gcs_resource_scheduler_test to cluster_resource_scheduler_2_test

BTW: the interface inside ISchedulingPolicy should be refactored in another PR; see the discussion at #23323 (comment)

To be clear:

- Scorer-related code is moved out of gcs_resource_scheduler into scorer.h/.cc with no logic changes.
- Policy-related code is moved out of gcs_resource_scheduler into bundle_scheduling_policy.h/.cc, and a small part of the logic in "GcsResourceScheduler::Schedule" is distributed into each policy.
- Some code inside gcs_placement_group_scheduler.h/.cc is changed to adapt to the new data structures (SchedulingResult and SchedulingContext).
2022-03-30 17:39:46 -07:00
jon-chuang
54ddcedd1a
[Core] Chore: move test to right dir #23096 2022-03-30 09:29:38 -07:00
Yi Cheng
781c46ae44
[scheduling][5] Refactor resource syncer. (#23270)
## Why are these changes needed?

This PR refactors the resource syncer to decouple it from GCS and raylet. GCS and raylet will use the same module to sync data. The integration will happen in the next PR.

There are several new introduced components:

* RaySyncer: the place where remote and local information sits. It's a coordinator layer.
* NodeState: keeps track of the local status, similar to NodeSyncConnection.
* NodeSyncConnection: keeps track of what has been sent and received, and makes sure not to send information the remote node already knows.

The core protocol is that each node will send {what it has} - {what the target has} to the target.
For example, think about nodes A <-> B: A will send everything A has, excluding what B already has, to B.

Whenever there is new information (from NodeState or NodeSyncConnection), it'll be passed to RaySyncer to broadcast.

NodeSyncConnection is for the communication layer. It has two implementations Client and Server:

* Server => Client: the client sends a long-polling request and the server responds every 100ms if there is data to be sent.
* Client => Server: the client checks every 100ms whether there is new data to send. If there is, it sends the data with an RPC call.

Here is one example:

```mermaid
flowchart LR;
    A-->B;
    B-->C;
    B-->D;
```

It means A initializes the connection to B, and B initializes the connections to C and D.

Now C generates a message M:

1. [C] RaySyncer checks whether there is a new message generated in C and gets M
2. [C] RaySyncer pushes M to the local NodeSyncConnection (the one connected to B)
3. [C] ServerSyncConnection waits until B sends a long-polling request and then sends the data to B
4. [B] B receives the message from C and pushes it to its local sync connections (C, A, D)
5. [B] The ClientSyncConnection of C will not push it to its local queue since the message was received on this channel.
6. [B] The ClientSyncConnection of D sends this message to D
7. [B] The ServerSyncConnection of A is used to send this message to A (long-polling here)
8. [B] B updates NodeState (the local component) with this message M
9. [D] D's pipeline is similar to steps 5 (with ServerSyncConnection) and 8
10. [A] A's pipeline is similar to steps 5 and 8
2022-03-29 23:52:39 -07:00
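The "send {what it has} - {what the target has}" rule reduces to a version-based difference; a small standalone sketch with per-origin version numbers (illustrative, not the PR's actual message format):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using NodeID = std::string;  // stand-in for Ray's node ID type

struct SyncMessage {
  NodeID origin;     // node whose state this message describes
  int64_t version;   // monotonically increasing per origin
  std::string blob;  // serialized resource/state payload
};

// Send only what the target has not seen yet: every message whose version is
// newer than the target's recorded version for that origin node.
std::vector<SyncMessage> ComputeDelta(
    const std::unordered_map<NodeID, SyncMessage> &local_view,
    const std::unordered_map<NodeID, int64_t> &target_versions) {
  std::vector<SyncMessage> delta;
  for (const auto &[origin, msg] : local_view) {
    auto it = target_versions.find(origin);
    if (it == target_versions.end() || it->second < msg.version) {
      delta.push_back(msg);
    }
  }
  return delta;
}
```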
Hao Chen
b7d32df8b0
Refactor scheduler data structures (#22854)
This is the first PR to refactor scheduler data structures (See #22850).

Major changes:
- Hid the implementation details in the `ResourceRequest` and `TaskResourceInstances` classes, which expose public methods such as algebra operators and comparison operators.
- Hid the differences between "predefined" and "custom" resources inside these 2 classes. Call sites can simply use the resource ID to access the resource, no matter whether it is predefined or custom.
- The predefined_resources vector now always has the full length. So no more "resize"s are needed. 
- Removed the `ResourceCapacity` class. Now "total" and "available" resources are stored in separate fields in "NodeResources". 
- Moved helper functions for FixedPoint vectors from "cluster_resource_data.h" to "fixed_point.h"
- "ResourceID" now has static methods to get the resource ids of predefined resources, e.g. "ResourceID::CPU()". 
- Encapsulated unit-instance resource logic to "ResourceID"

Other planned changes that are not included in this PR:
- Rename ResourceRequest to ResourceSet, and move it to its own file.
- Remove the predefined vectors and always use maps.

Co-authored-by: Chong-Li <lc300133@antgroup.com>
2022-03-29 19:44:59 +08:00
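A toy version of the encapsulation described above: requests are keyed by ResourceID and expose algebra and comparison operators, so call sites never care whether a resource is predefined or custom. This is a simplified sketch (double instead of FixedPoint, a plain map instead of the real storage layout).

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Simplified ResourceID: predefined resources get fixed ids via static helpers.
struct ResourceID {
  int64_t id;
  static ResourceID CPU() { return {0}; }
  static ResourceID GPU() { return {1}; }
  bool operator==(const ResourceID &other) const { return id == other.id; }
};

struct ResourceIDHash {
  std::size_t operator()(const ResourceID &r) const { return std::hash<int64_t>()(r.id); }
};

// Simplified ResourceRequest exposing algebra/comparison instead of raw vectors.
class ResourceRequest {
 public:
  void Set(ResourceID id, double amount) { amounts_[id] = amount; }
  double Get(ResourceID id) const {
    auto it = amounts_.find(id);
    return it == amounts_.end() ? 0.0 : it->second;
  }
  ResourceRequest &operator+=(const ResourceRequest &other) {
    for (const auto &[id, amount] : other.amounts_) amounts_[id] += amount;
    return *this;
  }
  // "Fits in": every requested amount is available in `other`.
  bool operator<=(const ResourceRequest &other) const {
    for (const auto &[id, amount] : amounts_) {
      if (amount > other.Get(id)) return false;
    }
    return true;
  }

 private:
  std::unordered_map<ResourceID, double, ResourceIDHash> amounts_;
};
```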
Yi Cheng
7de751dbab
[1][core][cleanup] remove enable gcs bootstrap in cpp. (#23518)
This PR removes the enable_gcs_bootstrap flag in cpp.
2022-03-28 21:37:24 -07:00
ZhuSenlin
125ef0e5a6
[GCS] integrate cluster_resource_manager into gcs_resource_manager and gcs_resource_scheduler (#23105)
* refactor gcs_resource_manager

* fix lint error

* fix lint error

* fix compile error

* fix test

* fix test

* fix test

* add unit test

* refactor UpdateNodeNormalTaskResources

* fix comment

Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
2022-03-16 16:27:14 -07:00
Chen Shen
5a2ebc281c
[Scheduler] separate scheduler code to its own build target (#23124)
* wip

* comments

* fix build

* fix-test

* fix format
2022-03-14 23:23:58 -07:00
Tao Wang
10c03cb126
Migrating to flat hash map [GCS&util&common] (#22932)
Next move of #19220. This PR replaces unordered_map with flat_hash_map in most GCS code and some util & common modules.
The placement group part, which exposes user interfaces in Java/Python, is excluded as it's a little bit complicated.

The follow-up PRs would be migrating in core worker, placement group and others.
2022-03-11 18:35:06 +09:00
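The mechanical part of such a migration is usually a drop-in type swap; a sketch assuming Abseil is available as a dependency:

```cpp
#include <string>
// #include <unordered_map>                 // before
#include "absl/container/flat_hash_map.h"   // after

// Before: std::unordered_map<std::string, int> counters;
// After: the interface is largely the same, but flat_hash_map keeps its
// elements in a single contiguous table, which is faster and more cache-friendly.
absl::flat_hash_map<std::string, int> counters;

void Bump(const std::string &key) { ++counters[key]; }
```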
Chen Shen
bc3f7a7684
[scheduling policy 3/n][rfc] Refactor SchedulingPolicy into interface and implementations (#22907)
* scheduling policy

* update

Co-authored-by: Gagandeep Singh <gdp.1807@gmail.com>
2022-03-08 18:47:56 -08:00
Chen Shen
3e3db8e9cd
[scheduler] hide StringIDMap under BaseSchedulingID (#22722)
* add

* address comments
2022-03-01 22:50:53 -08:00
Chen Shen
dfcb0f5de5
[clean up ClusterResourceScheduler 1/n] move IsSchedulable logic into ClusterResourceManager #22711 2022-02-28 20:37:56 -08:00
Lingxuan Zuo
94caac8722
Remove exporting symbols (#22623)
To hide the symbols of third-party libraries, this pull request reuses the internal namespace, which can be imported by external native projects without side effects.

Besides, we suggest that contributors use third-party libraries inside Ray scopes/namespaces, and that only ray::internal be exported.

More details in https://github.com/ray-project/ray/pull/22526

Mobius has applied this change in https://github.com/ray-project/mobius/pull/28.

Co-authored-by: 林濯 <lingxuan.zlx@antgroup.com>
2022-02-28 09:41:10 +08:00
Stephanie Wang
634ca9afdb
[core] Cleanup handling for nondeterministic object size during transfer (#22639)
Currently object transfers assume that the object size is fixed. This is a bad assumption during failures, especially with lineage reconstruction enabled and tasks with nondeterministic outputs.

This PR cleans up the handling and hopefully guards against two cases where the object size may change during a transfer:
1. The object manager's size information does not match the object in the local plasma store (due to async notifications). --> the object manager overwrites its own information if it finds that the physical object has a different size.
2. The receiver's created buffer size does not match the sender's object size. --> the receiver destroys the previous buffer and creates a new buffer with the correct size. This might cause some transient errors but eventually object transfer should succeed.

Unfortunately I couldn't trigger this from Python because it depends on some pretty specific timing conditions. However, I did add some unit tests for case 2 (this is the majority of the PR).
2022-02-25 09:39:14 -08:00
mwtian
839bc5019f
Fix building Windows wheels (#22388) (#22391)
This fixes Windows wheel build issue on master and releases/1.11.0 branch. If the issue happens more often we can try to run iwyu.
2022-02-15 15:24:10 -08:00
Lingxuan Zuo
ec62d7f510
[Streaming]Farewell : remove all of streaming related from ray repo. (#21770)
New repo url is https://github.com/ray-project/mobius

Co-authored-by: 林濯 <lingxuzn.zlx@antgroup.com>
2022-01-23 17:53:41 +08:00
Yi Cheng
e4ba51f25b
[core] Add GC for function table (#21509)
In Ray, functions are exported to the function table at runtime, but they are not cleaned up after use. This PR garbage collects those entries when there is no job or detached actor referencing them.

Ideally, we should move the function table import/export feature to core, so a GCS function manager is introduced; currently it's used for reference counting only.
2022-01-13 18:06:05 -08:00
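The reference-counting idea can be sketched as: each exported function entry is tied to the set of jobs/detached actors that reference it, and the entry is deleted once that set becomes empty. Names below are hypothetical.

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

using JobID = std::string;        // stand-ins for the real ID types
using FunctionKey = std::string;  // key of an entry in the function table

// Hypothetical GCS-side function manager doing reference counting only.
class GcsFunctionManager {
 public:
  void AddReference(const FunctionKey &fn, const JobID &job) { refs_[fn].insert(job); }

  // Called when a job finishes or a detached actor is destroyed; once nothing
  // references the exported function, its function table entry can be deleted.
  void RemoveReference(const FunctionKey &fn, const JobID &job) {
    auto it = refs_.find(fn);
    if (it == refs_.end()) return;
    it->second.erase(job);
    if (it->second.empty()) {
      refs_.erase(it);
      // DeleteFromFunctionTable(fn);  // actual storage cleanup elided
    }
  }

 private:
  std::unordered_map<FunctionKey, std::unordered_set<JobID>> refs_;
};
```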
SangBin Cho
f5fdbeb594
Refactor event tracker out of asio class (#21215)
This refactors the event tracker to be decoupled from the asio class.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2022-01-12 22:43:31 -08:00
Jiajun Yao
aec37d4b60
Add container utils (#21444)
- Add debug_string helper functions for common containers.
- Add map_find_or_die helper function
2022-01-10 15:29:29 -08:00
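Sketches of what such helpers typically look like; the signatures are illustrative, not the exact ones added by the commit.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Join the elements of any iterable container into "[a, b, c]" for debug logs.
template <typename Container>
std::string debug_string(const Container &c) {
  std::ostringstream os;
  os << "[";
  bool first = true;
  for (const auto &item : c) {
    if (!first) os << ", ";
    os << item;
    first = false;
  }
  os << "]";
  return os.str();
}

// Look up a key and fail loudly (assert) if it is missing, instead of silently
// inserting a default value the way operator[] would.
template <typename Map, typename Key>
const typename Map::mapped_type &map_find_or_die(const Map &map, const Key &key) {
  auto it = map.find(key);
  assert(it != map.end() && "key not found");
  return it->second;
}
```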
Jiajun Yao
48a5208645
Refactor ObjectManager wait logic to WaitManager (#21369)
- This PR moves the `ObjectManager::Wait` related logic to a separate WaitManager class.
- Fix the wait hang issue by not relying on the async object location notification but instead checking whether the wait is complete when the local object is added; at that point the object is guaranteed to be local.
2022-01-06 10:42:31 -05:00
Qing Wang
3c68370fcf
[Core] Cache job_configs instead of ray_namespace. (#21279)
We need more than just the ray_namespace config of a job. In this PR, we cache the job_configs instead of ray_namespaces, so that we can use it for other PRs (for example, #21249 needs the num_java_worker_pre_process item).

Also, before this PR, the ray_namespaces_ cache was never cleared; we clear the cache in this PR.
2022-01-05 17:48:06 -08:00
mwtian
2410ec5ef0
[Core][Dashboard Pubsub 1/n] Allow a channel to have subscribers to a key and to the whole channel concurrently (#20954)
For the actor channel, GCS clients subscribe to a single actor while the dashboard subscribes to all actors. This change makes supporting both at the same time possible.

Most of the added code is in `integration_test.cc`, which tests the publisher and subscriber together.

Also, add the basic support for dashboard reporter pubsub.
2021-12-09 15:00:38 -08:00