Commit graph

5935 commits

Author SHA1 Message Date
Antoni Baum
dad6ac2f0a
[tune] Move _head_bundle_is_empty after conversion (#21039) 2021-12-14 17:45:07 +01:00
SangBin Cho
7baf62386a
[Core] Shorten the GCS dead detection to 60 seconds instead of 10 minutes. (#20900)
Currently, when a GCS RPC fails with a gRPC unavailable error because the GCS is dead, it will retry forever.

b3a9d4d87d/src/ray/rpc/gcs_server/gcs_rpc_client.h (L57)

It takes about 10 minutes to detect the GCS server failure, meaning that if the GCS is dead, users will only notice after about 10 minutes.

This can easily make it look like the cluster is hanging (since users are not that patient). Also, since the GCS is not fault tolerant in OSS yet, 10 minutes is too long a timeout for detecting GCS death.

This PR changes the value to 60 seconds, which I believe is much more reasonable (since this is the same value as our blocking RPC call timeout).
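For illustration, a minimal Python sketch of the behavior change (the real logic lives in C++ in gcs_rpc_client.h; the constant and function names below are assumptions, not Ray's actual code):

```
import time

GCS_DEAD_DETECTION_TIMEOUT_S = 60  # hypothetical constant; was effectively ~10 minutes before

def call_gcs_with_retry(send_rpc, retry_interval_s=1.0):
    """Retry a GCS RPC on 'unavailable' errors, but give up after the deadline."""
    deadline = time.monotonic() + GCS_DEAD_DETECTION_TIMEOUT_S
    while True:
        try:
            return send_rpc()
        except ConnectionError:  # stand-in for a gRPC UNAVAILABLE status
            if time.monotonic() >= deadline:
                raise RuntimeError("GCS is unreachable; assuming it is dead")
            time.sleep(retry_interval_s)
```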
2021-12-14 07:50:45 -08:00
SangBin Cho
8a943e8081
[Core] Fix log monitor concurrency issue (#20987)
This fixes the bug where an empty line is printed to the driver when multiple threads are used.

e.g.,

```
2021-12-12 23:20:06,876	INFO worker.py:853 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379
(TESTER pid=12344) 
(TESTER pid=12348) 
```

## How does the current log work?
After the actor initialization method is done, the worker prints `({repr(actor)})` to stdout and stderr, which is read by the log monitor. The log monitor can then parse the actor repr and start publishing log messages to the driver under the repr name.

## Problem
If the actor init method starts a new background thread, then when we call `print({repr(actor)})`, the flush() does not appear to happen atomically. Based on my observation, it flushes `repr(actor)` first, and then flushes `end="\n"` (the default end parameter of the print function) afterwards.

Since the log monitor never closes the file descriptor, it can read the log line before the end separator "\n" is flushed. That said, the following scenario can happen.

Expected
- `"repr(actor)\n"` is written to the log file. (which includes the default print end separator `"\n"`).
- Log monitor reads `repr(actor)\n` and publishes it.

Edge case
- `"repr(actor)"` is written to the log
-  Log monitor publishes `repr(actor)`.
- `"\n"` is written to the log (end separator).
- Log monitor publishes `"\n"`.

Note that this only happens when we print the actor repr "after" actor initialization. I think that because a new thread is running in the background, the GIL kicks in and creates a gap between `flush(repr(actor))` and `flush("\n")`, which causes the issue.

I verified the issue is fixed when I add the separator ("\n") directly to the string and make the end separator an empty string.

An alternative fix is to file-lock the log file whenever the log monitor reads / the core worker writes, but that seems too heavy a solution compared to its benefit.
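A minimal sketch of the fix described above (a hypothetical snippet, not the exact Ray code): embedding the newline in the string means a single write carries the separator, so the log monitor can never observe the repr without its trailing newline.

```
import sys

def print_actor_repr(actor_repr: str) -> None:
    # Before: print(actor_repr) may flush the repr and the "\n" separately,
    # so a reader tailing the file can see a line without its newline.
    # After: the newline is part of the single string that gets written.
    print(f"{actor_repr}\n", end="", file=sys.stdout)
    print(f"{actor_repr}\n", end="", file=sys.stderr)
```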
2021-12-14 06:32:30 -08:00
SangBin Cho
e1adf2716f
[Core] Allow to configure dashboard agent grpc port. (#20936)
Currently, there's no way to specify the gRPC port for dashboard agents (only HTTP ports can be configured). This PR allows users to do that.

This also adds more features:
- Format the port conflict error message better. The default worker ports span 10002-19999, and listing each one individually spams the terminal; the message is now formatted as the range 10002-19999.
- Add the dashboard HTTP port to the port conflict list.
2021-12-14 06:31:22 -08:00
Yi Cheng
613a7cc61d
[flaky] deflaky test_multi_node_3 (#21069)
`test_multi_node_3` failed because we killed the raylet before the cluster was up, which caused the raylet to become a zombie process. This fix waits until the cluster is up before killing it.
2021-12-14 00:17:01 -08:00
WanXing Wang
72bd2d7e09
[Core] Support back pressure for actor tasks. (#20894)
Resubmit the PR https://github.com/ray-project/ray/pull/19936

I've figured out that the test case `//rllib:tests/test_gpus::test_gpus_in_local_mode` failed due to a deadlock in local mode.
In local mode, if the user code submits another task during the execution of the current task, the `CoreWorker::actor_task_mutex_` may cause a deadlock.
The solution is quite simple: release the lock before executing the task in local mode (a minimal repro sketch follows the list below).

In the commit 7c2f61c76c:
1. Release the lock in local mode to fix the bug. @scv119 
2. `test_local_mode_deadlock` added to cover the case. @rkooo567 
3. Left a trivial change in `rllib/tests/test_gpus.py` to make `RAY_CI_RLLIB_DIRECTLY_AFFECTED` take effect.
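A minimal repro sketch of the local-mode pattern described above (a hypothetical example, not the actual test): an actor method that submits and waits on another task while its own task is still executing.

```
import ray

@ray.remote
def nested_task():
    return 1

@ray.remote
class Worker:
    def run(self):
        # Submitting work from inside an actor task; in local mode this used to
        # deadlock on CoreWorker::actor_task_mutex_ before the lock was released.
        return ray.get(nested_task.remote())

ray.init(local_mode=True)
print(ray.get(Worker.remote().run.remote()))
ray.shutdown()
```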
2021-12-13 23:56:07 -08:00
Kai Yang
f5dfe6c158
[Dataset] [DataFrame 1/n] Refactor table block structure to support potentially more block formats (#20721)
This is the first PR of a series of Ray Dataset and Pandas integration improvements.

This PR refactors `ArrowRow`, `ArrowBlockBuilder`, `ArrowBlockAccessor` by extracting base classes `TableRow`, `TableBlockBuilder`, `TableBlockAccessor`, which can then be inherited for pandas DataFrame support in the next PR.
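A rough sketch of the extraction (method bodies are illustrative, not the actual Ray Datasets code): format-specific accessors inherit from a shared table base class so a pandas backend can be slotted in next to Arrow.

```
class TableBlockAccessor:
    """Format-agnostic table block accessor; subclasses supply the format details."""

    def __init__(self, table):
        self._table = table

    def num_rows(self) -> int:
        raise NotImplementedError


class ArrowBlockAccessor(TableBlockAccessor):
    def num_rows(self) -> int:
        return self._table.num_rows  # pyarrow.Table


class PandasBlockAccessor(TableBlockAccessor):  # added in the follow-up PR
    def num_rows(self) -> int:
        return len(self._table)  # pandas.DataFrame
```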
2021-12-13 22:34:59 -08:00
iasoon
33059cff3d
[serve] support not exposing deployments over http (#21042) 2021-12-13 09:43:55 -08:00
Balaji Veeramani
ce9ddf659b
[Train] Convert TrainingResult to dataclass (#20952) 2021-12-13 09:07:52 -08:00
xwjiang2010
f395b6310f
[tune] Show the name of training func, instead of just ImplicitFunction. (#21029) 2021-12-13 16:28:44 +00:00
Shantanu
b743f514d7
[core] do not silently re-enable gc (#20864)
In our code, we wish to have control over when GC runs. We do this by calling `gc.disable()` and then manually triggering GC at moments that work for us. This does not work if Ray re-enables GC.

Co-authored-by: hauntsaninja <>
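For reference, the application-side pattern the message describes (standard library only):

```
import gc

gc.disable()  # the application opts out of automatic collection

# ... latency-sensitive work (e.g. submitting Ray tasks) runs here ...

gc.collect()               # collect only at a moment the application chooses
assert not gc.isenabled()  # with this fix, Ray no longer flips this back on
```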
2021-12-13 06:03:03 -08:00
Matti Picus
6c6c76c3f0
Starting workers map (#20986)
PR #19014 introduced the idea of a StartupToken to uniquely identify a worker via a counter. This PR:
- Returns the Process and the StartupToken from StartWorkerProcess (previously only the Process was returned).
- Changes the starting_workers_to_tasks map to index via the StartupToken, which seems to fix the Windows failures.
- Unskips the Windows tests in test_basic_2.py.
It seems that once a fix to PR #18167 goes in, the starting_workers_to_tasks map will be removed, which should remove the need for the changes to StartWorkerProcess made in this PR.
2021-12-12 19:28:53 -08:00
Yi Cheng
f4e6623522
Revert "Revert "[core] Ensure failed to register worker is killed and print better log"" (#21028)
Reverts ray-project/ray#21023.
Reverting this one since 7fc9a9c227 has fixed the issue.
2021-12-11 20:49:47 -08:00
Eric Liang
6f93ea437e
Remove the flaky test tag (#21006) 2021-12-11 01:03:17 -08:00
Jiajun Yao
f04ee71dc7
Fix driver lease request infinite loop when local raylet dies (#20859)
Currently, if a local lease request fails due to raylet death, direct_task_transport.cc will retry forever for the driver.

With this PR, we treat gRPC unavailable as a non-retryable error (the assumption is that local gRPC is always reliable, and a gRPC unavailable error indicates that the server is dead) and just fail the task (see the sketch below).

Note: this PR doesn't try to address a bigger problem: not crashing the driver when the local raylet dies. We have multiple places in the code that assume the local raylet never fails and have CHECK_STATUS_OK for that. All of these places need to be changed before we can properly propagate failures to the user.
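A hedged Python sketch of the retry-policy change (the real code is C++ in direct_task_transport.cc; the names and status set here are illustrative only):

```
def is_retryable_lease_error(grpc_status: str) -> bool:
    # UNAVAILABLE from the *local* raylet is assumed to mean the raylet is dead,
    # so the lease request fails immediately instead of retrying forever.
    if grpc_status == "UNAVAILABLE":
        return False
    return grpc_status in {"DEADLINE_EXCEEDED", "RESOURCE_EXHAUSTED"}
```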
2021-12-10 18:02:59 -08:00
Stephanie Wang
3a5dd9a10b
[core] Pin object if it already exists (#20447)
A worker can crash right after putting its return values into the object store. Then, the owner will receive the worker crashed error, but the return objects will still be in the remote object store. Later, if the task is retried, the worker will crash on [this line](https://github.com/ray-project/ray/blob/master/src/ray/core_worker/transport/direct_actor_transport.cc#L105) because the object already exists.

Another way this can happen is if a task has multiple return values, and one of those return values is transferred to another node. If the task is later re-executed on that node, the task will fail because of the same error.

This PR fixes the crash so that:
1. If an object already exists, we try to pin that copy. Ideally, we should destroy the old copy and create the new one to make sure that metadata like the owner address is in sync, but this is pretty complicated to do right now.
2. If the pinning fails, we store an OBJECT_LOST error to throw to the application.
3. On the raylet, we check whether we already have the object pinned, and only subscribe to the owner's eviction message if the object is not pinned.
4. Also fixes bugs in the analogous case for `ray.put` (previously this would hang, now the application will receive an error if a `ray.put` object already exists).
2021-12-10 15:56:43 -08:00
Yi Cheng
6280bc4391
Revert "[core] Ensure failed to register worker is killed and print better log" (#21023)
`linux://python/ray/tests:test_runtime_env_complicated` looks flaky after this PR.
Reverts ray-project/ray#20964
2021-12-10 14:57:32 -08:00
architkulkarni
7fc9a9c227
[CI] bump tasks_finish_quickly timeout from 1s to 2s (#21015) 2021-12-10 14:16:12 -08:00
Chris K. W
27665fdf29
[client][test] skip test_valid_actor_state_2 (#20995)
* skip test_valid_actor_state_2

* rerun
2021-12-10 10:58:42 -08:00
mwtian
d751a242d8
[Core][Dashboard Pubsub 2/n] Add resource reporter and actor to Python GCS pubsub (#21001)
The dashboard contains resource reporter and actor subscribers, and the dashboard agent has a resource report publisher, so GCS pubsub needs to support these channel types.

Also refactors the GCS AIO subscribers to have one subscriber per channel. This matches the API of the GCS sync subscribers and makes subscribing to multiple channels easier.
2021-12-09 23:10:10 -08:00
Jiajun Yao
5b21bc5c93
[BUILD] Use bazel-skylib rule to check bazel version (#20990)
Bazel has a rule for enforcing the version, so we can just reuse that.

The redundant Bazel version check logic in setup.py was also causing issues when building the conda package, because conda has its own version of Bazel that doesn't support `--version`.
2021-12-09 15:25:22 -08:00
SangBin Cho
05a302b468
[Internal Observability] [Part 3] Support debug state metrics on all components. (#20957)
This PR adds RecordMetrics and DebugString to all raylet components. 

Some of the methods are probably empty now; they will be implemented in the next PR.
2021-12-09 14:24:15 -08:00
Chen Shen
6a274dfd76
[CI][Chaos-test] Chaos test can now set max-nodes-to-kill (#20962) 2021-12-09 13:41:46 -08:00
Yi Cheng
83c639ea76
[core] Ensure failed to register worker is killed and print better log (#20964)
Before this PR, when the raylet noticed something wrong with a starting worker, it would start a new worker but not kill the old one. If the old one was hanging, this led to wasted resources.
This PR kills the failed worker if it's still alive and also prints useful logs.
2021-12-09 12:37:39 -08:00
kk-55
9acf2f954d
[RLlib] Example containing a proposal for computing an adapted (time-dependent) GAE used by the PPO algorithm (via callback on_postprocess_trajectory) (#20850) 2021-12-09 14:48:56 +01:00
Jiajun Yao
655cc584a9
[Scheduler] Support per task/actor SpreadSchedulingStrategy (#20972)
This PR adds a per-task/actor SpreadSchedulingStrategy, which will try to spread tasks across nodes on a best-effort basis.
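Usage sketch, assuming the strategy is exposed via the string form of the `scheduling_strategy` option:

```
import ray

ray.init()

# Best-effort spreading of tasks across nodes in the cluster.
@ray.remote(scheduling_strategy="SPREAD")
def spread_task():
    return ray.util.get_node_ip_address()

print(set(ray.get([spread_task.remote() for _ in range(8)])))
```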
2021-12-08 22:22:07 -08:00
Eric Liang
22ccc6b300
Initial stats framework for datasets (#20867)
This adds an initial Dataset.stats() framework for debugging dataset performance. At a high level, execution stats for tasks (e.g., CPU time) are attached to block metadata objects. Datasets have stats objects that hold references to these stats and parent dataset stats (this avoids stats holding references to parent datasets, allowing them to be gc'ed). Similarly, DatasetPipelines hold stats from recently computed datasets.

Currently only basic ops like map / map_batches are instrumented. TODO placeholders are left for future PRs.
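Usage sketch, assuming the API shape added here (a `stats()` method on `Dataset` that returns a human-readable summary):

```
import ray

ds = ray.data.range(10000).map(lambda x: x * 2)
# Per-stage execution stats (e.g. wall/CPU time per block) are attached to block metadata.
print(ds.stats())
```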
2021-12-08 16:13:57 -08:00
SangBin Cho
5298a9046c
[Internal Observability] [Part 1] Centralize existing metrics to metric_defs.h (#20728)
This PR centralizes all existing metrics to `metric_defs.h`. 

Previously, each file relied on an implicit import of metric_defs.h within the stats module. After this PR, each file precisely imports `metric_defs.h`.
2021-12-08 14:06:05 -08:00
Antoni Baum
4c47f56b61
[tune] Add random state to BasicVariantGenerator (#20926)
This PR adds the ability to set a random seed / NumPy random generator in BasicVariantGenerator, allowing for reproducibility across separate runs. All the changes are fully backwards compatible.

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
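Usage sketch; the parameter name `random_state` and the trainable below are my assumptions, not taken from the PR itself:

```
from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

def trainable(config):
    tune.report(score=config["lr"] * 2)

# Fixing the seed makes the sampled `lr` values identical across separate runs.
tune.run(
    trainable,
    config={"lr": tune.uniform(1e-4, 1e-1)},
    search_alg=BasicVariantGenerator(random_state=42),  # assumed parameter name
    num_samples=4,
)
```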
2021-12-08 20:15:53 +00:00
architkulkarni
5593819135
Revert "Revert "[runtime env] Allow working_dir and py_module to be Path type"" (#20853) 2021-12-08 11:17:19 -08:00
architkulkarni
78cd377775
[Serve] Bump test_cluster from small to medium (#20942) 2021-12-08 09:58:27 -08:00
Jiajun Yao
5b168a1515
[Scheduler] Support per task/actor PlacementGroupSchedulingStrategy (#20507)
This PR adds a per-task/actor scheduling strategy; currently the only strategies are PlacementGroupSchedulingStrategy and DefaultSchedulingStrategy.

Going forward, people should use `scheduling_strategy=PlacementGroupSchedulingStrategy` to specify the placement group for an actor/task. The old way will be deprecated.
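Usage sketch of the new style (the bundle spec and actor are illustrative):

```
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()
pg = placement_group([{"CPU": 1}])
ray.get(pg.ready())

@ray.remote(num_cpus=1)
class Worker:
    def ping(self):
        return "scheduled inside the placement group"

w = Worker.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=pg)
).remote()
print(ray.get(w.ping.remote()))
```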
2021-12-07 23:11:31 -08:00
Jiajun Yao
6a07f03a6a
[Test] Fix flaky test_failure_2.py (#20949)
There is a race condition between `DestroyActor` (triggered when the actor handle goes out of scope) and `CreateActor`. If `DestroyActor` happens first, `CreateActor` will fail, complaining that it cannot find the registered actor.
2021-12-07 19:44:16 -08:00
Hankpipi
67518bdc50
[serve] Reconfiguration bug fix (#20315)
As described in #18884, reconfiguration will mutate state mid-query. I solve this problem by adding a read/write lock to each replica.

Co-authored-by: yuzihao.2001 <yuzihao.2001@bytedance.com>
2021-12-07 18:53:45 -08:00
Jiajun Yao
2208cf7672
[Ray Client] Pickle task options for ray client (#20930)
We can just pickle task options instead of converting them to JSON, so that we don't need to write custom `to_dict` and `from_dict` methods for complex Python option objects (e.g. PlacementGroup).
2021-12-07 17:07:19 -08:00
Dmitri Gekhtman
94883f61b1
[autoscaler] Event summarizer reports launch failure (#20814)
Partly addresses #20774 by registering node launcher failures in driver logs, via the event summarizer.
This way, users can tell that the launch failed from the driver logs.

Also pushes the node creation exception to driver logs, but only once per 60 minutes.
2021-12-07 16:23:45 -08:00
Yi Cheng
ea1d081aac
[core] Simple chaos testing for asio (#19970)
Right now in Ray, a lot of edge cases related to gRPC are not tested. This PR is a simple attempt to give developers a way to delay gRPC requests. It can be used for manual testing and also e2e tests, since it supports delays for specific gRPC methods.

To use this feature, simply set the OS environment variable `RAY_TESTING_ASIO_DELAY_US="method1=10:20,method2=20:30,*=200:200"`.

This means `method1` will be delayed by 10-20us, `method2` will be delayed by 20-30us, and all other methods will be delayed by 200us.
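For example, when starting a local cluster (the delay spec format comes straight from the message above; the env var must be set before the Ray processes start):

```
import os

# Delay spec: "<method>=<min_us>:<max_us>"; "*" is the catch-all entry.
os.environ["RAY_TESTING_ASIO_DELAY_US"] = "method1=10:20,method2=20:30,*=200:200"

import ray
ray.init()  # matching gRPC handlers are now artificially delayed
```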
2021-12-07 14:47:07 -08:00
architkulkarni
cd12814975
[Serve] [doc] Fix broken serve metrics doc snippet (#20925)
Was failing with `TypeError: f() missing 1 required positional argument: 'request'`.
2021-12-07 14:02:47 -08:00
Clark Zinzow
336882e32c
[Datasets] Fix boolean tensor column slicing. (#20905)
Arrow byte-packs boolean arrays, with 8 entries per byte, and both Arrow and NumPy utilize bit-based offsets for indexing into data buffers. This PR adds support for properly indexing into boolean tensor columns by using bit-based offsets for such columns.
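A small illustration of why bit-based offsets matter (plain pyarrow, not the Ray fix itself): a slice into a bit-packed boolean array can start in the middle of a byte.

```
import pyarrow as pa

# Booleans are bit-packed: 8 values per byte, so positional offsets are bits, not bytes.
arr = pa.array([True, False, True, True, False, True, False, False, True])
sliced = arr.slice(3, 4)   # offset 3 falls inside the first packed byte
print(sliced.to_pylist())  # [True, False, True, False]
```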
2021-12-07 12:47:43 -08:00
xwjiang2010
011ae389a6
[tune] minor clean up of executor.restore(). (#20916)
Removes two unneeded parameters that were no longer used.
2021-12-07 16:37:48 +00:00
Eric Liang
5d5ca8fed4
[dataset] Reduce memory usage during .to_pandas() (#20921)
We shouldn't ray.get() all the blocks at once during the to_pandas call; it's better to fetch them one by one. That's a little slower, but to_pandas() isn't expected to be fast anyway.
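A sketch of the pattern (a hypothetical helper, not Ray's internal block handling): fetching one block at a time keeps only one materialized block resident beyond the accumulated frames, instead of holding every block at once.

```
import pandas as pd
import ray

def blocks_to_pandas(block_refs):
    """Fetch and convert blocks one at a time instead of ray.get()-ing them all up front."""
    frames = []
    for ref in block_refs:
        block = ray.get(ref)               # one ray.get per block
        frames.append(block.to_pandas())   # assumes Arrow-backed blocks
    return pd.concat(frames, ignore_index=True)
```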
2021-12-06 18:17:16 -08:00
Balaji Veeramani
17b6130f9f
[Train] Clarify Session.get_next docstring (#20877)
This PR addresses two issues with the Session.get_next docstring:

It's unclear whether the tense should be imperative or non-imperative. The Ray documentation states that we use Google style (which is non-imperative), but we are formatting using PEP8 (which is imperative). Moreover, we use both imperative and non-imperative summaries across the Ray Train code. I've stuck with a non-imperative summary for consistency with the rest of the Session class.
The docstring doesn't describe the conditions under which the function returns None.
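For illustration, the two summary styles in question (hypothetical wording):

```
class Session:
    def get_next(self):
        """Gets the next training result, or returns None when training has finished."""
        # Non-imperative summary ("Gets ...", Google style) chosen here for consistency,
        # vs. the imperative form ("Get ...") used elsewhere in the codebase.
```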
2021-12-06 17:26:25 -08:00
Dmitri Gekhtman
10ed6765c7
[Autoscaler] Add autoscaler update time prometheus metric. (#20831)
It's useful when measuring autoscaler performance to know how long the autoscaler takes for update iterations.
This PR adds a prometheus metric for that. 

The bucket ticks for the histogram are arranged in powers of ten: 
[.001, .01, .1, 1, 10, 100, 1000]. Depending on the situation, we've seen the update time range from .1 second to a few minutes.
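A sketch with the `prometheus_client` library (the metric name is illustrative, not the one Ray exports):

```
from prometheus_client import Histogram

AUTOSCALER_UPDATE_TIME = Histogram(
    "autoscaler_update_time_seconds",
    "Wall-clock time of one autoscaler update iteration.",
    buckets=[0.001, 0.01, 0.1, 1, 10, 100, 1000],  # powers-of-ten ticks, as above
)

with AUTOSCALER_UPDATE_TIME.time():
    pass  # one autoscaler update iteration would run here
```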
2021-12-06 11:55:07 -08:00
shrekris-anyscale
2d58664f98
Add Pydantic validation to Serve AutoscalingConfig class (#20779) 2021-12-06 11:51:02 -08:00
architkulkarni
c024d984e5
[runtime_env] Parse local conda/pip requirements files before sending runtime env to Ray Client Server (#20885) 2021-12-06 12:04:22 -06:00
Siyuan (Ryans) Zhuang
3285bb6b9f
[train] Fix assertion in test (#20893) 2021-12-05 17:47:43 -08:00
Kai Fricke
d4413299c0
Revert "[Core] Support back pressure for actor tasks (#19936)" (#20880)
This reverts commit a4495941c2.
2021-12-03 17:48:47 -08:00
architkulkarni
ac911d6b51
[runtime_env] Change error log to debug log (#20875) 2021-12-03 18:21:43 -06:00
xwjiang2010
b0c56c8f9b
[tune] Fix testResourceScheduler and testMultiStepRun. (#20872)
This commit fixes flaky tests by relaxing strictly deterministic trial runner behavior constraints.
2021-12-03 16:07:52 -08:00
Amog Kamsetty
611bfc1352
[ML] Move find_free_port to ml_utils (#20828)
Small refactoring of a common utility used by Train, Tune, and RLlib.
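For reference, the common shape of such a utility (a sketch, not necessarily the exact code that was moved):

```
import socket

def find_free_port() -> int:
    """Bind to port 0 so the OS picks a currently free port, then return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return s.getsockname()[1]
```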
2021-12-03 13:38:42 -08:00