Commit graph

10724 commits

Author SHA1 Message Date
Chen Shen
80eb00f525
[Chaos] fix dataset chaos test #21113 2021-12-15 20:13:38 -08:00
Simon Mo
e453bfdb8e
[Serve] Run long poll callbacks in event loop (#21104) 2021-12-15 16:27:08 -08:00
Yi Cheng
abdf9b5f3c
[nightly] Fix benchmark commit check failure (#21119)
It looks like somehow `pip3 install -U` won't update ray anymore, and we need to uninstall before installing.
2021-12-15 14:54:03 -08:00
Matti Picus
d2cd0730a0
[Windows] Enable test_advanced_2 on windows (#20994) 2021-12-15 14:30:40 -08:00
Sven Mika
e485aa846a
[RLlib; Docs overhaul] Overhaul of auto-API reference pages (via sphinx autoclass/automodule). (#19786) 2021-12-15 22:32:52 +01:00
Ian Rodney
c7fb5a94d1
[CI] Upgrade Pip to 21.3 (#21111) 2021-12-15 13:29:45 -08:00
Clark Zinzow
ec06a1f65e
[CUJ#2] Update nightly test for CUJ#2 #21064 2021-12-15 13:19:59 -08:00
Chen Shen
03e05df9cb
[Core] fix wrong memory size reporting #21089
The current resource reporting is broken in OSS; revert the change. For example it reported

InitialConfigResources: {node:172.31.45.118: 1.000000}, {object_store_memory: 468605759.960938 GiB},
for a 10GB-memory object store.
2021-12-15 10:24:35 -08:00
xwjiang2010
33b6cd9105
[tune] _cached_actor_pg can only be non empty when reuse_actors = True. (#21067)
Removes dead code path.
2021-12-15 18:59:29 +01:00
SangBin Cho
2878161a28
[Core] Properly implement some blocking RPCs with promise. Actor + KV store (#20849)
This PR implements gRPC timeout for various blocking RPCs.

Previously, the timeout with promise didn't work properly because the client didn't cancel timed-out RPCs. This PR properly implements RPC timeouts.

This PR supports:

- Blocking RPCs for core APIs, creating / getting / removing actor + pg.
- Internal KV ops

The global state accessor also has infinite blocking calls that we need to fix, but fixing them requires a huge refactoring, so that will be done in a separate PR.

Same for the placement group calls (they will be done in a separate PR).

Also, this means we can have scenarios where the client receives the DEADLINE EXCEEDED error but the handler is still invoked. Right now, this is not handled correctly in Ray. We should start thinking about how to handle these scenarios better.
2021-12-15 06:46:43 -08:00
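The client-side pattern this commit describes, a blocking call resolved either by the reply or by a deadline, with the timed-out call actually cancelled, can be sketched generically in Python (this is a hedged illustration, not Ray's C++ implementation; the function name is invented):

```python
import concurrent.futures
import time

def call_with_deadline(fn, timeout_s, executor):
    """Submit a blocking call and give up after timeout_s, making sure the
    timed-out call is cancelled rather than left pending forever."""
    future = executor.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Without this cancel, the timed-out call lingers (the bug the
        # commit describes). Note cancel() cannot stop an already-running
        # call, mirroring the "DEADLINE EXCEEDED but handler ran" caveat.
        future.cancel()
        raise TimeoutError("RPC deadline exceeded")

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    print(call_with_deadline(lambda: "reply", 1.0, pool))  # → reply
    try:
        call_with_deadline(lambda: time.sleep(0.3), 0.05, pool)
    except TimeoutError as e:
        print(e)  # → RPC deadline exceeded
```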
simonsays1980
1a8aa2da1f
[RLlib] Added `tensorlib=numpy` to `restore_original_dimensions()` such that … (#20342) 2021-12-15 14:03:18 +01:00
Alexis DUBURCQ
6c3e63bc9c
[RLlib] Fix view requirements. (#21043) 2021-12-15 11:59:04 +01:00
Jun Gong
767f78eaf8
[RLlib] Always attach latest eval metrics. (#21011) 2021-12-15 11:42:53 +01:00
SangBin Cho
1c1430ff5c
Add memory monitor to scalability tests. (#21102)
This adds memory monitoring to scalability envelope tests so that we can compare the peak memory usage for both non-HA & HA.

NOTE: the current way of adding the memory monitor is not great, and we should implement a fixture to support this better, but that's not in progress yet.
2021-12-15 01:31:38 -08:00
Kai Fricke
ecbf29ec03
[tune] Fix best_trial_str for nested custom parameter columns (#21078)
Currently, custom nested parameter_columns are not printed correctly in the best trial string. This PR fixes the printing error and adds a unit test.
2021-12-15 10:26:22 +01:00
Ian Rodney
deb3505150
[Java] Bump Log4j2 to completely remove lookups (#21081)
As per the 2.16.0 release of Log4j2, Lookup support is removed 🎉 
https://logging.apache.org/log4j/2.x/changes-report.html#a2.16.0
2021-12-15 15:45:56 +08:00
WanXing Wang
1c3506a2aa
[Streaming] Fix potential memory problems when deleting buffer. (#21101)
`delete buffer` -> `delete[] buffer` to fix potential memory problems under C++14, such as a jemalloc deadlock.
2021-12-15 15:24:23 +08:00
Jiao
e9daacff60
[Job][Docs] Update docs architecture image link (#21087) 2021-12-14 23:07:38 -08:00
Jiao
ed34434131
[Jobs] Add log streaming for jobs (#20976)
The current logs API simply returns a str to unblock development and integration. We should add proper log streaming for better UX and external job manager integration.

Co-authored-by: Sven Mika <sven@anyscale.io>
Co-authored-by: sven1977 <svenmika1977@gmail.com>
Co-authored-by: Ed Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>
2021-12-14 17:01:53 -08:00
Edward Oakes
10947c83b3
[runtime_env] Make pip installs incremental (#20341)
Uses a direct `pip install` instead of creating a conda env to make pip installs incremental to the cluster environment.

Separates the handling of `pip` and `conda` dependencies.

The new `pip` approach still works if only the base Ray is installed on the cluster and the user specifies libraries like "ray[serve]" in the `pip` field.  The mechanism is as follows:
- We don't actually want to reinstall ray via pip, since this could lead to version mismatch issues.  Instead, we want to use the Ray that's already installed in the cluster.
- So if "ray" was included by the user in the pip list, remove it
- If a library "ray[serve]" or "ray[tune, rllib]" was included in the pip list, remove it and replace it by its dependencies (e.g. "uvicorn", "requests", ..)

Co-authored-by: architkulkarni <arkulkar@gmail.com>
Co-authored-by: architkulkarni <architkulkarni@users.noreply.github.com>
2021-12-14 15:55:18 -08:00
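The "remove ray, expand its extras" mechanism described above can be sketched roughly as follows. The `RAY_EXTRA_DEPS` mapping and the helper name are hypothetical; the real dependency lists live in Ray's packaging metadata, not here:

```python
import re

# Hypothetical mapping of Ray extras to their dependencies (illustrative only).
RAY_EXTRA_DEPS = {
    "serve": ["uvicorn", "requests"],
    "tune": ["pandas", "tabulate"],
    "rllib": ["gym", "lz4"],
}

def rewrite_pip_list(pip_list):
    """Drop bare 'ray' entries and replace 'ray[extra, ...]' entries with
    the extras' dependencies, so the Ray already installed on the cluster
    is reused instead of reinstalled (avoiding version mismatches)."""
    result = []
    for req in pip_list:
        match = re.fullmatch(r"ray(\[([\w,\s]+)\])?", req.strip())
        if match is None:
            result.append(req)  # unrelated package: keep as-is
        elif match.group(2):    # e.g. "tune, rllib"
            for extra in (e.strip() for e in match.group(2).split(",")):
                result.extend(RAY_EXTRA_DEPS.get(extra, []))
        # bare "ray": drop it entirely
    return result

print(rewrite_pip_list(["ray[serve]", "numpy", "ray"]))
# → ['uvicorn', 'requests', 'numpy']
```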
Gagandeep Singh
57cc76cf5e
[windows][ci] Increase in num_tasks for stable wait times in test_worker_capping:test_exponential_wait (#21051)
The `results` with fewer tasks contained unstable wait times, so the number of tasks was increased in the hope of less noisy wait times. Minor changes to the assert comparisons have also been made for smaller error.
2021-12-14 15:54:55 -08:00
newmanwang
42a108ff60
[gcs] Fix cannot reuse actor name in same namespace (#21053)
Before this PR, GcsActorManager::CreateActor() would replace an actor's namespace with the owner job's namespace, even if the actor was created with a user-specified namespace. But in named_actors_, the actor is registered under the user-specified namespace by GcsActorManager::RegisterActor before CreateActor() is called. As a result, GcsActorManager::DestroyActor failed to find the actor in named_actors_ (it looked it up by the owner job's namespace) and never removed it, so reusing the actor name in the same namespace failed.

issue #20611
2021-12-14 14:20:49 -08:00
SangBin Cho
5665b69fff
[Internal Observability] Record GCS debug stats to metrics (#20993)
Records all existing GCS debug state as metrics.
2021-12-14 14:19:37 -08:00
Junwen Yao
8325a32d66
[Train] Update saving / loading checkpoint documentation (#20973)
This PR updates saving / loading checkpoint examples.

Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
2021-12-14 09:53:17 -08:00
Matti Picus
aec04989fc
WINDOWS: enable test_advanced_3.py (#21056) 2021-12-14 09:25:23 -08:00
Antoni Baum
dad6ac2f0a
[tune] Move _head_bundle_is_empty after conversion (#21039) 2021-12-14 17:45:07 +01:00
SangBin Cho
7baf62386a
[Core] Shorten the GCS dead detection to 60 seconds instead of 10 minutes. (#20900)
Currently, when a GCS RPC fails with a gRPC unavailable error because the GCS is dead, it will retry forever. 

b3a9d4d87d/src/ray/rpc/gcs_server/gcs_rpc_client.h (L57)

And it takes about 10 minutes to detect the GCS server failure, meaning if GCS is dead, users will only notice after 10 minutes.

This can easily cause confusion that the cluster is hanging (since users are not that patient). Also, since GCS is not fault tolerant in OSS now, 10 minutes is too long a timeout for detecting GCS death.

This PR changes the value to 60 seconds, which I believe is much more reasonable (since this is the same value as our blocking RPC call timeout).
2021-12-14 07:50:45 -08:00
SangBin Cho
8a943e8081
[Core] Fix log monitor concurrency issue (#20987)
This fixes the bug where an empty line is printed to the driver when multiple threads are used.

e.g.,

```
2021-12-12 23:20:06,876	INFO worker.py:853 -- Connecting to existing Ray cluster at address: 127.0.0.1:6379
(TESTER pid=12344) 
(TESTER pid=12348) 
```

## How does the current log work?
After the actor initialization method is done, it prints `{repr(actor)}` to stdout and stderr, which will be read by the log monitor. The log monitor can then parse the actor repr and start publishing log messages to the driver with the repr name.

## Problem
If the actor init method contains a new background thread, when we call print({repr(actor)}), it seems flush() does not happen atomically. Based on my observation, it flushes repr(actor) first, and then flushes the end="\n" (the default end parameter of the print function) after.

Since the log monitor never closes the file descriptor, it is possible that it reads the log line before the end separator "\n" is flushed. That is, the following scenario can happen.

Expected
- `"repr(actor)\n"` is written to the log file. (which includes the default print end separator `"\n"`).
- Log monitor reads `repr(actor)\n` and publishes it.

Edge case
- `"repr(actor)"` is written to the log
-  Log monitor publishes `repr(actor)`.
- `"\n"` is written to the log (end separator).
- Log monitor publishes `"\n"`.

Note that this only happens when we print the actor repr "after" actor initialization. I think that since a new thread is running in the background, the GIL comes in and creates a gap between `flush(repr(actor))` and `flush("\n")`, which causes the issue.

I verified the issue is fixed when I add the separator ("\n") directly to the string and make the end separator an empty string. 

An alternative fix is to file-lock the log file whenever the log monitor reads / the core worker writes, but that seems too heavy a solution compared to its benefit.
2021-12-14 06:32:30 -08:00
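The fix described above, folding the separator into the string and passing an empty end, can be shown in a few lines (the repr value and helper name below are stand-ins, not Ray's actual code):

```python
def format_actor_log_line(actor_repr: str) -> str:
    # Fold the newline into the payload so a single write/flush emits the
    # complete line. print(msg) would flush msg and its "\n" end separately,
    # letting a concurrent reader see the repr before its newline arrives.
    return actor_repr + "\n"

actor_repr = "(TESTER pid=12344)"
# Racy original:  print(actor_repr)  -> message and "\n" flushed separately
# Fixed version:  one string, with print's own end suppressed
print(format_actor_log_line(actor_repr), end="", flush=True)
```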
SangBin Cho
e1adf2716f
[Core] Allow to configure dashboard agent grpc port. (#20936)
Currently, there's no way to specify the grpc port for dashboard agents (only http ports can be configured). This PR allows users to do that.

This also adds more features:
- Format the port conflict error message better. I found the default worker ports (10002-19999) were each printed individually, which spams the terminal; they are now formatted as the range 10002-19999.
- Add dashboard http port to the port conflict list.
2021-12-14 06:31:22 -08:00
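Collapsing a long run of ports into a compact range string, as the commit above describes, can be sketched with a hypothetical helper (not Ray's actual code):

```python
def format_ports(ports):
    """Collapse consecutive port numbers into 'start-end' range strings."""
    ports = sorted(ports)
    if not ports:
        return ""
    ranges = []
    start = prev = ports[0]
    for p in ports[1:]:
        if p == prev + 1:  # still inside the current consecutive run
            prev = p
            continue
        ranges.append(f"{start}-{prev}" if start != prev else str(start))
        start = prev = p
    ranges.append(f"{start}-{prev}" if start != prev else str(start))
    return ", ".join(ranges)

print(format_ports(range(10002, 20000)))          # → 10002-19999
print(format_ports([80, 443, 8000, 8001, 8002]))  # → 80, 443, 8000-8002
```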
Yi Cheng
613a7cc61d
[flaky] Deflake test_multi_node_3 (#21069)
`test_multi_node_3` failed because we killed the raylet before the cluster was up, which leads the raylet to become a zombie process. This fix waits until the cluster is up before killing it.
2021-12-14 00:17:01 -08:00
Chen Shen
3c426ed7b5
[nightly-test] fix dataset nightly test reporting #21061 2021-12-14 00:05:40 -08:00
WanXing Wang
72bd2d7e09
[Core] Support back pressure for actor tasks. (#20894)
Resubmit the PR https://github.com/ray-project/ray/pull/19936

I've figured out that the test case `//rllib:tests/test_gpus::test_gpus_in_local_mode` failed due to a deadlock in local mode.
In local mode, if the user code submits another task during the execution of the current task, the `CoreWorker::actor_task_mutex_` may cause a deadlock.
The solution is quite simple: release the lock before executing the task in local mode.

In the commit 7c2f61c76c:
1. Release the lock in local mode to fix the bug. @scv119 
2. `test_local_mode_deadlock` added to cover the case. @rkooo567 
3. Left a trivial change in `rllib/tests/test_gpus.py` to make the `RAY_CI_RLLIB_DIRECTLY_AFFECTED ` to take effect.
2021-12-13 23:56:07 -08:00
Kai Yang
f5dfe6c158
[Dataset] [DataFrame 1/n] Refactor table block structure to support potentially more block formats (#20721)
This is the first PR of a series of Ray Dataset and Pandas integration improvements.

This PR refactors `ArrowRow`, `ArrowBlockBuilder`, `ArrowBlockAccessor` by extracting base classes `TableRow`, `TableBlockBuilder`, `TableBlockAccessor`, which can then be inherited for pandas DataFrame support in the next PR.
2021-12-13 22:34:59 -08:00
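The extraction pattern in the refactor above, shared table logic in a base class with format specifics in subclasses, in rough outline. The class bodies here are simplified stand-ins, not the actual Ray Datasets code:

```python
class TableBlockAccessor:
    """Format-agnostic logic shared by all table-backed block formats."""
    def __init__(self, table):
        self._table = table

    def num_rows(self) -> int:
        raise NotImplementedError  # format-specific primitive

    def is_empty(self) -> bool:
        # Shared behavior expressed via the format-specific primitive.
        return self.num_rows() == 0

class ArrowBlockAccessor(TableBlockAccessor):
    # A real implementation would wrap a pyarrow.Table.
    def num_rows(self) -> int:
        return self._table.num_rows

class PandasBlockAccessor(TableBlockAccessor):
    # A real implementation would wrap a pandas.DataFrame.
    def num_rows(self) -> int:
        return len(self._table)

class FakeArrowTable:
    num_rows = 0

print(ArrowBlockAccessor(FakeArrowTable()).is_empty())  # → True
```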
Yi Cheng
30d3115c45
[gcs] print log for storage setup of gcs (#21013)
In this PR, logs are printed so that we can check which GCS storage setup the cluster is using. This is useful for debugging and checking.
2021-12-13 14:02:45 -08:00
iasoon
33059cff3d
[serve] support not exposing deployments over http (#21042) 2021-12-13 09:43:55 -08:00
Balaji Veeramani
ce9ddf659b
[Train] Convert TrainingResult to dataclass (#20952) 2021-12-13 09:07:52 -08:00
xwjiang2010
f395b6310f
[tune] Show the name of training func, instead of just ImplicitFunction. (#21029) 2021-12-13 16:28:44 +00:00
Kai Fricke
b58f839534
[ci/release] Remove hard numpy removal from app configs (#21005) 2021-12-13 15:22:02 +00:00
Shantanu
b743f514d7
[core] do not silently re-enable gc (#20864)
In our code, we wish to have control over when GC runs. We do this by `gc.disable()` and then manually triggering GC at moments that work for us. This does not work if Ray re-enables GC.

Co-authored-by: hauntsaninja <>
2021-12-13 06:03:03 -08:00
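The control pattern the commit above preserves can be shown in a few stdlib-only lines (independent of Ray itself):

```python
import gc

gc.disable()                 # the application owns GC pauses from here on
assert not gc.isenabled()

# ... latency-sensitive work runs here; per the commit, library code must
# not silently call gc.enable() behind the caller's back ...

gc.collect()                 # collect only at a moment the application chooses
assert not gc.isenabled()    # collect() must not re-enable automatic GC

gc.enable()                  # restore automatic GC when done
```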
Jules S. Damji
064f976eb4
Added hyperparameters to the concepts section (#21024)
Added hyperparameters to the concepts section, since it's important to explain what they are, and added diagrams to help the reader visualize the difference between model parameters and hyperparameters.

Signed-off-by: Jules S.Damji <jules@anyscale.com>
Co-authored-by: Jules S.Damji <jules@anyscale.com>
2021-12-13 12:21:39 +00:00
Sven Mika
daa4304a91
[RLlib] Switch off preprocessors by default for PGTrainer. (#21008) 2021-12-13 12:04:23 +01:00
Matti Picus
6c6c76c3f0
Starting workers map (#20986)
PR #19014 introduced the idea of a StartupToken to uniquely identify a worker via a counter. This PR:
- returns the Process and the StartupToken from StartWorkerProcess (previously only Process was returned)
- Change the starting_workers_to_tasks map to index via the StartupToken, which seems to fix the windows failures.
- Unskip the windows tests in test_basic_2.py
It seems once a fix to PR #18167 goes in, the starting_workers_to_tasks map will be removed, which should remove the need for the changes to StartWorkerProcess made in this PR.
2021-12-12 19:28:53 -08:00
Seonggwon Yoon
f1acabe9cf
Bump log4j from 2.14.0 to 2.15.0 (#21036)
Fix Remote code injection in Log4j
Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.

See: [CVE-2021-44228](https://github.com/advisories/GHSA-jfh8-c2jp-5v3q)
2021-12-12 15:07:50 +08:00
Yi Cheng
f4e6623522
Revert "Revert "[core] Ensure failed to register worker is killed and print better log"" (#21028)
Reverts ray-project/ray#21023
Revert this one since 7fc9a9c227 has fixed the issue
2021-12-11 20:49:47 -08:00
Sven Mika
db058d0fb3
[RLlib] Rename metrics_smoothing_episodes into metrics_num_episodes_for_smoothing for clarity. (#20983) 2021-12-11 20:33:35 +01:00
Sven Mika
596c8e2772
[RLlib] Experimental no-flatten option for actions/prev-actions. (#20918) 2021-12-11 14:57:58 +01:00
Eric Liang
6f93ea437e
Remove the flaky test tag (#21006) 2021-12-11 01:03:17 -08:00
mwtian
3028ba0f98
[Core][GCS] add feature flag for GCS bootstrapping, and flag to pass GCS address to raylet (#21003) 2021-12-10 23:48:37 -08:00
Jiajun Yao
f04ee71dc7
Fix driver lease request infinite loop when local raylet dies (#20859)
Currently, if a local lease request fails due to raylet death, direct_task_transport.cc will retry forever for the driver.

With this PR, we treat grpc unavailable as a non-retryable error (the assumption is that local grpc is always reliable, so a grpc unavailable error indicates that the server is dead) and will just fail the task.

Note: this PR doesn't try to address a bigger problem: don't crash the driver when the local raylet dies. We have multiple places in the code that assume the local raylet never fails and have CHECK_STATUS_OK for that. All these places need to be changed so we can properly propagate failures to the user.
2021-12-10 18:02:59 -08:00
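The retry-policy change reads roughly as follows; this is an illustrative sketch with invented names, not the actual C++ code in direct_task_transport.cc:

```python
import enum

class RpcStatus(enum.Enum):
    OK = 0
    DEADLINE_EXCEEDED = 4
    UNAVAILABLE = 14

def should_retry_lease_request(status: RpcStatus, raylet_is_local: bool) -> bool:
    """Decide whether a failed lease request is worth retrying.

    The local raylet transport is assumed reliable, so UNAVAILABLE from it
    means the raylet process itself died and retrying would loop forever.
    """
    if status is RpcStatus.UNAVAILABLE and raylet_is_local:
        return False  # fail the task instead of retrying
    return True

print(should_retry_lease_request(RpcStatus.UNAVAILABLE, raylet_is_local=True))        # → False
print(should_retry_lease_request(RpcStatus.DEADLINE_EXCEEDED, raylet_is_local=True))  # → True
```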
mwtian
b9bcd6215a
Disable two tests that are very flaky in GCS HA build (#21012)
`//python/ray/tests:test_client_reconnect` seems to only flake under the GCS HA build. The client server starts to shut down under injected failures, unlike the behavior without GCS KV or pubsub.

`//python/ray/tests:test_multi_node_3` seems to flake more often under GCS HA build, although it is still flaky without GCS HA feature flags. It seems raylet termination did not notify other processes properly.

Disable these two tests before they are fixed.
2021-12-10 17:08:25 -08:00