kourosh hakhamaneshi
3815e52a61
[RLlib] Agents to algos: DQN w/o Apex and R2D2, DDPG/TD3, SAC, SlateQ, QMIX, PG, Bandits ( #24896 )
2022-05-19 18:30:42 +02:00
Sven Mika
628ee4b5f0
[RLlib] Bandit tf2 fix (+ add tf2 to test cases). ( #24908 )
2022-05-18 18:58:42 +02:00
Sven Mika
8f50087908
[RLlib] AlphaZero uses training_iteration API. ( #24507 )
2022-05-18 09:58:25 +02:00
Nathan Matare
012a4c8667
[RLlib] Allow passing **kwargs to action distribution. ( #24692 )
2022-05-18 09:22:37 +02:00
Jun Gong
dea134a472
[RLlib] Clean up Policy mixins. ( #24746 )
2022-05-17 17:16:08 +02:00
Artur Niederfahrenhorst
c2a1e5abd1
[RLlib] Prioritized Replay (if required) in SimpleQ and DDPG. ( #24866 )
2022-05-17 13:53:07 +02:00
Artur Niederfahrenhorst
fb2915d26a
[RLlib] Replay Buffer API and Ape-X. ( #24506 )
2022-05-17 13:43:49 +02:00
Sven Mika
25001f6d8d
[RLlib] APPO Training iteration fn. ( #24545 )
2022-05-17 10:31:07 +02:00
Sven Mika
0cd7bc4054
[RLlib] Re-establish dashboard performance tests. ( #24728 )
2022-05-16 13:13:49 +02:00
Kai Fricke
96da5dc776
[RLlib] Fix some missing agent->algorithm doc changes ( #24841 )
#24797 missed some doc changes that popped up in broken linkcheck. Note that there could be others that were not caught by this.
2022-05-16 11:52:49 +01:00
Jun Gong
68a9a33386
[RLlib] Retry agents -> algorithms. with proper doc changes this time. ( #24797 )
2022-05-16 09:45:32 +02:00
Artur Niederfahrenhorst
b1bc435adc
[RLlib] Policy Server/Client metrics reporting fix ( #24783 )
2022-05-15 17:25:25 +02:00
Steven Morad
6321c3a85c
[RLlib] Simple-Q TrainerConfig ( #24583 )
2022-05-15 17:24:01 +02:00
Steven Morad
5c96e7223b
[RLlib] SimpleQ (minor cleanups) and DQN TrainerConfig objects. ( #24584 )
2022-05-15 16:14:43 +02:00
Simon Mo
9f23affdc0
[Hotfix] Unbreak lint in master ( #24794 )
2022-05-13 15:05:05 -07:00
Jun Gong
bc3a1d35cf
[RLlib] Introduce new policy base classes. ( #24742 )
2022-05-13 21:48:30 +02:00
Sven Mika
8fe3fd8f7b
[RLlib] QMix TrainerConfig objects. ( #24775 )
2022-05-13 18:50:28 +02:00
kourosh hakhamaneshi
ffcbb30552
[RLlib] Move from agents to algorithms - CQL, MARWIL, AlphaStar, MAML, Dreamer, MBMPO. ( #24739 )
2022-05-13 18:43:36 +02:00
Steven Morad
ebe6ab0afc
[RLlib] Bandits use TrainerConfig objects. ( #24687 )
2022-05-12 22:02:15 +02:00
Max Pumperla
6a6c58b5b4
[RLlib] Config objects for DDPG and SimpleQ. ( #24339 )
2022-05-12 16:12:42 +02:00
Artur Niederfahrenhorst
95d4a83a87
[RLlib] R2D2 Replay Buffer API integration. ( #24473 )
2022-05-10 20:36:14 +02:00
Sven Mika
44a51610c2
[RLlib] SlateQ config objects. ( #24577 )
2022-05-10 20:07:18 +02:00
Sven Mika
f243895ebb
[RLlib] Dreamer ConfigObject class. ( #24650 )
2022-05-10 16:19:42 +02:00
Sven Mika
6d94b2acbe
[RLlib] AlphaStar config objects. ( #24576 )
2022-05-10 14:01:00 +02:00
Amog Kamsetty
b5b48f6cc7
[RLlib] Switch Dreamer to training_iteration API. ( #24488 )
2022-05-10 08:37:34 +02:00
Artur Niederfahrenhorst
8d906f9bf8
[RLlib] SAC with new Replay Buffer API. ( #24156 )
2022-05-09 14:33:02 +02:00
Artur Niederfahrenhorst
bd2fdf4752
[RLlib] Automate sequences in timeslice_along_seq_lens_with_overlap(). ( #24561 )
2022-05-09 11:55:06 +02:00
Steven Morad
b76273357b
[RLlib] APEX-DQN replay buffer config validation fix. ( #24588 )
2022-05-09 09:59:04 +02:00
kourosh hakhamaneshi
69055f556d
[RLlib] Move agents.ars to algorithms.ars. ( #24516 )
2022-05-06 19:11:15 +02:00
Daewoo Lee
fee35444ab
[RLlib] Issue 24530: Fix add_time_dimension ( #24531 )
Co-authored-by: Daewoo Lee <dwlee@rtst.co.kr>
2022-05-06 15:21:42 +02:00
kourosh hakhamaneshi
f48f1b252c
[RLlib] Moved agents.es to algorithms.es ( #24511 )
2022-05-06 14:54:22 +02:00
Antoni Baum
c5e1851ab9
[Tune] Improve JupyterNotebookReporter ( #24444 )
Improves Tune Jupyter notebook experience by modifying the `JupyterNotebookReporter` in two ways:
* Previously, the `overwrite` flag controlled whether the entire cell would be overwritten with the updated table. This caused all the other logs to be cleared. Now, we use IPython display handle functionality to create a table at the top of the cell and update only that, preserving the rest of the output. The `overwrite` flag now controls whether the cell output *prior* to the initialization of `JupyterNotebookReporter` is overwritten or not.
* The Ray Client detection was not working unless the user specifically passed a `JupyterNotebookReporter` as the `progress_reporter`. Now, the default value allows for correct detection of the environment while running Ray Client.
Furthermore, the progress reporter detection logic in `rllib/train.py` has been replaced to make use of the `detect_reporter` function for consistency with Tune (the sign in the overwrite condition was similarly flipped).
2022-05-06 11:52:47 +01:00
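The display-handle pattern described in the `JupyterNotebookReporter` change above can be sketched conceptually. The class and names below are illustrative stand-ins, not the actual Tune or IPython implementation: a handle reserves one replaceable slot in an otherwise append-only output stream (analogous to IPython's `display(..., display_id=True)` returning a handle with an `.update()` method), so updating the table no longer clears the logs printed after it.

```python
class DisplayHandle:
    """Minimal stand-in for an IPython display handle: one
    replaceable slot inside an append-only output stream."""

    def __init__(self, stream, content):
        self.stream = stream
        # Remember where this slot lives so it can be rewritten later.
        self.index = len(stream)
        stream.append(content)

    def update(self, content):
        # Replace only this slot; log lines appended afterwards survive.
        self.stream[self.index] = content


cell_output = []
table = DisplayHandle(cell_output, "trial table: iter 0")
cell_output.append("log: trial started")
cell_output.append("log: checkpoint saved")

# Updating the handle rewrites the table without clearing the logs.
table.update("trial table: iter 10")
print(cell_output)
```

This mirrors why the new behavior preserves cell output: only the slot held by the handle is mutated, while everything written after it is untouched.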
Sven Mika
7ab19ddc32
[RLlib] MADDPG: Move into agents folder (from contrib) and use training_iteration method. ( #24502 )
2022-05-06 12:35:21 +02:00
Sven Mika
f54557073e
[RLlib] Remove execution_plan API code no longer needed. ( #24501 )
2022-05-06 12:29:53 +02:00
Sven Mika
f891a2b6f1
[RLlib] SlateQ + tf; release test fixes, related to TD-error not properly being formatted. ( #24521 )
2022-05-06 08:50:30 +02:00
Avnish Narayan
f2bb6f6806
[RLlib] Impala training iteration fn ( #23454 )
2022-05-05 16:11:08 +02:00
Christy Bergman
76eb47e226
[RLlib; docs] Rename UCB -> LinUCB. ( #24348 )
2022-05-05 10:20:16 +02:00
Artur Niederfahrenhorst
86bc9ecce2
[RLlib] DDPG Training iteration fn & Replay Buffer API ( #24212 )
2022-05-05 09:41:38 +02:00
Sven Mika
5b61a00792
[RLlib] Feed all values in COMMON_CONFIG directly from TrainerConfig() (removes duplicate values and comments). ( #24433 )
2022-05-04 16:28:12 +02:00
Sven Mika
b48f63113b
[RLlib] SlateQ fixes: Release learning tests wrong yaml structure + TD-error torch issue ( #24429 )
2022-05-04 13:37:14 +02:00
Sven Mika
1bc6419e0e
[RLlib] R2D2 training iteration fn AND switch off execution_plan API by default. ( #24165 )
2022-05-03 07:59:26 +02:00
Sven Mika
7cca7782f1
[RLlib] OPE (off policy estimator) API. ( #24384 )
2022-05-02 21:15:50 +02:00
Sven Mika
0c5ac3b9e8
[RLlib] Issue 24075: Better error message for Bandit MultiDiscrete (suggest using our wrapper). ( #24385 )
2022-05-02 21:14:08 +02:00
Sven Mika
296e2ebc46
[RLlib] Issue 24082: WorkerSet.policies_to_train (deprecated) - if still used - returns wrong values. ( #24386 )
2022-05-02 18:33:52 +02:00
Sven Mika
924adcf402
[RLlib] Issue 24074: multi-GPU learner thread key error in MA-scenarios. ( #24382 )
2022-05-02 18:30:46 +02:00
Sven Mika
f53ca1cacb
[RLlib] ES + ARS TrainerConfig objects. ( #24374 )
2022-05-02 16:55:28 +02:00
Edward Oakes
11954e6798
Issue 24143: Fix a few f-strings missing the f. ( #24232 )
2022-05-02 16:11:33 +02:00
Sven Mika
026849cd27
[RLlib] APPO TrainerConfig objects. ( #24376 )
2022-05-02 15:06:23 +02:00
Sven Mika
f066180ed5
[RLlib] Deprecate timesteps_per_iteration config key (in favor of min_[sample|train]_timesteps_per_reporting). ( #24372 )
2022-05-02 12:51:14 +02:00
Sven Mika
950bd3fc3f
[RLlib] IMPALA TrainerConfig objects. ( #24375 )
2022-05-02 12:05:30 +02:00