Commit graph

1210 commits

Author SHA1 Message Date
Sven Mika
e73c37cc17
[RLlib] MADDPG: Move into main algorithms folder and add proper unit and learning tests. (#24579) 2022-05-24 12:53:53 +02:00
Sven Mika
4e99a57bab
[RLlib] Add @OverrideToImplementCustomLogic decorators to some Trainer class methods. (#24684) 2022-05-24 11:30:50 +02:00
Sven Mika
ec89fe5203
[RLlib] APEX-DQN and R2D2 config objects. (#25067) 2022-05-23 12:15:45 +02:00
Sven Mika
dea9b86a16
[RLlib] MAML config objects. (#25066) 2022-05-23 10:14:24 +02:00
Sven Mika
baf8c2fa1e
[RLlib] TD3 config objects. (#25065) 2022-05-23 10:07:13 +02:00
Sven Mika
09886d7ab8
[RLlib] Upgrade gym 0.23 (#24171) 2022-05-23 08:18:44 +02:00
Artur Niederfahrenhorst
cd16dc4dae
[RLlib] Fix estimated buffer size in replay buffers. (#24848) 2022-05-22 21:03:23 +02:00
Steven Morad
501d932449
[RLlib] SAC, RNNSAC, and CQL TrainerConfig objects (#25059) 2022-05-22 19:58:47 +02:00
Sven Mika
44773e810b
[RLlib] DD-PPO Config objects. (#25028) 2022-05-22 13:05:24 +02:00
Eric Liang
55d039af32
Annotate datasources and add API annotation check script (#24999)
Why are these changes needed?
Add API stability annotations for datasource classes, and add a linter to check all data classes have appropriate annotations.
2022-05-21 15:05:07 -07:00
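
    # The entry above describes adding API stability annotations to datasource
    # classes plus a linter that checks every data class is annotated. Below is
    # a hedged, illustrative sketch of that idea only: the decorator name mimics
    # Ray's @PublicAPI convention, but the attributes and checker logic here are
    # hypothetical, not Ray's actual implementation.
    import inspect

    def PublicAPI(stability="stable"):
        """Mark a class as public API at a given stability level (sketch)."""
        def wrap(cls):
            cls._api_annotated = True
            cls._api_stability = stability
            return cls
        return wrap

    @PublicAPI(stability="beta")
    class CSVDatasource:
        """Example datasource class carrying a stability annotation."""
        pass

    def find_unannotated_classes(module):
        """Return names of public classes in `module` lacking an annotation."""
        return [
            name
            for name, obj in vars(module).items()
            if inspect.isclass(obj)
            and not name.startswith("_")
            and not getattr(obj, "_api_annotated", False)
        ]
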
Rohan Potdar
5a70b732e8
[RLlib] MARWIL and BC Config. (#24853) 2022-05-21 12:50:20 +02:00
Jun Gong
d5a6d46049
[RLlib] Migrate MAML, MB-MPO, MARWIL, and BC to use Policy sub-classing implementation. (#24914) 2022-05-20 14:10:59 +02:00
Kai Fricke
3e053c85ee
[RLlib] Fix broken links from agent -> algo conversion. (#25014) 2022-05-20 11:37:11 +02:00
kourosh hakhamaneshi
3815e52a61
[RLlib] Agents to algos: DQN w/o Apex and R2D2, DDPG/TD3, SAC, SlateQ, QMIX, PG, Bandits (#24896) 2022-05-19 18:30:42 +02:00
Sven Mika
628ee4b5f0
[RLlib] Bandit tf2 fix (+ add tf2 to test cases). (#24908) 2022-05-18 18:58:42 +02:00
Sven Mika
8f50087908
[RLlib] AlphaZero uses training_iteration API. (#24507) 2022-05-18 09:58:25 +02:00
Nathan Matare
012a4c8667
[RLlib] Allow passing **kwargs to action distribution. (#24692) 2022-05-18 09:22:37 +02:00
Jun Gong
dea134a472
[RLlib] Clean up Policy mixins. (#24746) 2022-05-17 17:16:08 +02:00
Artur Niederfahrenhorst
c2a1e5abd1
[RLlib] Prioritized Replay (if required) in SimpleQ and DDPG. (#24866) 2022-05-17 13:53:07 +02:00
Artur Niederfahrenhorst
fb2915d26a
[RLlib] Replay Buffer API and Ape-X. (#24506) 2022-05-17 13:43:49 +02:00
Sven Mika
25001f6d8d
[RLlib] APPO Training iteration fn. (#24545) 2022-05-17 10:31:07 +02:00
Sven Mika
0cd7bc4054
[RLlib] Re-establish dashboard performance tests. (#24728) 2022-05-16 13:13:49 +02:00
Kai Fricke
96da5dc776
[rllib] Fix some missing agent->algorithm doc changes (#24841)
#24797 missed some doc changes that surfaced as broken links in linkcheck. Note that there could be others that were not caught by this.
2022-05-16 11:52:49 +01:00
Jun Gong
68a9a33386
[RLlib] Retry agents -> algorithms. with proper doc changes this time. (#24797) 2022-05-16 09:45:32 +02:00
Artur Niederfahrenhorst
b1bc435adc
[RLlib] Policy Server/Client metrics reporting fix (#24783) 2022-05-15 17:25:25 +02:00
Steven Morad
6321c3a85c
[RLlib] Simple-Q TrainerConfig (#24583) 2022-05-15 17:24:01 +02:00
Steven Morad
5c96e7223b
[RLlib] SimpleQ (minor cleanups) and DQN TrainerConfig objects. (#24584) 2022-05-15 16:14:43 +02:00
Simon Mo
9f23affdc0
[Hotfix] Unbreak lint in master (#24794) 2022-05-13 15:05:05 -07:00
Jun Gong
bc3a1d35cf
[RLlib] Introduce new policy base classes. (#24742) 2022-05-13 21:48:30 +02:00
Sven Mika
8fe3fd8f7b
[RLlib] QMix TrainerConfig objects. (#24775) 2022-05-13 18:50:28 +02:00
kourosh hakhamaneshi
ffcbb30552
[RLlib] Move from agents to algorithms - CQL, MARWIL, AlphaStar, MAML, Dreamer, MBMPO. (#24739) 2022-05-13 18:43:36 +02:00
Steven Morad
ebe6ab0afc
[RLlib] Bandits use TrainerConfig objects. (#24687) 2022-05-12 22:02:15 +02:00
Max Pumperla
6a6c58b5b4
[RLlib] Config objects for DDPG and SimpleQ. (#24339) 2022-05-12 16:12:42 +02:00
Artur Niederfahrenhorst
95d4a83a87
[RLlib] R2D2 Replay Buffer API integration. (#24473) 2022-05-10 20:36:14 +02:00
Sven Mika
44a51610c2
[RLlib] SlateQ config objects. (#24577) 2022-05-10 20:07:18 +02:00
Sven Mika
f243895ebb
[RLlib] Dreamer ConfigObject class. (#24650) 2022-05-10 16:19:42 +02:00
Sven Mika
6d94b2acbe
[RLlib] AlphaStar config objects. (#24576) 2022-05-10 14:01:00 +02:00
Amog Kamsetty
b5b48f6cc7
[RLlib] Switch Dreamer to training_iteration API. (#24488) 2022-05-10 08:37:34 +02:00
Artur Niederfahrenhorst
8d906f9bf8
[RLlib] SAC with new Replay Buffer API. (#24156) 2022-05-09 14:33:02 +02:00
Artur Niederfahrenhorst
bd2fdf4752
[RLlib] Automate sequences in timeslice_along_seq_lens_with_overlap(). (#24561) 2022-05-09 11:55:06 +02:00
Steven Morad
b76273357b
[RLlib] APEX-DQN replay buffer config validation fix. (#24588) 2022-05-09 09:59:04 +02:00
kourosh hakhamaneshi
69055f556d
[RLlib] Move agents.ars to algorithms.ars. (#24516) 2022-05-06 19:11:15 +02:00
Daewoo Lee
fee35444ab
[RLlib] Issue 24530: Fix add_time_dimension (#24531)
Co-authored-by: Daewoo Lee <dwlee@rtst.co.kr>
2022-05-06 15:21:42 +02:00
kourosh hakhamaneshi
f48f1b252c
[RLlib] Moved agents.es to algorithms.es (#24511) 2022-05-06 14:54:22 +02:00
Antoni Baum
c5e1851ab9
[Tune] Improve JupyterNotebookReporter (#24444)
Improves Tune Jupyter notebook experience by modifying the `JupyterNotebookReporter` in two ways:
* Previously, the `overwrite` flag controlled whether the entire cell would be overwritten with the updated table. This caused all the other logs to be cleared. Now, we use IPython display handle functionality to create a table at the top of the cell and update only that, preserving the rest of the output. The `overwrite` flag now controls whether the cell output *prior* to the initialization of `JupyterNotebookReporter` is overwritten or not.
* The Ray Client detection was not working unless the user specifically passed a `JupyterNotebookReporter` as the `progress_reporter`. Now, the default value allows for correct detection of the environment while running Ray Client.

Furthermore, the progress reporter detection logic in `rllib/train.py` has been replaced to make use of the `detect_reporter` function for consistency with Tune (the sign in the overwrite condition was similarly flipped).
2022-05-06 11:52:47 +01:00
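
    # The PR description above mentions using IPython display-handle
    # functionality to keep one updatable region at the top of the cell while
    # preserving the rest of the output. This is a hedged, minimal sketch of
    # that general pattern (real IPython API, but not the actual
    # JupyterNotebookReporter code).
    from IPython.display import display, HTML

    # Create a display handle once; later calls to .update() replace only
    # this output region.
    handle = display(HTML("<i>waiting for first results...</i>"), display_id=True)

    for step in range(3):
        print(f"log line for step {step}")  # regular cell output is preserved
        handle.update(HTML(f"<b>progress table, step {step}</b>"))  # only this region changes
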
Sven Mika
7ab19ddc32
[RLlib] MADDPG: Move into agents folder (from contrib) and use training_iteration method. (#24502) 2022-05-06 12:35:21 +02:00
Sven Mika
f54557073e
[RLlib] Remove execution_plan API code no longer needed. (#24501) 2022-05-06 12:29:53 +02:00
Sven Mika
f891a2b6f1
[RLlib] SlateQ + tf; release test fixes, related to TD-error not properly being formatted. (#24521) 2022-05-06 08:50:30 +02:00
Avnish Narayan
f2bb6f6806
[RLlib] Impala training iteration fn (#23454) 2022-05-05 16:11:08 +02:00
Christy Bergman
76eb47e226
[RLlib; docs] Rename UCB -> LinUCB. (#24348) 2022-05-05 10:20:16 +02:00