Author | Commit | Message | Date
Artur Niederfahrenhorst | 0dceddb912 | [RLlib] Move learning_starts logic from buffers into training_step(). (#26032) | 2022-08-11 13:07:30 +02:00
Sven Mika | 130b7eeaba | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
Sven Mika | 7c39aa5fac | [RLlib] Trainer.training_iteration -> Trainer.training_step; Iterations vs reportings: Clarification of terms. (#25076) | 2022-06-10 17:09:18 +02:00
Sven Mika | b5bc2b93c3 | [RLlib] Move all remaining algos into algorithms directory. (#25366) | 2022-06-04 07:35:24 +02:00
Artur Niederfahrenhorst | d76ef9add5 | [RLLib] Fix RNNSAC example failing on CI + fixes for recurrent models for other Q Learning Algos. (#24923) | 2022-05-24 14:39:43 +02:00
Sven Mika | ec89fe5203 | [RLlib] APEX-DQN and R2D2 config objects. (#25067) | 2022-05-23 12:15:45 +02:00
Steven Morad | 501d932449 | [RLlib] SAC, RNNSAC, and CQL TrainerConfig objects (#25059) | 2022-05-22 19:58:47 +02:00
kourosh hakhamaneshi | 3815e52a61 | [RLlib] Agents to algos: DQN w/o Apex and R2D2, DDPG/TD3, SAC, SlateQ, QMIX, PG, Bandits (#24896) | 2022-05-19 18:30:42 +02:00