Name | Latest commit message | Commit date
a2c | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
a3c | [RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. (#25924) | 2022-06-20 19:53:47 +02:00
alpha_star | [RLlib] Move IMPALA and APPO back to exec plan (for now; due to unresolved learning/performance issues). (#25851) | 2022-06-29 08:41:47 +02:00
alpha_zero | [RLlib] Make QMix use the ReplayBufferAPI (#25560) | 2022-06-23 22:55:22 -07:00
apex_ddpg | [RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. (#25924) | 2022-06-20 19:53:47 +02:00
apex_dqn | Revert "[RLlib] Small Ape-X deflake. (#26078)" (#26191) | 2022-06-29 10:25:47 -07:00
appo | [RLlib] Move IMPALA and APPO back to exec plan (for now; due to unresolved learning/performance issues). (#25851) | 2022-06-29 08:41:47 +02:00
ars | [RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) | 2022-06-20 15:54:00 +02:00
bandit | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
bc | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
cql | [RLlib] Move offline input into replay buffer using rollout ops in CQL. (#25629) | 2022-06-17 17:08:55 +02:00
crr | [RLlib] Added expectation advantage_type option to CRR. (#26142) | 2022-06-28 15:40:09 +02:00
ddpg | [RLlib] Migrating DDPG to PolicyV2. (#26054) | 2022-06-28 15:52:56 +02:00
ddppo | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
dqn | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
dreamer | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
es | [RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) | 2022-06-20 15:54:00 +02:00
impala | [RLlib] Move IMPALA and APPO back to exec plan (for now; due to unresolved learning/performance issues). (#25851) | 2022-06-29 08:41:47 +02:00
maddpg | [RLlib] Save serialized PolicySpec. Extract num_gpus related logics into a util function. (#25954) | 2022-06-30 11:38:21 +02:00
maml | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
marwil | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
mbmpo | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
pg | Revert "[RLlib] Remove execution plan code no longer used by RLlib. (#25624)" (#25776) | 2022-06-14 13:59:15 -07:00
ppo | [RLlib] Cleanup some deprecated metric keys and classes. (#26036) | 2022-06-23 21:30:01 +02:00
qmix | [RLlib] Make QMix use the ReplayBufferAPI (#25560) | 2022-06-23 22:55:22 -07:00
r2d2 | [RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) | 2022-06-20 15:54:00 +02:00
sac | [RLlib] Migrating DDPG to PolicyV2. (#26054) | 2022-06-28 15:52:56 +02:00
simple_q | [RLlib] SimpleQ PyTorch Multi GPU fix (#26109) | 2022-06-28 12:12:56 +02:00
slateq | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
td3 | [RLlib] More Trainer -> Algorithm renaming cleanups. (#25869) | 2022-06-20 15:54:00 +02:00
tests | [RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. (#25924) | 2022-06-20 19:53:47 +02:00
__init__.py | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
algorithm.py | [RLlib] Save serialized PolicySpec. Extract num_gpus related logics into a util function. (#25954) | 2022-06-30 11:38:21 +02:00
algorithm_config.py | [RLlib] EnvRunnerV2 and EpisodeV2 that support Connectors. (#25922) | 2022-06-30 08:44:10 +02:00
callbacks.py | [RLlib] EnvRunnerV2 and EpisodeV2 that support Connectors. (#25922) | 2022-06-30 08:44:10 +02:00
mock.py | [RLlib] Trainer to Algorithm renaming. (#25539) | 2022-06-11 15:10:39 +02:00
registry.py | [RLlib] Fixes logging of all of RLlib's Algorithm names as warning messages. (#25840) | 2022-06-17 08:41:18 +02:00