Name | Last commit message | Last commit date
---- | ------------------- | ----------------
a3c | Revert "Revert "[RLlib] Speedup A3C up to 3x (new training_iteration function instead of execution_plan) and re-instate Pong learning test."" (#18708) | 2022-02-10 13:44:22 +01:00
alpha_star | Revert "Revert "[RLlib] AlphaStar: Parallelized, multi-agent/multi-GPU learni…" (#22153) | 2022-02-08 16:43:00 +01:00
ars | [RLlib] Filter.clear_buffer() deprecated (use Filter.reset_buffer() instead). (#22246) | 2022-02-10 02:58:43 +01:00
bandit | [RLlib] Enable Bandits to work in batches mode(s) (vector envs + multiple workers + train_batch_sizes > 1). (#22465) | 2022-02-17 22:32:26 +01:00
cql | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
ddpg | [RLlib] Put env-checker on critical path. (#22191) | 2022-02-17 14:06:14 +01:00
dqn | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
dreamer | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
es | [RLlib] Filter.clear_buffer() deprecated (use Filter.reset_buffer() instead). (#22246) | 2022-02-10 02:58:43 +01:00
impala | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
maml | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
marwil | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
mbmpo | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
pg | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
ppo | [RLlib] Issue 22444: KL-coeff not stored in persistent policy state. (#22590) | 2022-02-24 22:05:36 +01:00
qmix | [CI] Replace YAPF disables with Black disables (#21982) | 2022-02-08 16:29:25 -08:00
sac | [RLlib] Put env-checker on critical path. (#22191) | 2022-02-17 14:06:14 +01:00
slateq | [RLlib] SlateQ: framework=tf fixes and SlateQ documentation update (#22543) | 2022-02-23 13:03:45 +01:00
tests | [RLlib] Bug fix: eval-workers in offline RL setup have no env, even though eval_config includes env key. (#22350) | 2022-02-15 09:32:43 +01:00
__init__.py | [CI] Format Python code with Black (#21975) | 2022-01-29 18:41:57 -08:00
callbacks.py | [RLlib] SlateQ: framework=tf fixes and SlateQ documentation update (#22543) | 2022-02-23 13:03:45 +01:00
mock.py | [CI] Format Python code with Black (#21975) | 2022-01-29 18:41:57 -08:00
registry.py | Revert "Revert "[RLlib] AlphaStar: Parallelized, multi-agent/multi-GPU learni…" (#22153) | 2022-02-08 16:43:00 +01:00
trainer.py | [RLlib] Add a callback for when trainer finishes initialization: on_trainer_init. (#22493) | 2022-02-22 08:18:32 +01:00
trainer_template.py | Revert "Revert "[RLlib] Speedup A3C up to 3x (new training_iteration function instead of execution_plan) and re-instate Pong learning test."" (#18708) | 2022-02-10 13:44:22 +01:00