Author | Commit | Message | Date
--- | --- | --- | ---
Avnish Narayan | f2bb6f6806 | [RLlib] Impala training iteration fn (#23454) | 2022-05-05 16:11:08 +02:00
Steven Morad | 00922817b6 | [RLlib] Rewrite PPO to use training_iteration + enable DD-PPO for Win32. (#23673) | 2022-04-11 08:39:10 +02:00
simonsays1980 | d2a3948845 | [RLlib] Removed the sampler() function in ParallelRollouts() as it is not needed. (#22320) | 2022-03-31 09:06:30 +02:00
Max Pumperla | 60054995e6 | [docs] fix doctests and activate CI (#23418) | 2022-03-24 17:04:02 -07:00
Siyuan (Ryans) Zhuang | 0c74ecad12 | [Lint] Cleanup incorrectly formatted strings (Part 1: RLlib). (#23128) | 2022-03-15 17:34:21 +01:00
Balaji Veeramani | 7f1bacc7dc | [CI] Format Python code with Black (#21975). See #21316 and #21311 for the motivation behind these changes. | 2022-01-29 18:41:57 -08:00
Sven Mika | ee41800c16 | [RLlib] Preparatory PR for multi-agent, multi-GPU learning agent (alpha-star style) #02. (#21649) | 2022-01-27 22:07:05 +01:00
Sven Mika | 371fbb17e4 | [RLlib] Make policies_to_train more flexible via callable option. (#20735) | 2022-01-27 12:17:34 +01:00
Sven Mika | 90c6b10498 | [RLlib] Decentralized multi-agent learning; PR #01 (#21421) | 2022-01-13 10:52:55 +01:00
Sven Mika | f94bd99ce4 | [RLlib] Issue 21044: Improve error message for "multiagent" dict checks. (#21448) | 2022-01-11 19:50:03 +01:00
Sven Mika | 853d10871c | [RLlib] Issue 18499: PGTrainer with training_iteration fn does not support multi-GPU. (#21376) | 2022-01-05 18:22:33 +01:00
Sven Mika | 62dbf26394 | [RLlib] POC: Run PGTrainer w/o the distr. exec API (Trainer's new training_iteration method). (#20984) | 2021-12-21 08:39:05 +01:00
Sven Mika | ed85f59194 | [RLlib] Unify all RLlib Trainer.train() -> results[info][learner][policy ID][learner_stats] and add structure tests. (#18879) | 2021-09-30 16:39:05 +02:00
Sven Mika | 05a55a9335 | [RLlib] Issue 18668: Unity3D env client/server example not working (fix + add to test cases). (#18942) | 2021-09-30 08:30:20 +02:00
Chris Bamford | 58a73821fb | [RLlib] IMPALA sample throughput calculation and full queue slowdown fixes (#17822) | 2021-08-17 14:01:41 +02:00
Sven Mika | 53206dd440 | [RLlib] CQL BC loss fixes; PPO/PG/A2C/A3C action normalization fixes (#16531) | 2021-06-30 12:32:11 +02:00
Sven Mika | c3a15ecc0f | [RLlib] Issue #13802: Enhance metrics for multiagent->count_steps_by=agent_steps setting. (#14033) | 2021-03-18 20:27:41 +01:00
Sven Mika | e40b14d255 | [RLlib] Batch-size for truncate_episode batch_mode should be configurable in agent-steps (rather than env-steps), if needed. (#12420) | 2020-12-08 16:41:45 -08:00
Eric Liang | daa03ba6e6 | [rllib] Add execution module to package ref (#10941) | 2020-09-21 23:03:06 -07:00
Sven Mika | 805dad3bc4 | [RLlib] SAC algo cleanup. (#10825) | 2020-09-20 11:27:02 +02:00
Sven Mika | 2256047876 | [RLlib] Rename rllib.utils.types to typing to match the built-in Python module's name. (#10114) | 2020-08-15 13:24:22 +02:00
Eric Liang | 1e0e1a45e6 | [rllib] Add type annotations for evaluation/, env/ packages (#9003) | 2020-06-19 13:09:05 -07:00
Eric Liang | 9a83908c46 | [rllib] Deprecate policy optimizers (#8345) | 2020-05-21 10:16:18 -07:00
Eric Liang | 9f04a65922 | [rllib] Add PPO+DQN two trainer multiagent workflow example (#8334) | 2020-05-07 23:40:29 -07:00
Eric Liang | ee0eb44a32 | Rename async_queue_depth -> num_async (#8207) | 2020-05-05 01:38:10 -07:00
Eric Liang | baadbdf8d4 | [rllib] Execute PPO using training workflow (#8206) | 2020-04-30 01:18:09 -07:00
Eric Liang | 31b40b00f6 | [rllib] Pull out experimental dsl into rllib.execution module, add initial unit tests (#7958) | 2020-04-10 00:56:08 -07:00