Sven Mika | 732197e23a | [RLlib] Multi-GPU for tf-DQN/PG/A2C. (#13393) | 2021-03-08 15:41:27 +01:00

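Commit 732197e23a extends multi-GPU data-parallel training to the tf versions of DQN, PG, and A2C. A minimal sketch of how such a run might look with the Ray 1.x-era config API; the environment, GPU count, and stopping criterion are illustrative placeholders, not taken from the commit:

```python
from ray import tune

# Hypothetical multi-GPU tf-DQN run: "num_gpus" > 1 lets the learner split
# each train batch across the available GPU towers (values illustrative only).
tune.run(
    "DQN",
    config={
        "env": "CartPole-v0",
        "framework": "tf",
        "num_gpus": 2,      # data-parallel learner GPUs
        "num_workers": 4,   # CPU rollout workers
    },
    stop={"timesteps_total": 100000},
)
```
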
Sven Mika | 8000258333 | [RLlib] R2D2 Implementation. (#13933) | 2021-02-25 12:18:11 +01:00

Sven Mika | 775e685531 | [RLlib] Issue #13824: compress_observations=True crashes for all algos not using a replay buffer. (#14034) | 2021-02-18 21:36:32 +01:00

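Commit 775e685531 addresses issue #13824, where enabling observation compression crashed algorithms that do not use a replay buffer (e.g. PG). A hedged sketch of the kind of config that previously triggered the crash, using the Ray 1.x dict-style API; the environment and stop condition are illustrative:

```python
from ray import tune

# Enabling "compress_observations" on a replay-buffer-less algorithm such as
# PG used to raise an error; the commit makes this combination safe.
tune.run(
    "PG",
    config={
        "env": "CartPole-v0",
        "framework": "tf",
        "compress_observations": True,
    },
    stop={"training_iteration": 5},
)
```
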
Sven Mika | 19c8033df2 | [RLlib] Fix most remaining RLlib algos for running with trajectory view API. (#12366) | 2020-12-01 17:41:10 -08:00
* LINT and fixes. MB-MPO and MAML not working yet.
* Remove dep "higher"; update requirements_rllib.txt; relpos; no mbmpo.
Co-authored-by: Eric Liang <ekhliang@gmail.com>

Sven Mika | b6b54f1c81 | [RLlib] Trajectory view API: enable by default for SAC, DDPG, DQN, SimpleQ (#11827) | 2020-11-16 10:54:35 -08:00

desktable | 4ccfd07a61 | [RLlib] Add docstrings for agents/dqn (#10710) | 2020-09-15 12:37:07 +02:00

desktable | 799318d7d7 | [RLlib] Add type annotations for agents/dqn (#10626) | 2020-09-09 18:55:26 +02:00

Eric Liang | 34bae27ac7 | [rllib] Flexible multi-agent replay modes and replay_sequence_length (#8893) | 2020-06-12 20:17:27 -07:00

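Commit 34bae27ac7 introduces configurable multi-agent replay modes and a replay_sequence_length setting. A minimal sketch of how these options might be combined in a Ray 1.x-style DQN config; the environment and concrete values are illustrative, not from the PR:

```python
from ray import tune

tune.run(
    "DQN",
    config={
        "env": "CartPole-v0",
        # Store and replay contiguous sequences of this length instead of
        # single timesteps (useful for recurrent Q-learning such as R2D2).
        "replay_sequence_length": 4,
        "multiagent": {
            # "independent": each policy replays its own experiences separately;
            # "lockstep": all agents' simultaneous experiences are replayed together.
            "replay_mode": "independent",
        },
    },
    stop={"training_iteration": 5},
)
```
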
Sven Mika | 2746fc0476 | [RLlib] Auto-framework, retire use_pytorch in favor of framework=... (#8520) | 2020-05-27 16:19:13 +02:00

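Commit 2746fc0476 retires the boolean use_pytorch flag in favor of a single framework setting. A sketch of the change as it appears in a typical config; everything besides the framework key is a placeholder:

```python
from ray import tune

# Old style (pre-#8520):
#     config = {"use_pytorch": True}
# New style: one "framework" key selecting the deep-learning backend,
# e.g. "tf" or "torch".
tune.run(
    "DQN",
    config={
        "env": "CartPole-v0",
        "framework": "torch",
    },
    stop={"training_iteration": 5},
)
```
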
Eric Liang | 9a83908c46 | [rllib] Deprecate policy optimizers (#8345) | 2020-05-21 10:16:18 -07:00

Eric Liang | b14cc16616 | [rllib] Enable functional execution workflow API by default (#8221) | 2020-05-05 12:36:42 -07:00

Sven Mika | 22ccc43670 | [RLlib] DQN torch version. (#7597) | 2020-04-06 11:56:16 -07:00
* Add regression test for DQN w/ param noise; tune param-noise tests.
* Fix (SAC does currently not support eager).
* Update rllib/evaluation/sampler.py, rllib/utils/exploration/exploration.py, and rllib/policy/dynamic_tf_policy.py.
* Move action_dist back into torch extra_action_out_fn.
* Working SimpleQ learning cartpole on both torch AND tf; working Rainbow learning cartpole on tf.
* Update docs and add torch to APEX test.
* Fix broken RLlib tests in master.
* Split BAZEL learning tests into cartpole and pendulum (reached the 60min barrier).
* Fix error_outputs option in BAZEL for RLlib regression tests.
Co-authored-by: Eric Liang <ekhliang@gmail.com>
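
Commit 22ccc43670 adds the PyTorch implementation of DQN (including the SimpleQ and Rainbow variants mentioned in the commit body). A minimal sketch of running the torch version with the API of that era, when use_pytorch was still the switch (later replaced by framework=..., see #8520 above); the environment, Rainbow options, and stop criterion are illustrative:

```python
from ray import tune

tune.run(
    "DQN",
    config={
        "env": "CartPole-v0",
        # Era-appropriate flag selecting the new torch implementation;
        # superseded later by `"framework": "torch"`.
        "use_pytorch": True,
        # Two of the Rainbow-style options also covered by the torch port
        # (values illustrative only):
        "double_q": True,
        "dueling": True,
    },
    stop={"episode_reward_mean": 150},
)
```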