Commit graph

14 commits

Author / SHA1 / Message / Date
Sven Mika
62dbf26394
[RLlib] POC: Run PGTrainer w/o the distr. exec API (Trainer's new training_iteration method). (#20984) 2021-12-21 08:39:05 +01:00
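A minimal sketch of what the new `training_iteration` path looks like, not the PR's actual code; the helpers `synchronous_parallel_sample` and `train_one_step` are RLlib utilities from roughly this period, and their exact import paths and signatures here are assumptions:

```python
from ray.rllib.agents.trainer import Trainer
from ray.rllib.execution.rollout_ops import synchronous_parallel_sample
from ray.rllib.execution.train_ops import train_one_step


class SimplePGTrainer(Trainer):
    def training_iteration(self):
        # Sample one batch synchronously from all rollout workers ...
        train_batch = synchronous_parallel_sample(worker_set=self.workers)
        # ... then run a single learning update on the local worker and
        # return the resulting learner stats.
        return train_one_step(self, train_batch)
```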
Sven Mika
db058d0fb3
[RLlib] Rename metrics_smoothing_episodes into metrics_num_episodes_for_smoothing for clarity. (#20983) 2021-12-11 20:33:35 +01:00
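In config terms, the rename is just the following (the value is illustrative):

```python
config = {
    # Pre-rename key: "metrics_smoothing_episodes"
    "metrics_num_episodes_for_smoothing": 100,  # episodes to average metrics over
}
```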
Sven Mika
b4790900f5
[RLlib] Sub-class Trainer (instead of build_trainer()): All remaining classes; soft-deprecate build_trainer. (#20725) 2021-12-04 22:05:26 +01:00
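Roughly, the move from the factory function to sub-classing looks like this. A sketch only: `PGTorchPolicy` is used purely as an example policy class, and the module paths are assumptions based on the 1.x source tree:

```python
from ray.rllib.agents.trainer import COMMON_CONFIG, Trainer
from ray.rllib.agents.pg.pg_torch_policy import PGTorchPolicy

# Soft-deprecated factory style:
#   from ray.rllib.agents.trainer_template import build_trainer
#   MyTrainer = build_trainer(
#       name="MyTrainer",
#       default_config=COMMON_CONFIG,
#       default_policy=PGTorchPolicy,
#   )

# Sub-classing style:
class MyTrainer(Trainer):
    @classmethod
    def get_default_config(cls):
        # Default config dict for this trainer.
        return COMMON_CONFIG

    def get_default_policy_class(self, config):
        # Which Policy class to instantiate.
        return PGTorchPolicy
```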
Sven Mika
0de41e4a6b
[RLlib] Trainer sub-class QMIX/MAML/MB-MPO (instead of build_trainer). (#20639) 2021-12-02 13:17:10 +01:00
gjoliver
99a0088233
[RLlib] Unify the way we create local replay buffer for all agents (#19627)
* [RLlib] Unify the way we create and use LocalReplayBuffer for all the agents.

This change:
1. Gets rid of the try...except clause when we call execution_plan(),
   and with it the resulting deprecation warning.
2. Fixes the execution_plan() call in Trainer._try_recover() as well.
3. Most importantly, makes it much easier to create and use different types
   of local replay buffers for all our agents, e.g., allowing us to easily
   create a reservoir-sampling replay buffer for the APPO agent for Riot
   in the near future.
* Introduce explicit configuration for replay buffer types (see the config sketch below).
* Fix the is_training key error.
* Actually deprecate the buffer_size field.
2021-10-26 20:56:02 +02:00
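A sketch of the explicit buffer configuration this commit introduces; the exact key names below are assumptions based on the commit message and later RLlib versions:

```python
config = {
    # One explicit sub-config instead of scattered top-level buffer keys:
    "replay_buffer_config": {
        "type": "LocalReplayBuffer",  # e.g., swap in a reservoir-sampling buffer
        "capacity": 50000,            # replaces the deprecated "buffer_size"
    },
}
```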
gjoliver
9226f9bddc
[RLlib] Report timesteps_this_iter to Tune, so it can track/checkpoint/restore total timesteps trained. (#19264)
* Report timesteps_this_iter to Tune, so it can track/checkpoint/restore total timesteps trained.

* Trigger Build

* lint
2021-10-12 16:03:41 +02:00
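`timesteps_this_iter` is a standard Tune result key, so once it is reported, Tune can, for example, stop a run on total timesteps (sketch):

```python
from ray import tune

tune.run(
    "PG",
    config={"env": "CartPole-v0"},
    # timesteps_total is accumulated from the reported timesteps_this_iter.
    stop={"timesteps_total": 100000},
)
```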
Sven Mika
ed85f59194
[RLlib] Unify all RLlib Trainer.train() -> results[info][learner][policy ID][learner_stats] and add structure tests. (#18879) 2021-09-30 16:39:05 +02:00
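After this change, learner stats for every trainer live under the same nested keys. A sketch; "default_policy" is the policy ID RLlib uses in single-agent setups:

```python
import ray
from ray.rllib.agents.pg import PGTrainer

ray.init()
trainer = PGTrainer(config={"env": "CartPole-v0", "framework": "torch"})
results = trainer.train()
# Unified structure: results[info][learner][<policy ID>][learner_stats].
stats = results["info"]["learner"]["default_policy"]["learner_stats"]
print(stats)  # e.g., policy_loss, entropy, ...
```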
Sven Mika
732197e23a
[RLlib] Multi-GPU for tf-DQN/PG/A2C. (#13393) 2021-03-08 15:41:27 +01:00
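With this commit, the standard num_gpus setting applies to the tf versions of these agents as well (illustrative config):

```python
config = {
    "framework": "tf",
    "num_gpus": 2,  # learner-side GPUs; updates are data-parallel across both
}
```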
Sven Mika
93c0a5549b
[RLlib] Deprecate vf_share_layers in top-level PPO/MAML/MB-MPO configs. (#13397) 2021-01-19 09:51:35 +01:00
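The setting now lives in the model sub-config instead of the top level (sketch based on the commit title):

```python
config = {
    # Deprecated at the top level:
    #   "vf_share_layers": True,
    "model": {
        "vf_share_layers": True,  # share layers between policy and value function
    },
}
```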
Sven Mika
414041c6dd
[RLlib] Do not create env on driver iff num_workers > 0. (#11307) 2020-10-15 18:21:30 +02:00
Michael Luo
47b499d899
Cartpole MAML + Discrete (#11028) 2020-10-02 12:56:34 +02:00
Michael Luo
94fcd43593
[rllib] MAML Transform (#9463)
* MAML Transform

* Moved Inner Adapt to Method in Execution Plan
2020-07-16 11:11:33 -07:00
Michael Luo
851d02463b
[Doc] RLlib Algorithms Documentation: MAML + PyTorch MAML (#9189) 2020-07-03 11:05:15 -07:00
Michael Luo
cf0894d396
[rllib] MAML Agent (#8862)
* Halfway done with transferring MAML to new Ray

* MAML Beta Out

* Debugging MAML atm

* Distributed Execution

* Pendulum Mass Working

* All experiments complete

* Cleaned up codebase

* Travis CI

* Travis CI

* Tests

* Merged conflicts

* Fixed variance bug conflict

* Comment resolved

* Apply suggestions from code review

fixed test_maml

* Update rllib/agents/maml/tests/test_maml.py

* asdf

* Fix testing

Co-authored-by: Sven Mika <sven@anyscale.io>
2020-06-23 09:48:23 -07:00