ray/rllib/tuned_examples/regression_tests
Sven Mika 428516056a
[RLlib] SAC Torch (incl. Atari learning) (#7984)
* Policy-classes cleanup and torch/tf unification.
- Make Policy abstract.
- Add `action_dist` to call to `extra_action_out_fn` (necessary for PPO torch).
- Move some methods and vars to base Policy
  (from TFPolicy): num_state_tensors, ACTION_PROB, ACTION_LOGP and some more.

* Fix `clip_action` import from Policy (should probably be moved into utils altogether).

* - Move `is_recurrent()` and `num_state_tensors()` into TFPolicy (from DynamicTFPolicy).
- Add config to all Policy c'tor calls (as 3rd arg after obs and action spaces).
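The constructor change described above can be sketched as follows. This is a minimal illustration, not RLlib's actual classes: the placeholder spaces and the `MyTorchPolicy` name are invented for the example, and the real Policy base class carries far more state.

```python
# Sketch: `config` is passed as the 3rd positional argument after the
# observation and action spaces, and the base c'tor stores it in
# `self.config` (as described in the commit bullets above).

class Policy:
    def __init__(self, observation_space, action_space, config):
        self.observation_space = observation_space
        self.action_space = action_space
        self.config = config or {}  # stored by the base c'tor

class MyTorchPolicy(Policy):
    def __init__(self, observation_space, action_space, config):
        super().__init__(observation_space, action_space, config)
        self.gamma = self.config.get("gamma", 0.99)

# Placeholder spaces; real code would pass gym.spaces objects here.
policy = MyTorchPolicy("obs_space", "act_space", {"gamma": 0.95})
```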

* Add `config` to c'tor call to TFPolicy.

* Add missing `config` to c'tor call to TFPolicy in marvil_policy.py.

* Fix test_rollout_worker.py::MockPolicy and BadPolicy classes (Policy base class is now abstract).

* Fix LINT errors in Policy classes.

* Implement StatefulPolicy abstract methods in test cases: test_multi_agent_env.py.

* policy.py LINT errors.

* Create a simple TestPolicy to sub-class from when testing Policies (reduces code in some test cases).

* policy.py
- Remove abstractmethod from `apply_gradients` and `compute_gradients` (these are not required if `learn_on_batch` is implemented).

- Fix docstring of `num_state_tensors`.

* Make QMIX torch Policy a child of TorchPolicy (instead of Policy).

* QMixPolicy add empty implementations of abstract Policy methods.

* Store Policy's config in self.config in base Policy c'tor.

* - Make only `compute_actions` an abstractmethod in the base Policy and provide pass
implementations for all other methods not overridden.
- Fix `state_batches=None` default (most Policies don't have internal states).
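The abstract-method layout from the bullets above can be sketched like this. This is a simplified model under the commit's description, not RLlib's actual API: `RandomPolicy` and the dummy return values are invented for illustration.

```python
# Sketch: only `compute_actions` is abstract; `compute_gradients` and
# `apply_gradients` get no-op defaults, so a subclass that implements
# `learn_on_batch` directly need not override them. `state_batches`
# defaults to None, since most policies carry no internal state.
from abc import ABCMeta, abstractmethod

class Policy(metaclass=ABCMeta):
    @abstractmethod
    def compute_actions(self, obs_batch, state_batches=None, **kwargs):
        raise NotImplementedError

    def learn_on_batch(self, samples):
        pass  # no-op default; override this OR the two methods below

    def compute_gradients(self, samples):
        pass  # not abstract: optional if learn_on_batch is implemented

    def apply_gradients(self, gradients):
        pass  # not abstract for the same reason

class RandomPolicy(Policy):
    def compute_actions(self, obs_batch, state_batches=None, **kwargs):
        # Stateless policy: pass state_batches through as an empty list.
        return [0 for _ in obs_batch], state_batches or [], {}

actions, states, info = RandomPolicy().compute_actions([1, 2, 3])
```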

* Cartpole tf learning.

* Cartpole tf AND torch learning (in ~ same ts).

* Cartpole tf AND torch learning (in ~ same ts). 2

* Cartpole tf (torch syntax-broken) learning (in ~ same ts). 3

* Cartpole tf AND torch learning (in ~ same ts). 4

* Cartpole tf AND torch learning (in ~ same ts). 5

* Cartpole tf AND torch learning (in ~ same ts). 6

* Cartpole tf AND torch learning (in ~ same ts). Pendulum tf learning.

* WIP.

* WIP.

* SAC torch learning Pendulum.

* WIP.

* SAC torch and tf learning Pendulum and Cartpole after cleanup.

* WIP.

* LINT.

* LINT.

* SAC: Move policy.target_model to policy.device as well.

* Fixes and cleanup.

* Fix data format of tf keras Conv2d layers (broken for some tf versions that have data_format="channels_first" as default).
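The layout issue behind this fix can be illustrated without TensorFlow. The sketch below only models the shape mismatch: if a Conv2D layer assumes channels_first (NCHW) but receives channels_last (NHWC) input, it infers the wrong channel count. The `channel_dim` helper and the example shape are invented for illustration; the actual fix is passing `data_format` explicitly to the tf.keras Conv2D layers.

```python
# Sketch of the NCHW ("channels_first") vs NHWC ("channels_last")
# interpretation of a 4D image batch shape.

def channel_dim(shape, data_format):
    """Return the channel count of a 4D image-batch shape."""
    if data_format == "channels_first":   # (N, C, H, W)
        return shape[1]
    if data_format == "channels_last":    # (N, H, W, C)
        return shape[3]
    raise ValueError("unknown data_format: %r" % (data_format,))

atari_batch = (32, 84, 84, 4)  # typical NHWC Atari frame stack

# Correct interpretation: 4 stacked frames as channels.
channel_dim(atari_batch, "channels_last")   # -> 4
# Misreading the same tensor as channels_first yields 84 "channels",
# which is the kind of silent breakage the commit guards against.
channel_dim(atari_batch, "channels_first")  # -> 84
```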

* Fixes and LINT.

* Fixes and LINT.

* Fix and LINT.

* WIP.

* Test fixes and LINT.

* Fixes and LINT.

Co-authored-by: Sven Mika <sven@Svens-MacBook-Pro.local>
2020-04-15 13:25:16 +02:00
cartpole-a2c-microbatch.yaml [rllib] Add microbatch optimizer with A2C example (#6161) 2019-11-14 12:14:00 -08:00
cartpole-a2c-torch.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
cartpole-a3c.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
cartpole-appo-vtrace.yaml [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) 2020-03-14 12:05:04 -07:00
cartpole-appo.yaml [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) 2020-03-14 12:05:04 -07:00
cartpole-ars.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
cartpole-ddppo.yaml [rllib] Add Decentralized DDPPO trainer and documentation (#7088) 2020-02-10 15:28:27 -08:00
cartpole-dqn-tf-param-noise.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
cartpole-dqn-tf.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
cartpole-dqn-torch-param-noise.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
cartpole-dqn-torch.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
cartpole-es.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
cartpole-pg-tf.yaml [RLlib] Add PG torch regression test (#6828) 2020-01-18 15:57:12 -08:00
cartpole-pg-torch.yaml [RLlib] Add PG torch regression test (#6828) 2020-01-18 15:57:12 -08:00
cartpole-ppo-tf.yaml [RLlib] PPO(torch) on CartPole not tuned well enough for consistent learning (#7556) 2020-03-11 20:31:27 -07:00
cartpole-ppo-torch.yaml [RLlib] PPO(torch) on CartPole not tuned well enough for consistent learning (#7556) 2020-03-11 20:31:27 -07:00
cartpole-sac-tf.yaml [RLlib] SAC Torch (incl. Atari learning) (#7984) 2020-04-15 13:25:16 +02:00
cartpole-sac-torch.yaml [RLlib] SAC Torch (incl. Atari learning) (#7984) 2020-04-15 13:25:16 +02:00
cartpole-simpleq-tf.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
cartpole-simpleq-torch.yaml [RLlib] DQN torch version. (#7597) 2020-04-06 11:56:16 -07:00
pendulum-appo-vtrace.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
pendulum-ddpg.yaml [RLlib] DDPG refactor and Exploration API action noise classes. (#7314) 2020-03-01 11:53:35 -08:00
pendulum-ppo.yaml [rllib] Try moving RLlib to top level dir (#5324) 2019-08-05 23:25:49 -07:00
pendulum-sac-tf.yaml [RLlib] SAC Torch (incl. Atari learning) (#7984) 2020-04-15 13:25:16 +02:00
pendulum-sac-torch.yaml [RLlib] SAC Torch (incl. Atari learning) (#7984) 2020-04-15 13:25:16 +02:00
pendulum-td3.yaml [RLlib] DDPG refactor and Exploration API action noise classes. (#7314) 2020-03-01 11:53:35 -08:00