Implementation of the Soft Actor-Critic algorithm:

[1] Soft Actor-Critic Algorithms and Applications - T. Haarnoja, A. Zhou, K. Hartikainen, et al. https://arxiv.org/abs/1812.05905
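A key contribution of [1] is the automatic adjustment of the entropy temperature alpha: alpha is trained to keep the policy's entropy near a target value, via the loss J(alpha) = E[-alpha * (log pi(a|s) + target_entropy)]. The following is a minimal NumPy sketch of that loss (not RLlib's actual implementation; the function name and signature are illustrative):

```python
import numpy as np

def alpha_loss(log_alpha, log_pis, target_entropy):
    """Temperature loss from [1]: J(alpha) = E[-alpha * (log pi + H_target)].

    Args:
        log_alpha: Log of the current temperature (trained in log space
            so alpha stays positive).
        log_pis: Array of log pi(a|s) values for sampled actions.
        target_entropy: Desired minimum policy entropy (a common heuristic
            for continuous actions is -dim(action_space)).
    """
    alpha = np.exp(log_alpha)
    # When the policy's entropy (-log_pis on average) falls below the
    # target, the gradient w.r.t. log_alpha pushes alpha up, which in
    # turn penalizes low-entropy policies more strongly.
    return float(np.mean(-alpha * (log_pis + target_entropy)))
```

Example: with log_alpha=0.0 (alpha=1), log_pis=[-1.0, -3.0] and target_entropy=-2.0, the loss is mean([3.0, 5.0]) = 4.0.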

To support discrete action spaces, this implementation additionally incorporates the following patch on top of the original algorithm: [2] Soft Actor-Critic for Discrete Action Settings - Petros Christodoulou https://arxiv.org/pdf/1910.07207v2.pdf
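The core idea of [2] is that with a discrete action set, the expectations over actions in the SAC losses can be computed exactly from the categorical policy's probabilities instead of being estimated from sampled actions (and no reparameterization trick is needed). A minimal NumPy sketch of the resulting actor loss, E_s[sum_a pi(a|s) * (alpha * log pi(a|s) - Q(s, a))] (again illustrative, not RLlib's actual code):

```python
import numpy as np

def discrete_sac_actor_loss(probs, log_probs, q_values, alpha):
    """Actor loss for discrete SAC per [2].

    Args:
        probs: [batch, num_actions] categorical action probabilities pi(a|s).
        log_probs: [batch, num_actions] log pi(a|s).
        q_values: [batch, num_actions] critic outputs Q(s, a) for all actions.
        alpha: Entropy temperature.
    """
    # Exact expectation over the (small, enumerable) discrete action set:
    # minimizing this pushes probability mass toward high-Q actions while
    # the alpha * log_probs term keeps the policy from collapsing.
    per_state = np.sum(probs * (alpha * log_probs - q_values), axis=-1)
    return float(np.mean(per_state))
```

For example, with alpha=0 the loss reduces to the negative expected Q-value under the policy: probs=[[0.5, 0.5]] and q_values=[[1.0, 3.0]] give a loss of -2.0.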