ray/rllib/agents/sac

Implementation of the Soft Actor-Critic algorithm:

[1] Soft Actor-Critic Algorithms and Applications - T. Haarnoja, A. Zhou, K. Hartikainen, et al. https://arxiv.org/abs/1812.05905.pdf
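Below is a minimal sketch of training this SAC implementation through RLlib's Python API. The trainer class name (`SACTrainer`), the environment (`Pendulum-v0`), and the config keys shown are assumptions based on RLlib's standard Trainer interface of the time, not something specified by this README:

```python
import ray
from ray.rllib.agents.sac import SACTrainer  # assumed import path for this directory

ray.init()

# Assumed minimal config; keys left unspecified fall back to sac.DEFAULT_CONFIG.
config = {
    "env": "Pendulum-v0",   # continuous-action Gym benchmark
    "framework": "torch",   # or "tf"
    "num_workers": 0,       # collect rollouts in the trainer process
}

trainer = SACTrainer(config=config)
for _ in range(10):
    result = trainer.train()
    print(result["episode_reward_mean"])

ray.shutdown()
```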

To support discrete action spaces, we implement the following patch on top of the original algorithm:

[2] Soft Actor-Critic for Discrete Action Settings - Petros Christodoulou https://arxiv.org/pdf/1910.07207v2.pdf
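As a sketch of how the discrete-action variant would be exercised (the environment name and config are assumptions, and `ray.init()` is assumed to have been called as in the previous sketch); with a `Discrete` action space the policy is expected to follow the patch from [2]:

```python
from ray.rllib.agents.sac import SACTrainer  # assumed import path

# CartPole-v0 has a Discrete(2) action space, so SAC runs in its
# discrete-action mode rather than the squashed-Gaussian continuous mode.
trainer = SACTrainer(config={"env": "CartPole-v0", "framework": "tf"})
print(trainer.train()["episode_reward_mean"])
```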