Soft Actor Critic (SAC)

Overview

SAC is a state-of-the-art, model-free, off-policy RL algorithm that performs remarkably well on continuous-control domains. SAC uses an actor-critic framework and addresses the high sample complexity and training instability of other methods by learning within a maximum-entropy framework. Unlike the standard RL objective, which maximizes only the expected sum of future rewards, SAC maximizes the expected sum of rewards plus the expected entropy of the current policy. In addition to training an actor and a critic against these entropy-regularized objectives, SAC also learns the entropy coefficient (temperature) automatically.
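Concretely, the entropy-regularized objective can be written as follows (a standard formulation following the original SAC paper, shown here for illustration rather than taken from this repo):

```latex
% Maximum-entropy RL objective (Haarnoja et al., 2018): the policy maximizes
% expected return plus an entropy bonus weighted by the temperature alpha.
J(\pi) = \sum_{t=0}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
    \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\bigl(\pi(\cdot \mid s_t)\bigr) \right]
```

Here H(π(·|s_t)) = −E_{a_t∼π}[log π(a_t|s_t)] is the policy entropy at state s_t; a larger α encourages more exploration, and SAC can adjust α during training rather than fixing it by hand.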

Documentation & Implementation:

Soft Actor-Critic Algorithm (SAC), with support for discrete actions as well as continuous ones.

Detailed Documentation

Implementation
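For reference, a minimal training sketch using RLlib's SAC trainer (assuming a Ray 1.x install and Gym's Pendulum-v0; the config overrides below are illustrative, not tuned values):

```python
import ray
from ray.rllib.agents.sac import SACTrainer, DEFAULT_CONFIG

ray.init()

# Start from RLlib's default SAC config and override a few fields.
config = DEFAULT_CONFIG.copy()
config["env"] = "Pendulum-v0"   # a simple continuous-control task
config["framework"] = "torch"   # SAC supports both "tf" and "torch"
config["initial_alpha"] = 1.0   # starting entropy temperature; learned during training

trainer = SACTrainer(config=config)

# Each train() call runs one training iteration and returns metrics.
for i in range(5):
    result = trainer.train()
    print(f"iter {i}: episode_reward_mean={result['episode_reward_mean']:.2f}")

ray.shutdown()
```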