Conservative Q-Learning (CQL)

Overview

CQL is an offline RL algorithm that mitigates the overestimation of Q-values outside the dataset distribution via conservative critic estimates. CQL does this by adding a simple Q regularizer loss to the standard Bellman update loss. This ensures that the critic does not output overly-optimistic Q-values, and the regularizer can be added on top of any off-policy Q-learning algorithm (in this case, SAC).
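
The sketch below illustrates the idea in PyTorch: a standard Bellman (TD) loss on dataset transitions plus a conservative term that pushes down Q-values of sampled out-of-distribution actions (via log-sum-exp) and pushes up Q-values of dataset actions. This is a minimal, hypothetical example, not RLlib's actual `cql_torch_policy.py`; the `critic` module, tensor names, and `cql_alpha` weight are assumptions for illustration only.

```python
# Minimal sketch of a CQL-regularized critic loss (hypothetical, not RLlib's code).
import torch
import torch.nn.functional as F


def cql_critic_loss(
    critic,          # assumed Q-network: (obs, action) -> Q-value, shape [batch, 1]
    obs,             # [batch, obs_dim] observations from the offline dataset
    actions,         # [batch, act_dim] dataset actions
    td_target,       # [batch] bootstrapped Bellman target (computed as in SAC)
    random_actions,  # [batch, num_samples, act_dim] sampled (e.g. uniform) actions
    cql_alpha=1.0,   # weight of the conservative regularizer
):
    # Standard Bellman update loss on dataset (obs, action) pairs.
    q_data = critic(obs, actions).squeeze(-1)              # [batch]
    bellman_loss = F.mse_loss(q_data, td_target)

    # Conservative regularizer: log-sum-exp over Q-values of sampled
    # out-of-distribution actions, minus Q-values of dataset actions.
    batch, num_samples, _ = random_actions.shape
    obs_rep = obs.unsqueeze(1).expand(-1, num_samples, -1)  # [batch, n, obs_dim]
    q_rand = critic(
        obs_rep.reshape(batch * num_samples, -1),
        random_actions.reshape(batch * num_samples, -1),
    ).reshape(batch, num_samples)                           # [batch, n]
    conservative_loss = (torch.logsumexp(q_rand, dim=1) - q_data).mean()

    # Total critic loss = Bellman loss + weighted conservative penalty.
    return bellman_loss + cql_alpha * conservative_loss
```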

Documentation & Implementation:

Conservative Q-Learning (CQL).

Detailed Documentation

Implementation