# Conservative Q-Learning (CQL)

## Overview

CQL is an offline RL algorithm that mitigates the overestimation of Q-values for actions outside the dataset distribution by producing conservative critic estimates. It does so by adding a simple Q-value regularizer to the standard Bellman update loss, which keeps the critic from outputting overly optimistic Q-values. This regularizer can be added on top of any off-policy Q-learning algorithm; in this implementation it is built on top of SAC.
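
The snippet below is a minimal PyTorch sketch of the conservative regularizer idea only, not RLlib's actual loss (which lives in `cql_torch_policy.py` / `cql_tf_policy.py`). The names `q_net`, `num_sampled_actions`, `temperature`, and `min_q_weight` are illustrative assumptions, and the full CQL loss also samples actions from the current policy rather than only uniformly at random:

```python
# Illustrative sketch of CQL's conservative Q regularizer (not RLlib's code).
import torch
import torch.nn as nn


def cql_regularizer(q_net, obs, dataset_actions,
                    num_sampled_actions=10, temperature=1.0, min_q_weight=5.0):
    """Push down Q-values of sampled (possibly out-of-distribution) actions,
    push up Q-values of actions that actually appear in the dataset."""
    batch_size, action_dim = dataset_actions.shape

    # Sample random actions in [-1, 1] as stand-ins for out-of-distribution actions.
    random_actions = torch.rand(batch_size, num_sampled_actions, action_dim) * 2.0 - 1.0
    obs_rep = obs.unsqueeze(1).expand(-1, num_sampled_actions, -1)
    q_random = q_net(torch.cat([obs_rep, random_actions], dim=-1)).squeeze(-1)

    # Soft maximum over the sampled actions: the "push down" term.
    push_down = temperature * torch.logsumexp(q_random / temperature, dim=1)

    # Q-values of the dataset actions: the "push up" term.
    q_data = q_net(torch.cat([obs, dataset_actions], dim=-1)).squeeze(-1)

    # This term is simply added to the usual Bellman (SAC critic) loss.
    return min_q_weight * (push_down - q_data).mean()


# Toy usage: 3-dim observations, 1-dim actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
reg = cql_regularizer(q_net, torch.randn(32, 3), torch.rand(32, 1) * 2.0 - 1.0)
```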

## Documentation & Implementation

### Conservative Q-Learning (CQL)

- Detailed Documentation
- Implementation
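
As a quick-start sketch, the configuration below shows one way to run CQL on an offline dataset, assuming RLlib's `CQLConfig` builder API; the environment, dataset path, and parameter values are illustrative, not prescribed defaults:

```python
from ray.rllib.algorithms.cql import CQLConfig

# Assumes an offline dataset recorded beforehand (the path is illustrative).
config = (
    CQLConfig()
    .environment("Pendulum-v1")
    .framework("torch")
    .offline_data(input_="/tmp/pendulum-offline-data")
    .training(bc_iters=20000, temperature=1.0, num_actions=10, lagrangian=False)
)

algo = config.build()
for _ in range(5):
    print(algo.train())
```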