ray/rllib/agents/cql
Sven Mika 3052193c9e
[RLlib] Fix CQL getting stuck when deprecated timesteps_per_iteration is used (use min_train_timesteps_per_reporting instead). (#24345)

CQL does not collect sample timesteps, yet the deprecated timesteps_per_iteration setting is automatically translated into the new min_sample_timesteps_per_reporting. For CQL and other purely offline RL algorithms it should instead be translated into min_train_timesteps_per_reporting.

If timesteps_per_iteration is set, CQL never leaves the first iteration because it assumes the iteration is not done yet (sample timesteps always remain at 0).
2022-04-29 21:02:34 +01:00
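A minimal sketch of sidestepping the issue by setting min_train_timesteps_per_reporting explicitly, so reporting is driven by train steps rather than (never-collected) sample steps. The env, offline-data path, and step count below are assumptions for illustration, not values from this PR:

```python
import ray
from ray.rllib.agents.cql import CQLTrainer

ray.init()

config = {
    "framework": "torch",
    # Hypothetical offline dataset path; CQL trains purely from offline data.
    "input": "/tmp/cql-input/*.json",
    # Use the train-timestep criterion instead of the deprecated
    # timesteps_per_iteration; CQL collects no sample timesteps.
    "min_train_timesteps_per_reporting": 1000,
}

trainer = CQLTrainer(config=config, env="Pendulum-v1")
results = trainer.train()  # Returns once >= 1000 train timesteps are done.
```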
tests [RLlib] CQL: training iteration function. (#24166) 2022-04-26 14:28:39 +02:00
__init__.py [RLlib] New Offline RL Algorithm: CQL (based on SAC) (#13118) 2020-12-30 10:11:57 -05:00
cql.py [RLlib] Fix CQL getting stuck when deprecated timesteps_per_iteration is used (use min_train_timesteps_per_reporting instead). (#24345) 2022-04-29 21:02:34 +01:00
cql_tf_policy.py [RLlib] Issue 22693: RNN-SAC fixes. (#23814) 2022-04-25 09:19:24 +02:00
cql_torch_policy.py [RLlib] Issue 22693: RNN-SAC fixes. (#23814) 2022-04-25 09:19:24 +02:00
README.md [RLlib] CQL Documentation + Tests (#14531) 2021-03-11 18:51:39 +01:00

Conservative Q-Learning (CQL)

Overview

CQL is an offline RL algorithm that mitigates the overestimation of Q-values for actions outside the dataset distribution via conservative critic estimates. It does this by adding a simple Q regularizer loss to the standard Bellman update loss. This ensures that the critic does not output overly optimistic Q-values, and the regularizer can be added on top of any off-policy Q-learning algorithm (in this case, we use SAC).
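As a rough illustration of that regularizer, the sketch below (PyTorch, with assumed tensor shapes and variable names, not RLlib's actual internals) penalizes Q-values of sampled out-of-distribution actions relative to Q-values of the actions in the dataset:

```python
import torch

def cql_regularizer(q_sampled_actions, q_dataset_actions, cql_alpha=1.0):
    """Conservative Q regularizer sketch (CQL-H style).

    q_sampled_actions: [batch, num_sampled] Q-values for actions sampled from
        the policy / a uniform distribution (out-of-distribution actions).
    q_dataset_actions: [batch] Q-values for the actions stored in the dataset.
    """
    # logsumexp over sampled actions acts as a soft maximum over Q,
    # pushing down Q-values on out-of-distribution actions.
    ood_term = torch.logsumexp(q_sampled_actions, dim=1).mean()
    # Subtracting Q on dataset actions keeps in-distribution values from
    # being pushed down as well.
    dataset_term = q_dataset_actions.mean()
    return cql_alpha * (ood_term - dataset_term)
```

This term is simply added to the standard Bellman loss of the underlying off-policy algorithm (SAC here), which is what keeps the critic conservative.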

Documentation & Implementation:

Conservative Q-Learning (CQL).

Detailed Documentation

Implementation