Conservative Q-Learning (CQL)

Overview

CQL is an offline RL algorithm that mitigates the overestimation of Q-values for actions outside the dataset distribution by producing conservative critic estimates. It does this by adding a simple Q regularizer loss to the standard Bellman update loss. This ensures that the critic does not output overly optimistic Q-values, and the regularizer can be added on top of any off-policy Q-learning algorithm (in this case, we use SAC).
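
To make the idea concrete, here is a minimal PyTorch sketch of the conservative penalty; the function name, tensor shapes, and the `min_q_weight` coefficient shown here are illustrative, not RLlib's exact internals:

```python
import torch

def cql_regularizer(q_ood, q_data, min_q_weight=1.0):
    """Conservative penalty added on top of the standard Bellman loss.

    q_ood:  Q-values for sampled / out-of-distribution actions,
            shape [batch, num_sampled_actions].
    q_data: Q-values for the actions actually taken in the dataset,
            shape [batch].
    """
    # Push down Q-values on actions outside the dataset distribution
    # (a soft maximum via logsumexp) while pushing up Q-values on the
    # actions seen in the data. The net effect is a conservative
    # (lower-bound) critic.
    penalty = torch.logsumexp(q_ood, dim=1).mean() - q_data.mean()
    return min_q_weight * penalty
```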

Documentation & Implementation:

Conservative Q-Learning (CQL).

Detailed Documentation

Implementation
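
For reference, a minimal usage sketch against the 2021-era RLlib trainer API; the environment name, dataset path, and loop length are placeholders, not prescribed settings:

```python
import ray
from ray.rllib.agents import cql

ray.init()

# CQL trains from a pre-recorded offline dataset ("input") rather than
# by interacting with the environment.
config = cql.CQL_DEFAULT_CONFIG.copy()
config["env"] = "Pendulum-v0"          # placeholder env (used for spaces/eval)
config["input"] = "/tmp/offline-data"  # placeholder path to JSON experiences

trainer = cql.CQLTrainer(config=config)
for _ in range(10):
    print(trainer.train())
```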