mirror of
https://github.com/vale981/ray
synced 2025-03-05 10:01:43 -05:00
Conservative Q-Learning (CQL)
Overview
CQL is an offline RL algorithm that mitigates the overestimation of Q-values outside the dataset distribution via conservative critic estimates. CQL does this by adding a simple Q regularizer loss to the standard Bellman update loss. This ensures that the critic does not output overly optimistic Q-values, and the regularizer can be added on top of any off-policy Q-learning algorithm (in this case, we use SAC).
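As a rough illustration of the regularizer described above, here is a minimal NumPy sketch of the CQL penalty for a discrete-action critic: it pushes down Q-values across all actions (via a log-sum-exp term) while pushing up Q-values of actions actually seen in the dataset. The function name, shapes, and the `alpha` weight are illustrative, not RLlib's actual implementation (which operates on SAC's continuous-action critic).

```python
import numpy as np

def cql_regularizer(q_values, dataset_actions, alpha=1.0):
    """Sketch of the CQL penalty added to the Bellman loss.

    q_values:        (batch, num_actions) critic outputs Q(s, a).
    dataset_actions: (batch,) indices of the actions in the offline dataset.
    alpha:           regularizer weight (illustrative default).
    """
    # Soft maximum over all actions: penalizes high Q-values everywhere.
    logsumexp_q = np.log(np.sum(np.exp(q_values), axis=1))
    # Q-values of the actions that actually appear in the dataset.
    dataset_q = q_values[np.arange(len(q_values)), dataset_actions]
    # The gap is non-negative, so the critic is pushed toward
    # conservative estimates for out-of-distribution actions.
    return alpha * np.mean(logsumexp_q - dataset_q)

q = np.array([[1.0, 2.0], [0.0, 0.0]])
penalty = cql_regularizer(q, np.array([1, 0]))
```

The total critic loss would then be the standard Bellman error plus this penalty, so minimizing it trades off fitting the data against conservatism.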
Documentation & Implementation:
Conservative Q-Learning (CQL).