Fix CQL getting stuck when the deprecated timesteps_per_iteration is used (use min_train_timesteps_per_reporting instead). CQL does not collect sample timesteps, and the deprecated timesteps_per_iteration is automatically translated into the new min_sample_timesteps_per_reporting; for CQL and other purely offline RL algos it should instead be translated into min_train_timesteps_per_reporting. If timesteps_per_iteration is used, CQL never leaves the first iteration, because it thinks it is not done yet (sample timesteps always remain at 0).
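A minimal configuration sketch of the fix's intent, assuming the ray.rllib.agents.cql import path of this Ray version; the dataset path, env, and numeric values are placeholders, not RLlib's exact example code:

```python
# Hypothetical sketch: drive iteration reporting for the purely offline CQL by
# *trained* timesteps, since a purely offline algo never accumulates sample timesteps.
from ray.rllib.agents.cql import CQLTrainer, CQL_DEFAULT_CONFIG

config = CQL_DEFAULT_CONFIG.copy()
config["input"] = "/path/to/offline/dataset"  # placeholder offline data source
# Deprecated: config["timesteps_per_iteration"] = 1000  (maps to the sample-based key)
# Correct key for offline-only algos: report once this many train steps have happened.
config["min_train_timesteps_per_reporting"] = 1000

trainer = CQLTrainer(config=config, env="Pendulum-v1")  # env only provides spaces here
result = trainer.train()  # returns after 1000 *train* timesteps, not sample timesteps
```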
Directory contents:

- tests
- __init__.py
- cql.py
- cql_tf_policy.py
- cql_torch_policy.py
- README.md
Conservative Q-Learning (CQL)
Overview
CQL is an offline RL algorithm that mitigates the overestimation of Q-values outside the dataset distribution via conservative critic estimates. It does this by adding a simple Q regularizer loss to the standard Bellman update loss. This ensures that the critic does not output overly optimistic Q-values, and the regularizer can be added on top of any off-policy Q-learning algorithm (in this case, we use SAC).
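A minimal sketch of this idea, assuming PyTorch and illustrative names (q_data, q_sampled_actions, cql_alpha); this is not RLlib's exact loss implementation:

```python
import torch

def cql_critic_loss(q_data, q_target, q_sampled_actions, cql_alpha=1.0):
    """
    q_data:            Q(s, a) for (s, a) pairs drawn from the offline dataset, shape [B]
    q_target:          bootstrapped Bellman targets, shape [B]
    q_sampled_actions: Q(s, a') for actions sampled outside the dataset
                       (e.g. random or current-policy actions), shape [B, N]
    """
    # Standard Bellman / TD error term, as in the underlying SAC critic update.
    bellman_loss = torch.mean((q_data - q_target) ** 2)

    # Conservative regularizer: push Q-values down on out-of-distribution actions
    # (log-sum-exp over sampled actions) and up on dataset actions.
    cql_regularizer = torch.mean(
        torch.logsumexp(q_sampled_actions, dim=1) - q_data
    )
    return bellman_loss + cql_alpha * cql_regularizer
```

The log-sum-exp term acts as a soft maximum over the sampled out-of-distribution actions, so minimizing it lowers those Q-values relative to the Q-values of actions actually present in the dataset.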
Documentation & Implementation:
Conservative Q-Learning (CQL).