ray/rllib/policy
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `tests/` | [RLlib] Trajectory view API - 03 Fast LSTM + prev actions/rewards (#9950) | 2020-08-21 12:35:16 +02:00 |
| `__init__.py` | [rllib] Add type annotations for evaluation/, env/ packages (#9003) | 2020-06-19 13:09:05 -07:00 |
| `dynamic_tf_policy.py` | [RLlib] Trajectory view API: enable by default for SAC, DDPG, DQN, SimpleQ (#11827) | 2020-11-16 10:54:35 -08:00 |
| `eager_tf_policy.py` | [RLlib] Add on_learn_on_batch (Policy) callback to DefaultCallbacks. (#12070) | 2020-11-18 15:39:23 +01:00 |
| `policy.py` | [RLlib] Add on_learn_on_batch (Policy) callback to DefaultCallbacks. (#12070) | 2020-11-18 15:39:23 +01:00 |
| `rnn_sequencing.py` | [RLlib] Fix RNN learning for tf-eager/tf2.x. (#11720) | 2020-11-02 11:18:41 +01:00 |
| `sample_batch.py` | [RLlib] Fix all example scripts to run on GPUs. (#11105) | 2020-10-02 23:07:44 +02:00 |
| `tf_policy.py` | [RLlib] Add on_learn_on_batch (Policy) callback to DefaultCallbacks. (#12070) | 2020-11-18 15:39:23 +01:00 |
| `tf_policy_template.py` | [RLlib] Trajectory view API: Enable by default for PPO, IMPALA, PG, A3C (tf and torch). (#11747) | 2020-11-12 16:27:34 +01:00 |
| `torch_policy.py` | [RLlib] Add on_learn_on_batch (Policy) callback to DefaultCallbacks. (#12070) | 2020-11-18 15:39:23 +01:00 |
| `torch_policy_template.py` | [RLlib] Trajectory view API: Enable by default for PPO, IMPALA, PG, A3C (tf and torch). (#11747) | 2020-11-12 16:27:34 +01:00 |
| `view_requirement.py` | [RLlib] Trajectory view API: Simple List Collector (on by default for PPO); LSTM-agnostic (#11056) | 2020-10-01 16:57:10 +02:00 |
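Several entries above (`policy.py`, `tf_policy.py`, `torch_policy.py`, `eager_tf_policy.py`) point at PR #12070, which added an `on_learn_on_batch` hook to `DefaultCallbacks`, fired per Policy just before a training batch is learned on. The following is a minimal sketch of how that hook can be used, assuming an RLlib version from around the time of these commits (~Ray 1.1); import paths and the exact callback signature may differ in other releases.

```python
import ray
from ray.rllib.agents.callbacks import DefaultCallbacks
from ray.rllib.agents.ppo import PPOTrainer


class LogBatchCallbacks(DefaultCallbacks):
    # Called once per Policy, right before Policy.learn_on_batch() runs
    # (hook added in #12070; signature hedged to the ~1.1-era API).
    def on_learn_on_batch(self, *, policy, train_batch, **kwargs):
        # train_batch is a SampleBatch; log its size as a simple example.
        print("learn_on_batch called with {} timesteps".format(
            train_batch.count))


if __name__ == "__main__":
    ray.init()
    trainer = PPOTrainer(
        env="CartPole-v0",
        config={
            # Callbacks are configured as a class, not an instance.
            "callbacks": LogBatchCallbacks,
            "framework": "tf",
        })
    trainer.train()
```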
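The remaining commits revolve around the trajectory view API (`view_requirement.py`, plus the "enable by default" PRs #11747 and #11827): models and policies declare which slices of the trajectory they need, and the sample collector materializes only those views. Below is a minimal sketch of such a declaration, assuming the ~1.1-era `ViewRequirement` constructor; the dict shown is illustrative, and the attribute under which policies or models expose it has varied across versions.

```python
from ray.rllib.policy.view_requirement import ViewRequirement

# Declare which trajectory slices a model wants to see. Here: the
# current observation plus the previous action and reward, i.e. the
# pattern exercised by the "Fast LSTM + prev actions/rewards" test.
view_requirements = {
    # Current observation (shift=0 is the default).
    "obs": ViewRequirement(data_col="obs"),
    # Previous action: the "actions" column, shifted back one timestep.
    "prev_actions": ViewRequirement(data_col="actions", shift=-1),
    # Previous reward: the "rewards" column, shifted back one timestep.
    "prev_rewards": ViewRequirement(data_col="rewards", shift=-1),
}
```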