ray/rllib/execution
Latest commit: 2020-12-08 16:41:45 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| tests/ | [rllib] Replay buffer size inaccurate with replay_seq_len option (#10988) | 2020-09-25 13:47:23 -07:00 |
| __init__.py | [rllib] Add execution module to package ref (#10941) | 2020-09-21 23:03:06 -07:00 |
| common.py | [RLlib] Trajectory View API (preparatory cleanup and enhancements). (#9678) | 2020-07-29 21:15:09 +02:00 |
| concurrency_ops.py | [rllib] Add execution module to package ref (#10941) | 2020-09-21 23:03:06 -07:00 |
| learner_thread.py | [RLlib] SAC algo cleanup. (#10825) | 2020-09-20 11:27:02 +02:00 |
| metric_ops.py | [RLlib] SAC algo cleanup. (#10825) | 2020-09-20 11:27:02 +02:00 |
| minibatch_buffer.py | [RLlib] SAC algo cleanup. (#10825) | 2020-09-20 11:27:02 +02:00 |
| multi_gpu_impl.py | [RLlib] Curiosity exploration module: tf/tf2.x/tf-eager support. (#11945) | 2020-11-29 12:31:24 +01:00 |
| multi_gpu_learner.py | [RLlib] Trajectory view API: Enable by default for PPO, IMPALA, PG, A3C (tf and torch). (#11747) | 2020-11-12 16:27:34 +01:00 |
| replay_buffer.py | [rllib] Replay buffer size inaccurate with replay_seq_len option (#10988) | 2020-09-25 13:47:23 -07:00 |
| replay_ops.py | add large data warning (#10957) | 2020-09-23 15:46:06 -07:00 |
| rollout_ops.py | [RLlib] Batch-size for truncate_episode batch_mode should be confgurable in agent-steps (rather than env-steps), if needed. (#12420) | 2020-12-08 16:41:45 -08:00 |
| segment_tree.py | [rllib] Deprecate policy optimizers (#8345) | 2020-05-21 10:16:18 -07:00 |
| train_ops.py | [RLlib] Trajectory view API: Enable by default for PPO, IMPALA, PG, A3C (tf and torch). (#11747) | 2020-11-12 16:27:34 +01:00 |
| tree_agg.py | [RLlib] Batch-size for truncate_episode batch_mode should be confgurable in agent-steps (rather than env-steps), if needed. (#12420) | 2020-12-08 16:41:45 -08:00 |
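Of the files above, `segment_tree.py` provides the segment trees used by the prioritized replay buffer in `replay_buffer.py`: they make proportional priority sampling O(log n). As a rough illustration of the technique only (a minimal sketch, not RLlib's actual implementation; the class name `SumSegmentTree` and its methods here are generic, though RLlib's version is similar in spirit):

```python
class SumSegmentTree:
    """Minimal sum segment tree: O(log n) total-sum queries and updates.

    This is the structure behind proportional prioritized replay: store
    each transition's priority at a leaf, then sample by drawing a uniform
    value in [0, total_sum) and descending to the matching leaf.
    """

    def __init__(self, capacity):
        # Power-of-two capacity keeps the flat-array layout simple.
        assert capacity > 0 and capacity & (capacity - 1) == 0
        self.capacity = capacity
        # tree[1] is the root; leaves live at indices [capacity, 2*capacity).
        self.tree = [0.0] * (2 * capacity)

    def __setitem__(self, idx, value):
        # Write the leaf, then update sums on the path back to the root.
        i = idx + self.capacity
        self.tree[i] = value
        i //= 2
        while i >= 1:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sum(self):
        return self.tree[1]

    def find_prefixsum_idx(self, prefixsum):
        # Descend from the root: go left if the left subtree covers the
        # remaining mass, else subtract it and go right.
        i = 1
        while i < self.capacity:
            if self.tree[2 * i] > prefixsum:
                i = 2 * i
            else:
                prefixsum -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.capacity


tree = SumSegmentTree(4)
tree[0] = 1.0  # priority of transition 0
tree[1] = 3.0  # priority of transition 1
print(tree.sum())                    # total priority mass: 4.0
print(tree.find_prefixsum_idx(0.5))  # falls in transition 0's mass: 0
print(tree.find_prefixsum_idx(2.5))  # falls in transition 1's mass: 1
```

A min segment tree with the same shape (replacing `+` with `min`) gives the minimum priority needed for importance-sampling weight normalization.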