File | Last commit message | Commit date
---- | ------------------- | -----------
tests | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
__init__.py | [rllib] [experimental] Decentralized Distributed PPO for torch (DD-PPO) (#6918) | 2020-01-25 22:36:43 -08:00
aso_aggregator.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
aso_learner.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
aso_minibatch_buffer.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
aso_multi_gpu_learner.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
aso_tree_aggregator.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
async_gradients_optimizer.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
async_replay_optimizer.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
async_samples_optimizer.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
microbatch_optimizer.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
multi_gpu_impl.py | [Core/RLlib] Move log_once from rllib to ray.util. (#7273) | 2020-02-27 10:40:44 -08:00
multi_gpu_optimizer.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
policy_optimizer.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
replay_buffer.py | [RLlib] Fix bugs and speed up SegmentTree | 2020-03-13 01:03:07 -07:00
rollout.py | [rllib] Rename sample_batch_size => rollout_fragment_length (#7503) | 2020-03-14 12:05:04 -07:00
segment_tree.py | [RLlib] Fix bugs and speed up SegmentTree | 2020-03-13 01:03:07 -07:00
sync_batch_replay_optimizer.py | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
sync_replay_optimizer.py | [rllib] Fix per-worker exploration in Ape-X; make more kwargs required for future safety (#7504) | 2020-03-10 11:14:14 -07:00
sync_samples_optimizer.py | [rllib] [experimental] Decentralized Distributed PPO for torch (DD-PPO) (#6918) | 2020-01-25 22:36:43 -08:00
torch_distributed_data_parallel_optimizer.py | [rllib] Add Decentralized DDPPO trainer and documentation (#7088) | 2020-02-10 15:28:27 -08:00
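Most entries above were last touched by the sample_batch_size => rollout_fragment_length rename (#7503). For context, here is a minimal sketch of what that rename means for user configs, assuming Ray/RLlib of the ~0.8.x era that this listing reflects; the key names come from the commit message itself, while the algorithm choice and all numeric values are illustrative assumptions, not taken from this listing.

```python
# Minimal sketch (assumed Ray/RLlib ~0.8.x). "rollout_fragment_length"
# replaced the older "sample_batch_size" config key: each rollout worker
# collects fragments of this many env steps, which are then concatenated
# into batches of up to "train_batch_size" for optimization.
config = {
    "num_workers": 2,                # illustrative value
    "rollout_fragment_length": 200,  # was: "sample_batch_size": 200
    "train_batch_size": 4000,        # illustrative value
}

# Usable, e.g., as: ray.tune.run("PPO", config=config)
# ("PPO" is an illustrative choice of trainer here.)
```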