ray/rllib/execution
buffers/                       2022-03-15 17:34:21 +01:00   [Lint] Cleanup incorrectly formatted strings (Part 1: RLLib). (#23128)
tests/                         2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
__init__.py                    2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
common.py                      2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
concurrency_ops.py             2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
learner_thread.py              2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
metric_ops.py                  2022-02-02 17:28:42 +01:00   [RLlib] Neural-MMO keep_per_episode_custom_metrics patch (toward making Neuro-MMO RLlib's default massive-multi-agent learning test environment). (#22042)
multi_gpu_impl.py              2021-11-05 16:10:00 +01:00   [RLlib] Tf2 + eager-tracing same speed as framework=tf; Add more test coverage for tf2+tracing. (#19981)
multi_gpu_learner.py           2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
multi_gpu_learner_thread.py    2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
parallel_requests.py           2022-02-10 13:44:22 +01:00   Revert "Revert "[RLlib] Speedup A3C up to 3x (new training_iteration function instead of execution_plan) and re-instate Pong learning test."" (#18708)
replay_ops.py                  2022-03-15 17:34:21 +01:00   [Lint] Cleanup incorrectly formatted strings (Part 1: RLLib). (#23128)
rollout_ops.py                 2022-03-15 17:34:21 +01:00   [Lint] Cleanup incorrectly formatted strings (Part 1: RLLib). (#23128)
segment_tree.py                2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)
train_ops.py                   2022-03-15 17:34:21 +01:00   [Lint] Cleanup incorrectly formatted strings (Part 1: RLLib). (#23128)
tree_agg.py                    2022-01-29 18:41:57 -08:00   [CI] Format Python code with Black (#21975)