ray/rllib/evaluation (latest commit: 2021-07-01 19:17:54 -07:00)
Name                    | Latest commit message                                                                                      | Commit date
collectors              | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569)                                          | 2021-06-21 13:46:01 +02:00
tests                   | [RLlib] CQL BC loss fixes; PPO/PG/A2|3C action normalization fixes (#16531)                                | 2021-06-30 12:32:11 +02:00
__init__.py             | [RLlib] Sample batch docs and cleanup. (#8778)                                                             | 2020-06-04 22:47:32 +02:00
episode.py              | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569)                                          | 2021-06-21 13:46:01 +02:00
metrics.py              | [RLlib] Handle array custom metrics correctly in evaluate (#15190)                                         | 2021-05-04 13:25:28 +02:00
observation_function.py | [RLlib] Fix two test cases that only fail on Travis. (#11435)                                              | 2020-10-16 13:53:30 -05:00
postprocessing.py       | [RLlib] Support native tf.keras.Models (part 2) - Default keras models for Vision/RNN/Attention. (#15273)  | 2021-04-30 19:26:30 +02:00
rollout_metrics.py      | [RLLib] Episode media logging support (#14767)                                                             | 2021-03-19 09:17:09 +01:00
rollout_worker.py       | [Rllib] Torch Backwards Compatibility (#16813)                                                             | 2021-07-01 19:17:54 -07:00
sample_batch_builder.py | [RLlib] simple_optimizer should not be used by default for tf+MA. (#15365)                                 | 2021-05-10 16:10:44 +02:00
sampler.py              | [RLlib] CQL BC loss fixes; PPO/PG/A2|3C action normalization fixes (#16531)                                | 2021-06-30 12:32:11 +02:00
worker_set.py           | [rllib] d4rl: fix for paths with multiple periods (#16721)                                                 | 2021-07-01 18:35:50 -07:00
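For orientation, the sketch below shows how the main pieces in this directory are typically wired together: rollout_worker.py provides RolloutWorker, which drives sampler.py and sample_batch_builder.py to turn environment steps into SampleBatches, while episode.py and metrics.py track per-episode statistics and worker_set.py groups workers for a Trainer. This is a minimal sketch, assuming the Ray 1.x-era RLlib API of these commit dates (RolloutWorker taking env_creator and policy_spec, PGTFPolicy used only as an example policy); the exact constructor arguments are an assumption, not taken from this listing.

```python
# Minimal sketch; assumes the Ray 1.x-era RLlib API (env_creator / policy_spec
# constructor args) and uses PGTFPolicy purely as an example policy class.
import gym

from ray.rllib.agents.pg.pg_tf_policy import PGTFPolicy
from ray.rllib.evaluation.rollout_worker import RolloutWorker

# rollout_worker.py: wraps an environment plus a policy; sampler.py and
# sample_batch_builder.py turn rollouts into SampleBatches, while
# episode.py / metrics.py track per-episode statistics.
worker = RolloutWorker(
    env_creator=lambda _: gym.make("CartPole-v0"),
    policy_spec=PGTFPolicy,
)

# One call collects a fragment of experience and returns a SampleBatch.
batch = worker.sample()
print(batch.count)
```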