ray/rllib/agents/ppo (latest commit: 2019-11-18 10:39:07 -08:00)
Name            Last commit                                                                                   Date
test/           Custom action distributions (#5164)                                                           2019-08-06 11:13:16 -07:00
__init__.py     [rllib] Try moving RLlib to top level dir (#5324)                                             2019-08-05 23:25:49 -07:00
appo.py         [rllib] Try moving RLlib to top level dir (#5324)                                             2019-08-05 23:25:49 -07:00
appo_policy.py  [rllib] Don't use flat weights in non-eager mode (#6001)                                      2019-10-31 15:16:02 -07:00
ppo.py          [rllib] Reorganize trainer config, add warnings about high VF loss magnitude for PPO (#6181)  2019-11-18 10:39:07 -08:00
ppo_policy.py   [rllib] Adds eager support with a generic TFEagerPolicy class (#5436)                         2019-08-23 14:21:11 +08:00
utils.py        [rllib] Try moving RLlib to top level dir (#5324)                                             2019-08-05 23:25:49 -07:00
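For context, ppo.py and appo.py in this directory provide the trainer classes that the package exports. Below is a minimal usage sketch, assuming the Ray 0.8.x-era API in which ray.rllib.agents.ppo exposes PPOTrainer and DEFAULT_CONFIG; the environment name and config values are illustrative, not taken from this listing.

    # Minimal sketch, assuming the Ray 0.8.x-era API exported from this package
    # (ray.rllib.agents.ppo.PPOTrainer and DEFAULT_CONFIG); values are illustrative.
    import ray
    from ray.rllib.agents.ppo import DEFAULT_CONFIG, PPOTrainer

    ray.init()

    config = DEFAULT_CONFIG.copy()
    config["num_workers"] = 1        # number of rollout worker processes
    config["train_batch_size"] = 4000

    trainer = PPOTrainer(config=config, env="CartPole-v0")
    for _ in range(3):
        result = trainer.train()     # run one training iteration
        print(result["episode_reward_mean"])

The same pattern applies to the asynchronous variant defined in appo.py, substituting its trainer class for PPOTrainer.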