ray/rllib/agents
| Name | Last commit | Date |
| --- | --- | --- |
| `a3c` | [rllib] Add microbatch optimizer with A2C example (#6161) | 2019-11-14 12:14:00 -08:00 |
| `ars` | [rllib] Autoregressive action distributions (#5304) | 2019-08-10 14:05:12 -07:00 |
| `ddpg` | Revert "[rllib] Port DDPG to the build_tf_policy pattern" (#5626) | 2019-09-04 21:39:22 -07:00 |
| `dqn` | [rllib] Don't use flat weights in non-eager mode (#6001) | 2019-10-31 15:16:02 -07:00 |
| `es` | [rllib] Autoregressive action distributions (#5304) | 2019-08-10 14:05:12 -07:00 |
| `impala` | [rllib] Adds eager support with a generic TFEagerPolicy class (#5436) | 2019-08-23 14:21:11 +08:00 |
| `marwil` | [rllib] Autoregressive action distributions (#5304) | 2019-08-10 14:05:12 -07:00 |
| `pg` | [rllib] Adds eager support with a generic TFEagerPolicy class (#5436) | 2019-08-23 14:21:11 +08:00 |
| `ppo` | [rllib] Don't use flat weights in non-eager mode (#6001) | 2019-10-31 15:16:02 -07:00 |
| `qmix` | rllib: use pytorch's fn to see if gpu is available (#5890) | 2019-10-12 00:13:00 -07:00 |
| `sac` | [rllib] Don't use flat weights in non-eager mode (#6001) | 2019-10-31 15:16:02 -07:00 |
| `__init__.py` | [rllib] Try moving RLlib to top level dir (#5324) | 2019-08-05 23:25:49 -07:00 |
| `agent.py` | [rllib] Try moving RLlib to top level dir (#5324) | 2019-08-05 23:25:49 -07:00 |
| `mock.py` | Ray, Tune, and RLlib support for memory, object_store_memory options (#5226) | 2019-08-21 23:01:10 -07:00 |
| `registry.py` | [rllib] Try moving RLlib to top level dir (#5324) | 2019-08-05 23:25:49 -07:00 |
| `trainer.py` | Reduce RLlib log verbosity (#6154) | 2019-11-13 18:50:45 -08:00 |
| `trainer_template.py` | [rllib] Adds eager support with a generic TFEagerPolicy class (#5436) | 2019-08-23 14:21:11 +08:00 |
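Each subdirectory above packages one algorithm, and `registry.py` maps an algorithm's string name to its Trainer class. As a minimal sketch of how these agents were typically used in this era of RLlib (Ray ~0.8; the environment name and config values are illustrative):

```python
import ray
from ray.rllib.agents.registry import get_agent_class

ray.init()

# Resolve the "PPO" name via registry.py to the trainer class defined
# under agents/ppo, then run a single training iteration.
trainer_cls = get_agent_class("PPO")
trainer = trainer_cls(env="CartPole-v0", config={"num_workers": 1})
result = trainer.train()
print(result["episode_reward_mean"])
```

Equivalently, the class could be imported directly, e.g. `from ray.rllib.agents.ppo import PPOTrainer`.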