RLlib: Scalable Reinforcement Learning
======================================

RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.

For an overview of RLlib, see the documentation.
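As a quick illustration of the unified API, experiments can be described declaratively in a YAML file and launched with the ``rllib train`` command. The snippet below is a minimal sketch in the style of the configs under ``tuned_examples/``; the experiment name, stopping criterion, and worker count are illustrative values, not a recommended configuration:

```yaml
# Hypothetical minimal experiment config (cartpole-ppo.yaml).
cartpole-ppo:
    env: CartPole-v0        # Gym environment to train on
    run: PPO                # algorithm (Trainer) to use
    stop:
        episode_reward_mean: 150   # stop once this mean reward is reached
    config:
        num_workers: 1      # number of parallel rollout workers
```

A config like this would be run with ``rllib train -f cartpole-ppo.yaml``.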

If you've found RLlib useful for your research, you can cite the paper as follows:

@inproceedings{liang2018rllib,
    Author = {Eric Liang and
              Richard Liaw and
              Robert Nishihara and
              Philipp Moritz and
              Roy Fox and
              Ken Goldberg and
              Joseph E. Gonzalez and
              Michael I. Jordan and
              Ion Stoica},
    Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2018}
}