RLlib: Scalable Reinforcement Learning

RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
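
To give a flavor of that unified API, here is a minimal training loop for PPO (a sketch only: it assumes Ray and RLlib are installed, and the PPOTrainer import and config keys follow the Ray 1.x API, which may differ in other versions):

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init()

    # Build a trainer for a registered Gym environment.
    trainer = PPOTrainer(
        env="CartPole-v0",
        config={
            "framework": "torch",  # or "tf"
            "num_workers": 2,      # number of parallel rollout workers
        },
    )

    # Each call to train() runs one iteration of sampling and optimization.
    for _ in range(3):
        result = trainer.train()
        print(result["episode_reward_mean"])

The same pattern applies to the other built-in algorithms, since they all share the same Trainer interface.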

For an overview of RLlib, see the documentation.

If you've found RLlib useful for your research, you can cite the paper as follows:

@inproceedings{liang2018rllib,
    Author = {Eric Liang and
              Richard Liaw and
              Robert Nishihara and
              Philipp Moritz and
              Roy Fox and
              Ken Goldberg and
              Joseph E. Gonzalez and
              Michael I. Jordan and
              Ion Stoica},
    Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2018}
}

Development Install

You can develop RLlib locally without needing to compile Ray, by using the setup-dev.py script. This sets up links between the rllib directory in your git repo and the one bundled with the installed ray package. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up to date on master and have the latest wheel installed).
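
A typical workflow might look like the following (a sketch; the script path assumes a standard Ray checkout, and you should install the wheel that matches your branch, e.g., the latest nightly for master):

    pip install -U ray                # install a Ray wheel matching your branch
    git clone https://github.com/ray-project/ray.git
    cd ray
    python python/ray/setup-dev.py    # link this repo's rllib into the installed package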