# RLlib: Scalable Reinforcement Learning
RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
For an overview of RLlib, see the documentation.
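For a quick sense of the unified API, training can be launched directly from RLlib's command-line entry point. A minimal sketch (the algorithm and environment names here are illustrative, and flag syntax may differ between RLlib versions):

```bash
# Train PPO on the classic CartPole gym environment.
rllib train --run PPO --env CartPole-v0
```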
If you've found RLlib useful for your research, you can cite the paper as follows:
```bibtex
@inproceedings{liang2018rllib,
    Author = {Eric Liang and
              Richard Liaw and
              Robert Nishihara and
              Philipp Moritz and
              Roy Fox and
              Ken Goldberg and
              Joseph E. Gonzalez and
              Michael I. Jordan and
              Ion Stoica},
    Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2018}
}
```
## Development Install
You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py` script. This sets up symlinks between the `rllib` directory in your git repo and the one bundled with the installed `ray` package. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up to date on master and have the latest wheel installed).
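A minimal sketch of that workflow, assuming a pip-installed Ray wheel and a clone of the upstream repo (the script path below matches the Ray source tree, but check your checkout if it has moved):

```bash
# Install a Ray wheel first so binaries are available; for development
# against master, use the latest nightly wheel from the Ray docs.
pip install -U ray

# Clone the Ray sources and make sure the branch tracks master.
git clone https://github.com/ray-project/ray.git
cd ray
git checkout master && git pull

# Symlink this repo's rllib dir over the one bundled with the installed
# ray package, so local edits take effect without recompiling Ray.
python python/ray/setup-dev.py
```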