RLlib: Scalable Reinforcement Learning

RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
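
As a quick sketch of that API, here is a minimal training loop (illustrative only; it assumes a pip-installed Ray of this era and uses the PPOTrainer class from ray.rllib.agents.ppo with placeholder config values):

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init()

    # Train PPO on CartPole with two rollout workers.
    trainer = PPOTrainer(env="CartPole-v0", config={"num_workers": 2})

    for _ in range(3):
        result = trainer.train()  # runs one training iteration
        print("mean episode reward:", result["episode_reward_mean"])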

For an overview of RLlib, see the documentation.

If you've found RLlib useful for your research, you can cite the paper as follows:

@inproceedings{liang2018rllib,
    Author = {Eric Liang and
              Richard Liaw and
              Robert Nishihara and
              Philipp Moritz and
              Roy Fox and
              Ken Goldberg and
              Joseph E. Gonzalez and
              Michael I. Jordan and
              Ion Stoica},
    Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2018}
}

Development Install

You can develop RLlib locally without needing to compile Ray by using the setup-dev.py script. This script sets up links between the rllib directory in your git clone and the one bundled with the installed ray package. When using it, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on master and have the latest wheel installed).
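
A minimal sketch of that workflow, assuming a pip-installed Ray wheel and a clone of the ray repository (the path to setup-dev.py below reflects its conventional location in the repo; adjust it to your checkout):

    # Install the latest Ray wheel, which bundles the rllib package.
    pip install -U ray

    # From the root of the ray git clone, link this checkout's rllib
    # sources into the installed package.
    python python/ray/setup-dev.py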