ray/rllib
Commit 0db2046b0a by Sven Mika
[RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124)
* Exploration API (+EpsilonGreedy sub-class).

* Exploration API (+EpsilonGreedy sub-class).

* Cleanup/LINT.

* Add `deterministic` to generic Trainer config (NOTE: this is still ignored by most Agents).

* Add `error` option to deprecation_warning().

* WIP.

* Bug fix: Get exploration-info for tf framework.
Bug fix: Properly deprecate some DQN config keys.

* WIP.

* LINT.

* WIP.

* Split PerWorkerEpsilonGreedy out of EpsilonGreedy.
Docstrings.

* Fix bug in sampler.py in case Policy has self.exploration = None

* Update rllib/agents/dqn/dqn.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* WIP.

* Update rllib/agents/trainer.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* WIP.

* Address review change requests.

* LINT

* In tune/utils/util.py::deep_update(): only keep deep-updating if both original and value are dicts. If value is not a dict, set it directly.

* Completely obsolete sync_replay_optimizer.py's parameters schedule_max_timesteps and beta_annealing_fraction (replaced with prioritized_replay_beta_annealing_timesteps).

* Update rllib/evaluation/worker_set.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* Review fixes.

* Fix default value for DQN's exploration spec.

* LINT

* Fix recursion bug (wrong parent c'tor).

* Do not pass timestep to get_exploration_info.

* Update tf_policy.py

* Fix some remaining issues with test cases and remove more deprecated DQN/APEX exploration configs.

* Bug fix: tf action distribution.

* Fix DDPG incompatibility with the new DQN exploration handling (which DDPG imports).

* Switch off exploration when getting action probs from off-policy-estimator's policy.

* LINT

* Fix test_checkpoint_restore.py.

* Deprecate all SAC exploration (unused) configs.

* Properly use `model.last_output()` everywhere instead of `model._last_output`.

* WIP.

* Take out set_epsilon from multi-agent-env test (not needed, decays anyway).

* WIP.

* Trigger re-test (flaky checkpoint-restore test).

* WIP.

* WIP.

* Add test case for deterministic action sampling in PPO.

* bug fix.

* Added deterministic test cases for different Agents.

* Fix problem with TupleActions in dynamic-tf-policy.

* Separate supported_spaces tests so they can be run separately for easier debugging.

* LINT.

* Fix autoregressive_action_dist.py test case.

* Re-test.

* Fix.

* Remove duplicate py_test rule from bazel.

* LINT.

* WIP.

* WIP.

* SAC fix.

* SAC fix.

* WIP.

* WIP.

* WIP.

* Fix two example tests.

* WIP.

* WIP.

* WIP.

* WIP.

* WIP.

* Fix.

* LINT.

* Renamed test file.

* WIP.

* Add unittest.main.

* Make action_dist_class mandatory.

* fix

* FIX.

* WIP.

* WIP.

* Fix.

* Fix.

* Fix explorations test case (contextlib cannot find its own nullcontext??).

* Force torch to be installed for QMIX.

* LINT.

* Fix determine_tests_to_run.py.

* Fix determine_tests_to_run.py.

* WIP

* Add Random exploration component to tests (fixed issue with "static-graph randomness" via py_function).

* Add Random exploration component to tests (fixed issue with "static-graph randomness" via py_function).

* Rename some stuff.

* Rename some stuff.

* WIP.

* WIP.

* Fix SAC.

* Fix SAC.

* Fix strange tf-error in ray core tests.

* Fix strange ray-core tf-error in test_memory_scheduling test case.

* Fix test_io.py.

* LINT.

* Update SAC yaml files' config.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
2020-02-22 14:19:49 -08:00
agents         | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
contrib        | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
env            | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
evaluation     | [rllib] [experimental] custom RL training pipelines (PG_pl, A2C_pl) (#7213) | 2020-02-19 16:07:37 -08:00
examples       | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
models         | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
offline        | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
optimizers     | [rllib] Fix bad sample count assert | 2020-02-15 17:22:23 -08:00
policy         | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
tests          | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
tuned_examples | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
utils          | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
__init__.py    | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
asv.conf.json  | [rllib] Try moving RLlib to top level dir (#5324) | 2019-08-05 23:25:49 -07:00
BUILD          | [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) | 2020-02-22 14:19:49 -08:00
README.md      | MADDPG implementation in RLlib (#5348) | 2019-08-06 16:22:06 -07:00
rollout.py     | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
scripts.py     | Remove future imports (#6724) | 2020-01-09 00:15:48 -08:00
train.py       | [RLlib] Move all jenkins RLlib-tests into bazel (rllib/BUILD). (#7178) | 2020-02-15 14:50:44 -08:00

RLlib: Scalable Reinforcement Learning

RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.

For an overview of RLlib, see the documentation.
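A minimal training loop gives a flavor of that unified API. This is only a sketch: the PPOTrainer class, the "num_workers" config key, and the CartPole-v0 environment id are assumptions based on typical RLlib usage around this release, so check the documentation for the exact API of your version.

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init()
    # Build a PPO trainer on a toy Gym environment with two rollout workers.
    trainer = PPOTrainer(env="CartPole-v0", config={"num_workers": 2})
    for _ in range(3):
        result = trainer.train()  # one training iteration; returns a result dict
        print(result["episode_reward_mean"])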

If you've found RLlib useful for your research, you can cite the paper as follows:

@inproceedings{liang2018rllib,
    Author = {Eric Liang and
              Richard Liaw and
              Robert Nishihara and
              Philipp Moritz and
              Roy Fox and
              Ken Goldberg and
              Joseph E. Gonzalez and
              Michael I. Jordan and
              Ion Stoica},
    Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2018}
}

Development Install

You can develop RLlib locally without needing to compile Ray by using the setup-dev.py script. This sets up links between the rllib dir in your git repo and the one bundled with the installed ray package. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up to date on master and have the latest wheel installed).
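A typical invocation looks like this (a sketch; the script's exact location inside the Ray repo is an assumption, so verify the path in your checkout first):

    # Run from the root of your Ray git clone, after pip-installing a Ray wheel.
    python python/ray/setup-dev.py

The script replaces the rllib (and other) directories of the installed ray package with links into your checkout, so local edits take effect without reinstalling.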