Contents of ``ray/rllib/contrib``:

- ``alpha_zero/``
- ``bandits/``
- ``maddpg/``
- ``random_agent/``
- ``sumo/``
- ``__init__.py``
- ``README.rst``
- ``registry.py``

This directory contains contributed algorithms, which can be run via ``rllib train --run=contrib/<alg_name>``.
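As a minimal sketch, a contributed algorithm can also be launched from Python through Ray Tune by its registered name; the name ``contrib/RandomAgent`` and the ``CartPole-v0`` environment below are illustrative assumptions, and ``registry.py`` lists the names that are actually registered.

.. code-block:: python

    # Minimal sketch: run a contributed algorithm through Ray Tune.
    # "contrib/RandomAgent" and "CartPole-v0" are illustrative assumptions;
    # see registry.py for the trainer names that are actually registered.
    import ray
    from ray import tune

    ray.init()
    tune.run(
        "contrib/RandomAgent",            # contributed trainer looked up by name
        config={"env": "CartPole-v0"},    # any registered Gym environment id
        stop={"training_iteration": 1},   # stop early for a quick smoke test
    )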

See https://docs.ray.io/en/master/rllib-dev.html for guidelines.