RLlib: Industry-Grade Reinforcement Learning with TF and Torch
==============================================================

**RLlib** is an open-source library for reinforcement learning (RL), offering support for
production-level, highly distributed RL workloads, while maintaining
unified and simple APIs for a large variety of industry applications.

Whether you would like to train your agents in multi-agent setups,
purely from offline (historic) datasets, or using externally
connected simulators, RLlib offers simple solutions for your decision-making needs.

You **don't need** to be an **RL expert** to use RLlib, nor do you need to learn Ray or any
of its other libraries! If you either have your problem coded (in Python) as an
`RL environment <https://medium.com/distributed-computing-with-ray/anatomy-of-a-custom-environment-for-rllib-327157f269e5>`_
or own lots of pre-recorded, historic behavioral data to learn from, you will be
up and running in only a few days.

RLlib is already used in production by industry leaders in many different verticals, such as
`climate control <https://www.anyscale.com/events/2021/06/23/applying-ray-and-rllib-to-real-life-industrial-use-cases>`_,
`manufacturing and logistics <https://www.anyscale.com/events/2021/06/22/offline-rl-with-rllib>`_,
`finance <https://www.anyscale.com/events/2021/06/22/a-24x-speedup-for-reinforcement-learning-with-rllib-+-ray>`_,
`gaming <https://www.anyscale.com/events/2021/06/22/using-reinforcement-learning-to-optimize-iap-offer-recommendations-in-mobile-games>`_,
`automotive <https://www.anyscale.com/events/2021/06/23/using-rllib-in-an-enterprise-scale-reinforcement-learning-solution>`_,
`robotics <https://www.anyscale.com/events/2021/06/23/introducing-amazon-sagemaker-kubeflow-reinforcement-learning-pipelines-for>`_,
`boat design <https://www.youtube.com/watch?v=cLCK13ryTpw>`_,
and many others.


Installation and Setup
----------------------

Install RLlib and run your first experiment on your laptop in seconds:

**TensorFlow:**

.. code-block:: bash

    $ conda create -n rllib python=3.8
    $ conda activate rllib
    $ pip install "ray[rllib]" tensorflow "gym[atari]" "gym[accept-rom-license]" atari_py
    $ # Run a test job:
    $ rllib train --run APPO --env CartPole-v0


**PyTorch:**

.. code-block:: bash

    $ conda create -n rllib python=3.8
    $ conda activate rllib
    $ pip install "ray[rllib]" torch "gym[atari]" "gym[accept-rom-license]" atari_py
    $ # Run a test job:
    $ rllib train --run APPO --env CartPole-v0 --torch


Quick First Experiment
----------------------

.. code-block:: python

    import gym
    from ray.rllib.agents.ppo import PPOTrainer


    # Define your problem using Python and OpenAI's gym API:
    class ParrotEnv(gym.Env):
        """Environment in which an agent must learn to repeat the seen observations.

        Observations are float numbers indicating the to-be-repeated values,
        e.g. -1.0, 5.1, or 3.2.

        The action space is always the same as the observation space.

        Rewards are r=-abs(observation - action), for all steps.
        """

        def __init__(self, config):
            # Make the space (for actions and observations) configurable.
            self.action_space = config.get(
                "parrot_shriek_range", gym.spaces.Box(-1.0, 1.0, shape=(1, )))
            # Since actions should repeat observations, their spaces must be the
            # same.
            self.observation_space = self.action_space
            self.cur_obs = None
            self.episode_len = 0

        def reset(self):
            """Resets the episode and returns the initial observation of the new one.
            """
            # Reset the episode len.
            self.episode_len = 0
            # Sample a random number from our observation space.
            self.cur_obs = self.observation_space.sample()
            # Return initial observation.
            return self.cur_obs

        def step(self, action):
            """Takes a single step in the episode given `action`

            Returns:
                New observation, reward, done-flag, info-dict (empty).
            """
            # Set `done` flag after 10 steps.
            self.episode_len += 1
            done = self.episode_len >= 10
            # r = -abs(obs - action)
            reward = -sum(abs(self.cur_obs - action))
            # Set a new observation (random sample).
            self.cur_obs = self.observation_space.sample()
            return self.cur_obs, reward, done, {}


    # Create an RLlib Trainer instance to learn how to act in the above
    # environment.
    trainer = PPOTrainer(
        config={
            # Env class to use (here: our gym.Env sub-class from above).
            "env": ParrotEnv,
            # Config dict to be passed to our custom env's constructor.
            "env_config": {
                "parrot_shriek_range": gym.spaces.Box(-5.0, 5.0, (1, ))
            },
            # Parallelize environment rollouts.
            "num_workers": 3,
        })

    # Train for n iterations and report results (mean episode rewards).
    # Each episode lasts 10 steps, and the per-step optimum is a reward of 0.0
    # (exact match between observation and action value), so the best
    # achievable episode reward is 0.0.
    for i in range(5):
        results = trainer.train()
        print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")


After training, you may want to perform action computations (inference) in your environment.
Below is a minimal example of how to do this. Also
`check out our more detailed examples here <https://github.com/ray-project/ray/tree/master/rllib/examples/inference_and_serving>`_
(in particular for `normal models <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training.py>`_,
`LSTMs <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_lstm.py>`_,
and `attention nets <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_attention.py>`_).


.. code-block:: python

    # Perform inference (action computations) based on given env observations.
    # Note that we are using a slightly different env here (observation range
    # -3.0 to 3.0 instead of -5.0 to 5.0); however, this should still work, as
    # the agent has (hopefully) learned to "just always repeat the observation!"
    env = ParrotEnv({"parrot_shriek_range": gym.spaces.Box(-3.0, 3.0, (1, ))})
    # Get the initial observation (some value between -3.0 and 3.0).
    obs = env.reset()
    done = False
    total_reward = 0.0
    # Play one episode.
    while not done:
        # Compute a single action, given the current observation
        # from the environment.
        action = trainer.compute_single_action(obs)
        # Apply the computed action in the environment.
        obs, reward, done, info = env.step(action)
        # Sum up rewards for reporting purposes.
        total_reward += reward
    # Report results.
    print(f"Shreaked for 1 episode; total-reward={total_reward}")


For a more detailed `"60 second" example, head to our main documentation <https://docs.ray.io/en/master/rllib/index.html>`_.


Highlighted Features
--------------------

The following is a summary of RLlib's most striking features (for an in-depth overview,
check out our `documentation <http://docs.ray.io/en/master/rllib/index.html>`_):

The most **popular deep-learning frameworks**: `PyTorch <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_torch_policy.py>`_ and `TensorFlow
(tf1.x/2.x static-graph/eager/traced) <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_tf_policy.py>`_.
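
For example, switching frameworks only requires changing the ``framework`` config key.
A minimal sketch (the use of ``CartPole-v0`` here is just an arbitrary illustration):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer

    # A minimal sketch: the same Trainer config can target either framework
    # by setting the "framework" key ("tf", "tf2", or "torch").
    trainer = PPOTrainer(
        config={
            "env": "CartPole-v0",   # any registered env or gym.Env sub-class
            "framework": "torch",   # switch to "tf2" for eager TensorFlow
        })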

**Highly distributed learning**: Our RLlib algorithms (such as "PPO" or "IMPALA")
allow you to set the ``num_workers`` config parameter, such that your workloads can run
on 100s of CPUs/nodes, thus parallelizing and speeding up learning.
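
For example, the following sketch (the worker count and stopping criterion are arbitrary
choices for illustration) launches a PPO run via Ray Tune with parallel sample collection:

.. code-block:: python

    from ray import tune

    # A rough sketch: each of the `num_workers` rollout workers is its own
    # Ray actor (process), so experience collection happens in parallel.
    tune.run(
        "PPO",
        config={
            "env": "CartPole-v0",
            "num_workers": 32,  # arbitrary; scale to the CPUs/nodes you have
        },
        stop={"training_iteration": 10},
    )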

**Vectorized (batched) and remote (parallel) environments**: RLlib auto-vectorizes
your ``gym.Envs`` via the ``num_envs_per_worker`` config. Environment workers can
then batch and thus significantly speed up the action-computing forward pass.
On top of that, RLlib offers the ``remote_worker_envs`` config to create
`single environments (within a vectorized one) as ray Actors <https://github.com/ray-project/ray/blob/master/rllib/examples/remote_base_env_with_custom_api.py>`_,
thus parallelizing even the env stepping process.
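
A rough sketch of both options combined (the exact numbers are arbitrary):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer

    # A rough sketch: each rollout worker steps 8 env copies as one vectorized
    # (batched) env; `remote_worker_envs=True` additionally makes every single
    # env copy its own Ray actor, so env stepping itself runs in parallel.
    trainer = PPOTrainer(
        config={
            "env": "CartPole-v0",
            "num_workers": 2,
            "num_envs_per_worker": 8,
            "remote_worker_envs": True,
        })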

| **Multi-agent RL** (MARL): Convert your (custom) ``gym.Envs`` into a multi-agent one
  via a few simple steps and start training your agents in any of the following fashions:
| 1) Cooperative with `shared <https://github.com/ray-project/ray/blob/master/rllib/examples/centralized_critic.py>`_ or
  `separate <https://github.com/ray-project/ray/blob/master/rllib/examples/two_step_game.py>`_
  policies and/or value functions.
| 2) Adversarial scenarios using `self-play <https://github.com/ray-project/ray/blob/master/rllib/examples/self_play_with_open_spiel.py>`_
  and `league-based training <https://github.com/ray-project/ray/blob/master/rllib/examples/self_play_league_based_with_open_spiel.py>`_.
| 3) `Independent learning <https://github.com/ray-project/ray/blob/master/rllib/examples/multi_agent_independent_learning.py>`_
  of neutral/co-existing agents.
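
A rough sketch of what such a multi-agent setup looks like (``MultiAgentCartPole`` is one
of RLlib's example envs; the two-policy layout and the mapping function are just
illustrative assumptions):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer
    from ray.rllib.examples.env.multi_agent import MultiAgentCartPole
    from ray.rllib.policy.policy import PolicySpec

    # A rough sketch: two agents act in the same env; each one is mapped to
    # its own (separately trained) PPO policy via the policy_mapping_fn.
    trainer = PPOTrainer(
        config={
            "env": MultiAgentCartPole,
            "env_config": {"num_agents": 2},
            "multiagent": {
                # Empty PolicySpec -> infer class/spaces/config from the Trainer.
                "policies": {
                    "policy_0": PolicySpec(),
                    "policy_1": PolicySpec(),
                },
                # Agent IDs here are the ints 0 and 1 -> map them to the two policies.
                "policy_mapping_fn": (
                    lambda agent_id, episode, worker, **kwargs: f"policy_{agent_id}"
                ),
            },
        })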


**External simulators**: Don't have your simulation running as a gym.Env in Python?
No problem! RLlib supports an external environment API and comes with a pluggable,
off-the-shelf
`client <https://github.com/ray-project/ray/blob/master/rllib/examples/serving/cartpole_client.py>`_/
`server <https://github.com/ray-project/ray/blob/master/rllib/examples/serving/cartpole_server.py>`_
setup that allows you to run 100s of independent simulators on the "outside"
(e.g. a Windows cloud) connecting to a central RLlib Policy-Server that learns
and serves actions. Alternatively, actions can be computed on the client side
to save on network traffic.
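
On the client side, the loop looks roughly like the sketch below (the server address/port
and using a local ``CartPole-v0`` copy as the "external" simulator are assumptions for
illustration; a matching policy server would have to be running first):

.. code-block:: python

    import gym
    from ray.rllib.env.policy_client import PolicyClient

    # A rough sketch of the client loop: the "external" simulator (here simply
    # a local CartPole copy) asks a remote RLlib policy server for actions and
    # reports the collected rewards back to it.
    env = gym.make("CartPole-v0")
    client = PolicyClient("http://localhost:9900", inference_mode="remote")

    obs = env.reset()
    episode_id = client.start_episode(training_enabled=True)
    done = False
    while not done:
        # Ask the policy server for an action, given the current observation.
        action = client.get_action(episode_id, obs)
        obs, reward, done, info = env.step(action)
        # Report the reward so the server can learn from it.
        client.log_returns(episode_id, reward)
    client.end_episode(episode_id, obs)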

**Offline RL and imitation learning/behavior cloning**: You don't have a simulator
for your particular problem, but tons of historic data recorded by a legacy (maybe
non-RL/ML) system? This branch of reinforcement learning is for you!
RLlib comes with several `offline RL <https://github.com/ray-project/ray/blob/master/rllib/examples/offline_rl.py>`_
algorithms (*CQL*, *MARWIL*, and *DQfD*), allowing you to either purely
`behavior-clone <https://github.com/ray-project/ray/blob/master/rllib/agents/marwil/tests/test_bc.py>`_
your existing system or learn how to further improve over it.
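
A rough sketch of an offline-data setup (the JSON path is a placeholder for experience
data you would have recorded yourself, e.g. via RLlib's ``output`` config option):

.. code-block:: python

    from ray.rllib.agents.marwil import MARWILTrainer

    # A rough sketch: learn purely from previously recorded experience data.
    # "/tmp/cartpole-out" is a placeholder for JSON files produced earlier,
    # e.g. by another Trainer run with the "output" config option set.
    trainer = MARWILTrainer(
        config={
            "env": "CartPole-v0",          # only used to infer obs/action spaces
            "input": "/tmp/cartpole-out",  # read sample batches from these files
            "input_evaluation": [],        # no off-policy estimation in this sketch
            "beta": 0.0,                   # beta=0.0 turns MARWIL into plain BC
        })

    for _ in range(5):
        print(trainer.train())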


In-Depth Documentation
----------------------

For an in-depth overview of RLlib and everything it has to offer, including
hands-on tutorials of important industry use cases and workflows, head over to
our `documentation pages <https://docs.ray.io/en/master/rllib/index.html>`_.


Cite our Paper
--------------

If you've found RLlib useful for your research, please cite our `paper <https://arxiv.org/abs/1712.09381>`_ as follows:

.. code-block::

    @inproceedings{liang2018rllib,
        Author = {Eric Liang and
                  Richard Liaw and
                  Robert Nishihara and
                  Philipp Moritz and
                  Roy Fox and
                  Ken Goldberg and
                  Joseph E. Gonzalez and
                  Michael I. Jordan and
                  Ion Stoica},
        Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
        Booktitle = {International Conference on Machine Learning ({ICML})},
        Year = {2018}
    }