[RLlib] Model documentation enhancements. (#10011)

This commit is contained in:
Sven Mika 2020-08-13 13:36:40 +02:00 committed by GitHub
parent 0effcda3e4
commit 66d204e078
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
4 changed files with 157 additions and 124 deletions

View file

@ -97,19 +97,31 @@ Multi-Agent and Hierarchical
Community Examples
------------------
- `Arena AI <https://sites.google.com/view/arena-unity/home>`__:
A General Evaluation Platform and Building Toolkit for Single/Multi-Agent Intelligence
with RLlib-generated baselines.
- `CARLA <https://github.com/layssi/Carla_Ray_Rlib>`__:
Example of training autonomous vehicles with RLlib and `CARLA <http://carla.org/>`__ simulator.
- `The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning <https://arxiv.org/pdf/2008.02616.pdf>`__:
Using Graph Neural Networks and RLlib to train multiple cooperative and adversarial agents to solve the
"cover the area"-problem, thereby learning how to best communicate (or - in the adversarial case - how to disturb communication).
- `Flatland <https://flatland.aicrowd.com/intro.html>`__:
A dense traffic simulating environment with RLlib-generated baselines.
- `GFootball <https://github.com/google-research/football/blob/master/gfootball/examples/run_multiagent_rllib.py>`__:
Example of setting up a multi-agent version of `GFootball <https://github.com/google-research>`__ with RLlib.
- `Neural MMO <https://jsuarez5341.github.io/neural-mmo/build/html/rst/userguide.html>`__:
A multiagent AI research environment inspired by Massively Multiplayer Online (MMO) role playing games
self-contained worlds featuring thousands of agents per persistent macrocosm, diverse skilling systems, local and global economies, complex emergent social structures,
and ad-hoc high-stakes single and team based conflict.
- `NeuroCuts <https://github.com/neurocuts/neurocuts>`__:
Example of building packet classification trees using RLlib / multi-agent in a bandit-like setting.
- `NeuroVectorizer <https://github.com/ucb-bar/NeuroVectorizer>`__:
Example of learning optimal LLVM vectorization compiler pragmas for loops in C and C++ codes using RLlib.
- `Roboschool / SageMaker <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/reinforcement_learning/rl_roboschool_ray>`__:
Example of training robotic control policies in SageMaker with RLlib.
- `Sequential Social Dilemma Games <https://github.com/eugenevinitsky/sequential_social_dilemma_games>`__:
Example of using the multi-agent API to model several `social dilemma games <https://arxiv.org/abs/1702.03037>`__.
- `StarCraft2 <https://github.com/oxwhirl/smac>`__:
Example of training in StarCraft2 maps with RLlib / multi-agent.
- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/latest/flow_setup.html>`__:
Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.
- `Sequential Social Dilemma Games <https://github.com/eugenevinitsky/sequential_social_dilemma_games>`__:
Example of using the multi-agent API to model several `social dilemma games <https://arxiv.org/abs/1702.03037>`__.

View file

@ -11,14 +11,29 @@ The components highlighted in green can be replaced with custom user-defined imp
Default Behaviours
------------------
Built-in Models and Preprocessors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Built-in Preprocessors
~~~~~~~~~~~~~~~~~~~~~~
RLlib picks default models based on a simple heuristic: a `vision network <https://github.com/ray-project/ray/blob/master/rllib/models/tf/visionnet_v1.py>`__ for observations that have shape of length larger than 2 (for example, (84 x 84 x 3)), and a `fully connected network <https://github.com/ray-project/ray/blob/master/rllib/models/tf/fcnet_v1.py>`__ for everything else. These models can be configured via the ``model`` config key, documented in the model `catalog <https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py>`__. Note that you'll probably have to configure ``conv_filters`` if your environment observations have custom sizes, e.g., ``"model": {"dim": 42, "conv_filters": [[16, [4, 4], 2], [32, [4, 4], 2], [512, [11, 11], 1]]}`` for 42x42 observations.
RLlib tries to pick one of its built-in preprocessor based on the environment's observation space.
Discrete observations are one-hot encoded, Atari observations downscaled, and Tuple and Dict observations flattened (these are unflattened and accessible via the ``input_dict`` parameter in custom models).
Note that for Atari, RLlib defaults to using the `DeepMind preprocessors <https://github.com/ray-project/ray/blob/master/rllib/env/atari_wrappers.py>`__, which are also used by the OpenAI baselines library.
In addition, if you set ``"model": {"use_lstm": true}``, then the model output will be further processed by a `LSTM cell <https://github.com/ray-project/ray/blob/master/rllib/models/tf/lstm_v1.py>`__. More generally, RLlib supports the use of recurrent models for its policy gradient algorithms (A3C, PPO, PG, IMPALA), and RNN support is built into its policy evaluation utilities.
Built-in Models
~~~~~~~~~~~~~~~
For preprocessors, RLlib tries to pick one of its built-in preprocessor based on the environment's observation space. Discrete observations are one-hot encoded, Atari observations downscaled, and Tuple and Dict observations flattened (these are unflattened and accessible via the ``input_dict`` parameter in custom models). Note that for Atari, RLlib defaults to using the `DeepMind preprocessors <https://github.com/ray-project/ray/blob/master/rllib/env/atari_wrappers.py>`__, which are also used by the OpenAI baselines library.
After preprocessing raw environment outputs, these preprocessed observations are then fed through a policy's model.
RLlib picks default models based on a simple heuristic: A vision network (`TF <https://github.com/ray-project/ray/blob/master/rllib/models/tf/visionnet.py>`__ or `Torch <https://github.com/ray-project/ray/blob/master/rllib/models/torch/visionnet.py>`__)
for observations that have a shape of length larger than 2 (for example, (84 x 84 x 3)),
and a fully connected network (`TF <https://github.com/ray-project/ray/blob/master/rllib/models/tf/fcnet.py>`__ or `Torch <https://github.com/ray-project/ray/blob/master/rllib/models/torch/fcnet.py>`__)
for everything else. These models can be configured via the ``model`` config key, documented in the model `catalog <https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py>`__.
Note that for the vision network case, you'll probably have to configure ``conv_filters`` if your environment observations
have custom sizes, e.g., ``"model": {"dim": 42, "conv_filters": [[16, [4, 4], 2], [32, [4, 4], 2], [512, [11, 11], 1]]}`` for 42x42 observations.
Thereby, always make sure that the last Conv2D output has an output shape of `[B, 1, 1, X]` (`[B, X, 1, 1]` for Torch), where B=batch and
X=last Conv2D layer's number of filters, so that RLlib can flatten it. An informative error will be thrown if this is not the case.
In addition, if you set ``"model": {"use_lstm": true}``, the model output will be further processed by an LSTM cell (`TF <https://github.com/ray-project/ray/blob/master/rllib/models/tf/recurrent_net.py>`__ or `Torch <https://github.com/ray-project/ray/blob/master/rllib/models/torch/recurrent_net.py>`__).
More generally, RLlib supports the use of recurrent models for its policy gradient algorithms (A3C, PPO, PG, IMPALA), and RNN support is built into its policy evaluation utilities.
For custom RNN/LSTM setups, see the `Recurrent Models`_. section below.
Built-in Model Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~
@ -35,9 +50,11 @@ TensorFlow Models
.. note::
TFModelV2 replaces the previous ``rllib.models.Model`` class, which did not support Keras-style reuse of variables. The ``rllib.models.Model`` class is deprecated and should not be used.
TFModelV2 replaces the previous ``rllib.models.Model`` class, which did not support Keras-style reuse of variables. The ``rllib.models.Model`` class (aka "ModelV1") is deprecated and should no longer be used.
Custom TF models should subclass `TFModelV2 <https://github.com/ray-project/ray/blob/master/rllib/models/tf/tf_modelv2.py>`__ to implement the ``__init__()`` and ``forward()`` methods. Forward takes in a dict of tensor inputs (the observation ``obs``, ``prev_action``, and ``prev_reward``, ``is_training``), optional RNN state, and returns the model output of size ``num_outputs`` and the new state. You can also override extra methods of the model such as ``value_function`` to implement a custom value branch. Additional supervised / self-supervised losses can be added via the ``custom_loss`` method:
Custom TF models should subclass `TFModelV2 <https://github.com/ray-project/ray/blob/master/rllib/models/tf/tf_modelv2.py>`__ to implement the ``__init__()`` and ``forward()`` methods. Forward takes in a dict of tensor inputs (the observation ``obs``, ``prev_action``, and ``prev_reward``, ``is_training``), optional RNN state,
and returns the model output of size ``num_outputs`` and the new state. You can also override extra methods of the model such as ``value_function`` to implement a custom value branch.
Additional supervised / self-supervised losses can be added via the ``custom_loss`` method:
.. autoclass:: ray.rllib.models.tf.tf_modelv2.TFModelV2
@ -76,30 +93,14 @@ Once implemented, the model can then be registered and used in place of a built-
},
})
For a full example of a custom model in code, see the `keras model example <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_keras_model.py>`__. You can also reference the `unit tests <https://github.com/ray-project/ray/blob/master/rllib/tests/test_nested_spaces.py>`__ for Tuple and Dict spaces, which show how to access nested observation fields.
Recurrent Models
~~~~~~~~~~~~~~~~
Instead of using the ``use_lstm: True`` option, it can be preferable use a custom recurrent model. This provides more control over postprocessing of the LSTM output and can also allow the use of multiple LSTM cells to process different portions of the input. For an RNN model it is preferred to subclass ``RecurrentNetwork`` to implement ``__init__()``, ``get_initial_state()``, and ``forward_rnn()``. You can check out the `custom_rnn_model.py <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_rnn_model.py>`__ model as an example to implement your own model:
.. autoclass:: ray.rllib.models.tf.recurrent_net.RecurrentNetwork
.. automethod:: __init__
.. automethod:: forward_rnn
.. automethod:: get_initial_state
Batch Normalization
~~~~~~~~~~~~~~~~~~~
You can use ``tf.layers.batch_normalization(x, training=input_dict["is_training"])`` to add batch norm layers to your custom model: `code example <https://github.com/ray-project/ray/blob/master/rllib/examples/batch_norm_model.py>`__. RLlib will automatically run the update ops for the batch norm layers during optimization (see `tf_policy.py <https://github.com/ray-project/ray/blob/master/rllib/policy/tf_policy.py>`__ and `multi_gpu_impl.py <https://github.com/ray-project/ray/blob/master/rllib/execution/multi_gpu_impl.py>`__ for the exact handling of these updates).
In case RLlib does not properly detect the update ops for your custom model, you can override the ``update_ops()`` method to return the list of ops to run for updates.
See the `keras model example <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_keras_model.py>`__ for a full example of a TF custom model.
You can also reference the `unit tests <https://github.com/ray-project/ray/blob/master/rllib/tests/test_nested_observation_spaces.py>`__ for Tuple and Dict spaces, which show how to access nested observation fields.
PyTorch Models
--------------
Similarly, you can create and register custom PyTorch models for use with PyTorch-based algorithms (e.g., A2C, PG, QMIX). See these examples of `fully connected <https://github.com/ray-project/ray/blob/master/rllib/models/torch/fcnet.py>`__, `convolutional <https://github.com/ray-project/ray/blob/master/rllib/models/torch/visionnet.py>`__, and `recurrent <https://github.com/ray-project/ray/blob/master/rllib/agents/qmix/model.py>`__ torch models.
Similarly, you can create and register custom PyTorch models.
See these examples of `fully connected <https://github.com/ray-project/ray/blob/master/rllib/models/torch/fcnet.py>`__, `convolutional <https://github.com/ray-project/ray/blob/master/rllib/models/torch/visionnet.py>`__, and `recurrent <https://github.com/ray-project/ray/blob/master/rllib/models/torch/recurrent_net.py>`__ torch models.
.. autoclass:: ray.rllib.models.torch.torch_modelv2.TorchModelV2
@ -117,7 +118,7 @@ Once implemented, the model can then be registered and used in place of a built-
import torch.nn as nn
import ray
from ray.rllib.agents import a3c
from ray.rllib.agents import ppo
from ray.rllib.models import ModelCatalog
from ray.rllib.models.torch.torch_modelv2 import TorchModelV2
@ -129,7 +130,7 @@ Once implemented, the model can then be registered and used in place of a built-
ModelCatalog.register_custom_model("my_model", CustomTorchModel)
ray.init()
trainer = a3c.A2CTrainer(env="CartPole-v0", config={
trainer = ppo.PPOTrainer(env="CartPole-v0", config={
"framework": "torch",
"model": {
"custom_model": "my_model",
@ -138,12 +139,37 @@ Once implemented, the model can then be registered and used in place of a built-
},
})
See the `torch model examples <https://github.com/ray-project/ray/blob/master/rllib/examples/models/>`__ for various examples on how to build a custom Torch model (including recurrent ones).
You can also reference the `unit tests <https://github.com/ray-project/ray/blob/master/rllib/tests/test_nested_observation_spaces.py>`__ for Tuple and Dict spaces, which show how to access nested observation fields.
Recurrent Models
~~~~~~~~~~~~~~~~
Instead of using the ``use_lstm: True`` option, it can be preferable to use a custom recurrent model.
This provides more control over postprocessing of the LSTM output and can also allow the use of multiple LSTM cells to process different portions of the input.
For an RNN model it is preferred to subclass ``RecurrentNetwork`` (either the TF or Torch versions) and to implement ``__init__()``, ``get_initial_state()``, and ``forward_rnn()``.
You can check out the `rnn_model.py <https://github.com/ray-project/ray/blob/master/rllib/examples/models/rnn_model.py>`__ models as examples to implement your own (either TF or Torch):
.. autoclass:: ray.rllib.models.tf.recurrent_net.RecurrentNetwork
.. automethod:: __init__
.. automethod:: forward_rnn
.. automethod:: get_initial_state
Batch Normalization
~~~~~~~~~~~~~~~~~~~
You can use ``tf.layers.batch_normalization(x, training=input_dict["is_training"])`` to add batch norm layers to your custom model: `code example <https://github.com/ray-project/ray/blob/master/rllib/examples/batch_norm_model.py>`__. RLlib will automatically run the update ops for the batch norm layers during optimization (see `tf_policy.py <https://github.com/ray-project/ray/blob/master/rllib/policy/tf_policy.py>`__ and `multi_gpu_impl.py <https://github.com/ray-project/ray/blob/master/rllib/execution/multi_gpu_impl.py>`__ for the exact handling of these updates).
In case RLlib does not properly detect the update ops for your custom model, you can override the ``update_ops()`` method to return the list of ops to run for updates.
Custom Preprocessors
--------------------
.. warning::
Custom preprocessors are deprecated, since they sometimes conflict with the built-in preprocessors for handling complex observation spaces. Please use `wrapper classes <https://github.com/openai/gym/tree/master/gym/wrappers>`__ around your environment instead of preprocessors.
Custom preprocessors are deprecated, since they sometimes conflict with the built-in preprocessors for handling complex observation spaces.
Please use `wrapper classes <https://github.com/openai/gym/tree/master/gym/wrappers>`__ around your environment instead of preprocessors.
Custom preprocessors should subclass the RLlib `preprocessor class <https://github.com/ray-project/ray/blob/master/rllib/models/preprocessors.py>`__ and be registered in the model catalog:
@ -172,6 +198,70 @@ Custom preprocessors should subclass the RLlib `preprocessor class <https://gith
},
})
Custom Models on Top of Built-In Ones
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A common use case is to construct a custom model on top of one of RLlib's built-in ones (e.g. a special output head on top of an fcnet, or an action + observation concat operation at the beginning or
after a conv2d stack).
Here is an example of how to construct a dueling layer head (for DQN) on top of an RLlib default model (either a Conv2D or an FCNet):
.. code-block:: python
class DuelingQModel(TFModelV2): # or: TorchModelV2
"""A simple, hard-coded dueling head model."""
def __init__(obs_space, action_space, num_outputs, model_config, name):
# Pass num_outputs=None into super constructor (so that no action/
# logits output layer is built).
# Alternatively, you can pass in num_outputs=[last layer size of
# config[model][fcnet_hiddens]] AND set no_last_linear=True, but
# this seems more tedious as you will have to explain users of this
# class that num_outputs is NOT the size of your Q-output layer.
super(DuelingQModel, self).__init__(
obs_space, action_space, None, model_config, name)
# Now: self.num_outputs contains the last layer's size, which
# we can use to construct the dueling head.
# Construct advantage head ...
self.A = tf.keras.layers.Dense(num_outputs)
# torch:
# self.A = SlimFC(
# in_size=self.num_outputs, out_size=num_outputs)
# ... and value head.
self.V = tf.keras.layers.Dense(1)
# torch:
# self.V = SlimFC(in_size=self.num_outputs, out_size=1)
def get_q_values(self, inputs):
# Calculate q-values following dueling logic:
v = self.V(inputs) # value
a = self.A(inputs) # advantages (per action)
advantages_mean = tf.reduce_mean(a, 1)
advantages_centered = a - tf.expand_dims(advantages_mean, 1)
return v + advantages_centered # q-values
In order to construct an instance of the above model, you can still use the `catalog <https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py>`__
`get_model_v2` convenience method:
.. code-block:: python
dueling_model = ModelCatalog.get_model_v2(
obs_space=[obs_space],
action_space=[action_space],
num_outputs=[num q-value (per action) outs],
model_config=config["model"],
framework="tf", # or: "torch"
model_interface=DuelingQModel,
name="dueling_q_model"
)
Now, with the model object, you can get the underlying intermediate output (before the dueling head)
by calling `dueling_model` directly (`out = dueling_model([input_dict])`), and then passing `out` into
your custom `get_q_values` method: `q_values = dueling_model.get_q_values(out)`.
Custom Action Distributions
---------------------------
@ -233,7 +323,7 @@ For further information about complex observation spaces, see:
* A custom environment and model that uses `repeated struct fields <https://github.com/ray-project/ray/blob/master/rllib/examples/complex_struct_space.py>`__.
* The pydoc of the `Repeated space <https://github.com/ray-project/ray/blob/master/rllib/utils/spaces/repeated.py>`__.
* The pydoc of the batched `repeated values tensor <https://github.com/ray-project/ray/blob/master/rllib/models/repeated_values.py>`__.
* The `unit tests <https://github.com/ray-project/ray/blob/master/rllib/tests/test_nested_spaces.py>`__ for Tuple and Dict spaces.
* The `unit tests <https://github.com/ray-project/ray/blob/master/rllib/tests/test_nested_observation_spaces.py>`__ for Tuple and Dict spaces.
Variable-length / Parametric Action Spaces
------------------------------------------

View file

@ -389,13 +389,13 @@ py_test(
srcs = ["agents/a3c/tests/test_a3c.py"]
)
# APEXTrainer (DQN)
py_test(
name = "test_apex_dqn",
tags = ["agents_dir"],
size = "large",
srcs = ["agents/dqn/tests/test_apex_dqn.py"]
)
## APEXTrainer (DQN)
#py_test(
# name = "test_apex_dqn",
# tags = ["agents_dir"],
# size = "large",
# srcs = ["agents/dqn/tests/test_apex_dqn.py"]
#)
# APEXDDPGTrainer
py_test(
@ -479,7 +479,7 @@ py_test(
py_test(
name = "test_maml",
tags = ["agents_dir"],
size = "small",
size = "medium",
srcs = ["agents/maml/tests/test_maml.py"]
)
@ -1231,12 +1231,12 @@ py_test(
srcs = ["tests/test_filters.py"]
)
py_test(
name = "tests/test_ignore_worker_failure",
tags = ["tests_dir", "tests_dir_I"],
size = "large",
srcs = ["tests/test_ignore_worker_failure.py"]
)
#py_test(
# name = "tests/test_ignore_worker_failure",
# tags = ["tests_dir", "tests_dir_I"],
# size = "large",
# srcs = ["tests/test_ignore_worker_failure.py"]
#)
py_test(
name = "tests/test_io",
@ -1342,14 +1342,14 @@ py_test(
args = ["TestSupportedMultiAgentPG"]
)
py_test(
name = "tests/test_supported_multi_agent_off_policy",
main = "tests/test_supported_multi_agent.py",
tags = ["tests_dir", "tests_dir_S"],
size = "medium",
srcs = ["tests/test_supported_multi_agent.py"],
args = ["TestSupportedMultiAgentOffPolicy"]
)
#py_test(
# name = "tests/test_supported_multi_agent_off_policy",
# main = "tests/test_supported_multi_agent.py",
# tags = ["tests_dir", "tests_dir_S"],
# size = "medium",
# srcs = ["tests/test_supported_multi_agent.py"],
# args = ["TestSupportedMultiAgentOffPolicy"]
#)
py_test(
name = "tests/test_supported_spaces_pg",

View file

@ -1,69 +0,0 @@
from ray.rllib.models.tf.tf_modelv2 import TFModelV2
from ray.rllib.utils.framework import try_import_tf
tf1, tf, tfv = try_import_tf()
class SimpleQModel(TFModelV2):
"""Extension of standard TFModel to provide Q values.
Data flow:
obs -> forward() -> model_out
model_out -> get_q_values() -> Q(s, a)
Note that this class by itself is not a valid model unless you
implement forward() in a subclass."""
def __init__(self,
obs_space,
action_space,
num_outputs,
model_config,
name,
q_hiddens=(256, )):
"""Initialize variables of this model.
Extra model kwargs:
q_hiddens (list): defines size of hidden layers for the q head.
These will be used to postprocess the model output for the
purposes of computing Q values.
Note that the core layers for forward() are not defined here, this
only defines the layers for the Q head. Those layers for forward()
should be defined in subclasses of SimpleQModel.
"""
super(SimpleQModel, self).__init__(obs_space, action_space,
num_outputs, model_config, name)
# setup the Q head output (i.e., model for get_q_values)
self.model_out = tf.keras.layers.Input(
shape=(num_outputs, ), name="model_out")
if q_hiddens:
last_layer = self.model_out
for i, n in enumerate(q_hiddens):
last_layer = tf.keras.layers.Dense(
n, name="q_hidden_{}".format(i),
activation=tf.nn.relu)(last_layer)
q_out = tf.keras.layers.Dense(
action_space.n, activation=None, name="q_out")(last_layer)
else:
q_out = self.model_out
self.q_value_head = tf.keras.Model(self.model_out, q_out)
self.register_variables(self.q_value_head.variables)
def get_q_values(self, model_out):
"""Returns Q(s, a) given a feature tensor for the state.
Override this in your custom model to customize the Q output head.
Arguments:
model_out (Tensor): embedding from the model layers
Returns:
action scores Q(s, a) for each action, shape [None, action_space.n]
"""
return self.q_value_head(model_out)