[RLlib] Issue 15724: Breaking example script in docs due to outdated eager config flag (use framework='tf2|tfe' instead). (#15736)

Sven Mika 2021-05-18 11:34:46 +02:00 committed by GitHub
parent 4c8813f2e8
commit 4e9555cad3
4 changed files with 7 additions and 7 deletions


@@ -423,8 +423,8 @@ Building Policies in TensorFlow Eager
Policies built with ``build_tf_policy`` (most of the reference algorithms are)
can be run in eager mode by setting
the ``"eager": True`` / ``"eager_tracing": True`` config options or
using ``rllib train --eager [--trace]``.
the ``"framework": "tf2"`` / ``"eager_tracing": True`` config options or
using ``rllib train '{"framework": "tf2", "eager_tracing": True}'``.
This will tell RLlib to execute the model forward pass, action distribution,
loss, and stats functions in eager mode.
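
The same setting can also be passed programmatically when constructing a trainer. A minimal sketch, assuming a Ray version from around this change where ``PPOTrainer`` is importable from ``ray.rllib.agents.ppo``:

.. code-block:: python

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init()

    # Run the TF policy eagerly; eager_tracing wraps the eager functions in
    # tf.function for speed while keeping eager semantics for debugging.
    trainer = PPOTrainer(
        env="CartPole-v0",
        config={
            "framework": "tf2",    # or "tfe" on a TF 1.x-style install
            "eager_tracing": True,
        },
    )
    print(trainer.train()["episode_reward_mean"])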


@@ -222,7 +222,7 @@ references in the cluster.
TensorFlow 2.0
~~~~~~~~~~~~~~
-RLlib currently runs in ``tf.compat.v1`` mode. This means eager execution is disabled by default, and RLlib imports TF with ``import tensorflow.compat.v1 as tf; tf.disable_v2_behavior()``. Eager execution can be enabled manually by calling ``tf.enable_eager_execution()`` or setting the ``"eager": True`` trainer config.
+RLlib currently runs in ``tf.compat.v1`` mode. This means eager execution is disabled by default, and RLlib imports TF with ``import tensorflow.compat.v1 as tf; tf.disable_v2_behavior()``. Eager execution can be enabled manually by calling ``tf.enable_eager_execution()`` or setting the ``"framework": "tf2"`` trainer config.
.. |tensorflow| image:: tensorflow.png
:class: inline-figure
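
For reference, the ``tf.compat.v1`` behavior described above can be reproduced in isolation; this is a minimal sketch of the mechanism, not RLlib's actual import code:

.. code-block:: python

    import tensorflow.compat.v1 as tf

    tf.disable_v2_behavior()       # v1-compatible mode: eager off by default

    # Eager execution can still be turned back on explicitly, provided no
    # graphs or sessions have been created yet in this process.
    tf.enable_eager_execution()
    print(tf.executing_eagerly())  # -> True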


@@ -14,7 +14,7 @@ You can train a simple DQN trainer with the following command:
.. code-block:: bash
-rllib train --run DQN --env CartPole-v0 # --eager [--trace] for eager execution
+rllib train --run DQN --env CartPole-v0 # --config '{"framework": "tf2", "eager_tracing": True}' for eager execution
By default, the results will be logged to a subdirectory of ``~/ray_results``.
This subdirectory will contain a file ``params.json`` which contains the
@@ -906,7 +906,7 @@ Eager Mode
Policies built with ``build_tf_policy`` (most of the reference algorithms are)
can be run in eager mode by setting the
``"framework": "[tf2|tfe]"`` / ``"eager_tracing": True`` config options or using
-``rllib train --eager [--trace]``.
+``rllib train --config '{"framework": "tf2"}' [--trace]``.
This will tell RLlib to execute the model forward pass, action distribution,
loss, and stats functions in eager mode.
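
A rough programmatic equivalent of the ``rllib train`` invocation above, sketched here with ``tune.run`` (the stopping criterion is illustrative only):

.. code-block:: python

    from ray import tune

    tune.run(
        "PPO",
        config={
            "env": "CartPole-v0",
            "framework": "tf2",
            "eager_tracing": True,
        },
        stop={"training_iteration": 2},
    )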


@@ -28,8 +28,8 @@ Then, you can try out training in the following equivalent ways:
.. code-block:: bash
rllib train --run=PPO --env=CartPole-v0 # -v [-vv] for verbose,
-# --eager [--trace] for eager execution,
-# --torch to use PyTorch
+# --config='{"framework": "tf2", "eager_tracing": True}' for eager,
+# --torch to use PyTorch OR --config='{"framework": "torch"}'
.. code-block:: python