From 1dbe7fc26a06c614154e9946d5bbd05d374eea4a Mon Sep 17 00:00:00 2001
From: gjoliver
Date: Tue, 17 Aug 2021 02:46:10 -0700
Subject: [PATCH] [RLlib] Config dict should use true instead of True in
 docs/examples. (#17889)

---
 doc/source/rllib-concepts.rst | 4 ++--
 doc/source/rllib-training.rst | 4 ++--
 doc/source/rllib.rst          | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/source/rllib-concepts.rst b/doc/source/rllib-concepts.rst
index f629d4ad1..c35190a25 100644
--- a/doc/source/rllib-concepts.rst
+++ b/doc/source/rllib-concepts.rst
@@ -423,8 +423,8 @@ Building Policies in TensorFlow Eager
 
 Policies built with ``build_tf_policy`` (most of the reference algorithms are)
 can be run in eager mode by setting
-the ``"framework": "tf2"`` / ``"eager_tracing": True`` config options or
-using ``rllib train '{"framework": "tf2", "eager_tracing": True}'``.
+the ``"framework": "tf2"`` / ``"eager_tracing": true`` config options or
+using ``rllib train '{"framework": "tf2", "eager_tracing": true}'``.
 This will tell RLlib to execute the model forward pass, action distribution,
 loss, and stats functions in eager mode.
 
diff --git a/doc/source/rllib-training.rst b/doc/source/rllib-training.rst
index 060e5efab..efdc2b7d1 100644
--- a/doc/source/rllib-training.rst
+++ b/doc/source/rllib-training.rst
@@ -19,7 +19,7 @@ You can train a simple DQN trainer with the following command:
 
 .. code-block:: bash
 
-    rllib train --run DQN --env CartPole-v0  # --config '{"framework": "tf2", "eager_tracing": True}' for eager execution
+    rllib train --run DQN --env CartPole-v0  # --config '{"framework": "tf2", "eager_tracing": true}' for eager execution
 
 By default, the results will be logged to a subdirectory of ``~/ray_results``.
 This subdirectory will contain a file ``params.json`` which contains the
@@ -947,7 +947,7 @@ Eager Mode
 
 Policies built with ``build_tf_policy`` (most of the reference algorithms are)
 can be run in eager mode by setting the
-``"framework": "[tf2|tfe]"`` / ``"eager_tracing": True`` config options or using
+``"framework": "[tf2|tfe]"`` / ``"eager_tracing": true`` config options or using
 ``rllib train --config '{"framework": "tf2"}' [--trace]``.
 This will tell RLlib to execute the model forward pass, action distribution,
 loss, and stats functions in eager mode.
diff --git a/doc/source/rllib.rst b/doc/source/rllib.rst
index 38298ad79..ea2ba6f62 100644
--- a/doc/source/rllib.rst
+++ b/doc/source/rllib.rst
@@ -32,7 +32,7 @@ Then, you can try out training in the following equivalent ways:
 .. code-block:: bash
 
     rllib train --run=PPO --env=CartPole-v0  # -v [-vv] for verbose,
-                                             # --config='{"framework": "tf2", "eager_tracing": True}' for eager,
+                                             # --config='{"framework": "tf2", "eager_tracing": true}' for eager,
                                              # --torch to use PyTorch OR --config='{"framework": "torch"}'
 
 .. code-block:: python
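
Note: the lowercase ``true`` is required only inside the JSON strings passed to the CLI via ``--config``, since the CLI parses that argument as JSON and JSON spells its booleans lowercase; in the Python API the same keys take plain Python booleans. A minimal sketch of the Python-side equivalent, assuming a Ray release of this era where ``PPOTrainer`` is importable from ``ray.rllib.agents.ppo``:

.. code-block:: python

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init()

    # Python-side config: booleans are written True/False here.
    # The CLI equivalent is a JSON string, so the same dict must be
    # spelled --config '{"framework": "tf2", "eager_tracing": true}'.
    trainer = PPOTrainer(
        env="CartPole-v0",
        config={
            "framework": "tf2",      # run TF2 in eager mode
            "eager_tracing": True,   # trace eager ops with tf.function for speed
        },
    )

    # A few training iterations, printing the mean episode reward.
    for _ in range(3):
        result = trainer.train()
        print(result["episode_reward_mean"])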