Mirror of https://github.com/vale981/ray, synced 2025-03-06 10:31:39 -05:00
[RLlib; Docs] Updated RLlib training example page (#19932)
parent e6ae08f416
commit f359b21541
1 changed file with 2 additions and 1 deletion
@@ -15,10 +15,11 @@ be trained, checkpointed, or an action computed. In multi-agent training, the tr

 .. image:: rllib-api.svg

-You can train a simple DQN trainer with the following command:
+You can train a simple DQN trainer with the following commands:

 .. code-block:: bash

+    pip install "ray[rllib]" tensorflow
     rllib train --run DQN --env CartPole-v0  # --config '{"framework": "tf2", "eager_tracing": true}' for eager execution

 By default, the results will be logged to a subdirectory of ``~/ray_results``.
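For reference, a minimal sketch of how the updated snippet is meant to be run end to end: the ``pip install`` and ``rllib train`` lines are taken from the diff above, while the TensorBoard line is an assumed extra step for inspecting the ``~/ray_results`` output and is not part of this commit.

.. code-block:: bash

    # Install RLlib together with a deep-learning backend (TensorFlow here).
    pip install "ray[rllib]" tensorflow

    # Train DQN on CartPole-v0; results and checkpoints are written under ~/ray_results.
    rllib train --run DQN --env CartPole-v0

    # Assumed extra step (not in this commit): view training curves with TensorBoard.
    tensorboard --logdir=~/ray_results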