[docs] Move all /latest links to /master (#11897)

* use master link

* rename

* revert non-ray

* more

* more

Eric Liang, 2020-11-10 10:53:28 -08:00 (committed by GitHub)
parent 543f7809a6
commit 9b8218aabd
42 changed files with 69 additions and 73 deletions
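The change itself is mechanical: every `docs.ray.io/en/latest` URL in Ray-owned files becomes `docs.ray.io/en/master`. Below is a minimal sketch of how such a migration might be scripted (hypothetical; the PR does not say what script, if any, was used). Scoping the pattern to the Ray docs domain matters, because third-party ReadTheDocs projects also publish under `/en/latest/`; the "revert non-ray" commit above exists for that reason, and two third-party links (Spinning Up and Flow, later in this diff) were still rewritten.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: rewrite docs.ray.io /en/latest links to /en/master.

Scoped to Ray's own docs domain so that third-party ReadTheDocs links,
which also live under /en/latest/, are left untouched.
"""
import pathlib
import re

# Only match /en/latest on the Ray docs domain, not e.g. readthedocs.io.
PATTERN = re.compile(r"(docs\.ray\.io/en/)latest")

# File types touched by this PR: docs, code, configs, dashboard sources.
SUFFIXES = {".md", ".rst", ".py", ".tsx", ".yaml", ".html", ".json"}

def migrate(root: str = ".") -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SUFFIXES:
            continue
        text = path.read_text(encoding="utf-8")
        new_text = PATTERN.sub(r"\g<1>master", text)
        if new_text != text:
            path.write_text(new_text, encoding="utf-8")
            print(f"rewrote {path}")

if __name__ == "__main__":
    migrate()
```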


@@ -19,4 +19,4 @@ Please provide a script that can be run to reproduce the issue. The script shoul
 If we cannot run your script, we cannot fix your issue.
 - [ ] I have verified my script runs in a clean environment and reproduces the issue.
-- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
+- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/master/installation.html).


@@ -1,7 +1,7 @@
 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
-.. image:: https://readthedocs.org/projects/ray/badge/?version=latest
-    :target: http://docs.ray.io/en/latest/?badge=latest
+.. image:: https://readthedocs.org/projects/ray/badge/?version=master
+    :target: http://docs.ray.io/en/master/?badge=master
 .. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
     :target: https://forms.gle/9TSdDYUgxYs8SA9e8
@@ -15,7 +15,7 @@ Ray is packaged with the following libraries for accelerating machine learning w
 - `Tune`_: Scalable Hyperparameter Tuning
 - `RLlib`_: Scalable Reinforcement Learning
-- `RaySGD <https://docs.ray.io/en/latest/raysgd/raysgd.html>`__: Distributed Training Wrappers
+- `RaySGD <https://docs.ray.io/en/master/raysgd/raysgd.html>`__: Distributed Training Wrappers
 - `Ray Serve`_: Scalable and Programmable Serving
 There are also many `community integrations <https://docs.ray.io/en/master/ray-libraries.html>`_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here <https://docs.ray.io/en/master/ray-libraries.html>`_.
@@ -78,7 +78,7 @@ Ray programs can run on a single machine, and can also seamlessly scale to large
 ``ray submit [CLUSTER.YAML] example.py --start``
-Read more about `launching clusters <https://docs.ray.io/en/latest/cluster/index.html>`_.
+Read more about `launching clusters <https://docs.ray.io/en/master/cluster/index.html>`_.
 Tune Quick Start
 ----------------
@@ -140,10 +140,10 @@ If TensorBoard is installed, automatically visualize all trial results:
 tensorboard --logdir ~/ray_results
-.. _`Tune`: https://docs.ray.io/en/latest/tune.html
-.. _`Population Based Training (PBT)`: https://docs.ray.io/en/latest/tune-schedulers.html#population-based-training-pbt
-.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/latest/tune-schedulers.html#median-stopping-rule
-.. _`HyperBand/ASHA`: https://docs.ray.io/en/latest/tune-schedulers.html#asynchronous-hyperband
+.. _`Tune`: https://docs.ray.io/en/master/tune.html
+.. _`Population Based Training (PBT)`: https://docs.ray.io/en/master/tune-schedulers.html#population-based-training-pbt
+.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/master/tune-schedulers.html#median-stopping-rule
+.. _`HyperBand/ASHA`: https://docs.ray.io/en/master/tune-schedulers.html#asynchronous-hyperband
 RLlib Quick Start
 -----------------
@@ -189,7 +189,7 @@ RLlib Quick Start
         "num_workers": 4,
         "env_config": {"corridor_length": 5}})
-.. _`RLlib`: https://docs.ray.io/en/latest/rllib.html
+.. _`RLlib`: https://docs.ray.io/en/master/rllib.html
 Ray Serve Quick Start
@@ -264,7 +264,7 @@ This example runs serves a scikit-learn gradient boosting classifier.
 # }
-.. _`Ray Serve`: https://docs.ray.io/en/latest/serve/index.html
+.. _`Ray Serve`: https://docs.ray.io/en/master/serve/index.html
 More Information
 ----------------
@@ -282,7 +282,7 @@ More Information
 - `Ray HotOS paper`_
 - `Blog (old)`_
-.. _`Documentation`: http://docs.ray.io/en/latest/index.html
+.. _`Documentation`: http://docs.ray.io/en/master/index.html
 .. _`Tutorial`: https://github.com/ray-project/tutorial
 .. _`Blog (old)`: https://ray-project.github.io/
 .. _`Blog`: https://medium.com/distributed-computing-with-ray


@@ -13,7 +13,7 @@ import { sum } from "../../../common/util";
 import ActorStateRepr from "./ActorStateRepr";
 const memoryDebuggingDocLink =
-  "https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory";
+  "https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory";
 type ActorDatum = {
   label: string;


@@ -143,7 +143,7 @@ class Tune extends React.Component<
           You can use this tab to monitor Tune jobs, their statuses,
           hyperparameters, and more. For more information, read the
           documentation{" "}
-          <a href="https://docs.ray.io/en/latest/ray-dashboard.html#tune">
+          <a href="https://docs.ray.io/en/master/ray-dashboard.html#tune">
             here
           </a>
           .


@@ -4,7 +4,7 @@ name: ray-example-cython
 description: "Example of how to use Cython with ray"
 tags: ["ray-example", "cython"]
-documentation: https://docs.ray.io/en/latest/advanced.html#cython-code-in-ray
+documentation: https://docs.ray.io/en/master/advanced.html#cython-code-in-ray
 cluster:
   config: ray-project/cluster.yaml


@@ -4,7 +4,7 @@ name: ray-example-lbfgs
 description: "Parallelizing the L-BFGS algorithm in ray"
 tags: ["ray-example", "optimization", "lbfgs"]
-documentation: https://docs.ray.io/en/latest/auto_examples/plot_lbfgs.html
+documentation: https://docs.ray.io/en/master/auto_examples/plot_lbfgs.html
 cluster:
   config: ray-project/cluster.yaml


@@ -4,7 +4,7 @@ name: ray-example-newsreader
 description: "A simple news reader example that uses ray actors to serve requests"
 tags: ["ray-example", "flask", "rss", "newsreader"]
-documentation: https://docs.ray.io/en/latest/auto_examples/plot_newsreader.html
+documentation: https://docs.ray.io/en/master/auto_examples/plot_newsreader.html
 cluster:
   config: ray-project/cluster.yaml


@@ -90,7 +90,7 @@ Machine Learning Examples
 Reinforcement Learning Examples
 -------------------------------
-These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <http://docs.ray.io/en/latest/rllib.html>`__.
+These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <http://docs.ray.io/en/master/rllib.html>`__.
 .. raw:: html


@@ -13,7 +13,7 @@ View the `code for this example`_.
 .. note::
-    For an overview of Ray's reinforcement learning library, see `RLlib <http://docs.ray.io/en/latest/rllib.html>`__.
+    For an overview of Ray's reinforcement learning library, see `RLlib <http://docs.ray.io/en/master/rllib.html>`__.
 To run the application, first install **ray** and then some dependencies:


@@ -16,7 +16,7 @@ their results to be ready.
 hyperparameter tuning, use `Tune`_, a scalable hyperparameter
 tuning library built using Ray's Actor API.
-.. _`Tune`: https://docs.ray.io/en/latest/tune.html
+.. _`Tune`: https://docs.ray.io/en/master/tune.html
 Setup: Dependencies
 -------------------
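As an aside, the `Tune`_ target being relinked here is the hyperparameter tuning library the file recommends. A minimal sketch of its API, with hypothetical values (`tune.run`, `tune.grid_search`, and `tune.report` as documented for Ray releases of this era):

```python
import ray
from ray import tune

def trainable(config):
    # Score each hyperparameter combination (a dummy objective here).
    score = -(config["lr"] - 0.05) ** 2
    tune.report(mean_score=score)

ray.init()
analysis = tune.run(
    trainable,
    config={"lr": tune.grid_search([0.001, 0.01, 0.05, 0.1])},
)
print(analysis.get_best_config(metric="mean_score", mode="max"))
```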


@@ -87,7 +87,7 @@ the top 10 words in these articles together with their word count:
 Note that this examples uses `distributed actor handles`_, which are still
 considered experimental.
-.. _`distributed actor handles`: http://docs.ray.io/en/latest/actors.html
+.. _`distributed actor handles`: http://docs.ray.io/en/master/actors.html
 There is a ``Mapper`` actor, which has a method ``get_range`` used to retrieve
 word counts for words in a certain range:


@@ -8,7 +8,7 @@ date: 2017-05-20 14:00:00
 This post announces Ray, a framework for efficiently running Python code on
 clusters and large multi-core machines. The project is open source.
 You can check out [the code](https://github.com/ray-project/ray) and
-[the documentation](http://docs.ray.io/en/latest/?badge=latest).
+[the documentation](http://docs.ray.io/en/master/?badge=latest).
 Many AI algorithms are computationally intensive and exhibit complex
 communication patterns. As a result, many researchers spend most of their


@@ -134,12 +134,12 @@ state of the actor. We are working on improving the speed of recovery by
 enabling actor state to be restored from checkpoints. See [an overview of fault
 tolerance in Ray][4].
-[1]: http://docs.ray.io/en/latest/plasma-object-store.html
-[2]: http://docs.ray.io/en/latest/webui.html
-[3]: http://docs.ray.io/en/latest/rllib.html
-[4]: http://docs.ray.io/en/latest/fault-tolerance.html
+[1]: http://docs.ray.io/en/master/plasma-object-store.html
+[2]: http://docs.ray.io/en/master/webui.html
+[3]: http://docs.ray.io/en/master/rllib.html
+[4]: http://docs.ray.io/en/master/fault-tolerance.html
 [5]: https://github.com/apache/arrow
-[6]: http://docs.ray.io/en/latest/example-a3c.html
+[6]: http://docs.ray.io/en/master/example-a3c.html
 [7]: https://github.com/openai/baselines
 [8]: https://github.com/ray-project/ray/blob/b020e6bf1fb00d0745371d8674146d4a5b75d9f0/python/ray/rllib/test/tuned_examples.sh#L11
 [9]: https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization


@@ -271,7 +271,7 @@ for i in range(len(test_objects)):
     plot(*benchmark_object(test_objects[i]), titles[i], i)
 ```
-[1]: http://docs.ray.io/en/latest/index.html
+[1]: http://docs.ray.io/en/master/index.html
 [2]: https://arrow.apache.org/
 [3]: https://en.wikipedia.org/wiki/Serialization
 [4]: https://github.com/cloudpipe/cloudpickle/


@@ -134,14 +134,14 @@ This feature is still considered experimental, but we've already found
 distributed actor handles useful for implementing [**parameter server**][10] and
 [**streaming MapReduce**][11] applications.
-[1]: http://docs.ray.io/en/latest/actors.html#passing-around-actor-handles-experimental
-[2]: http://docs.ray.io/en/latest/tune.html
-[3]: http://docs.ray.io/en/latest/rllib.html
+[1]: http://docs.ray.io/en/master/actors.html#passing-around-actor-handles-experimental
+[2]: http://docs.ray.io/en/master/tune.html
+[3]: http://docs.ray.io/en/master/rllib.html
 [4]: https://research.google.com/pubs/pub46180.html
 [5]: https://arxiv.org/abs/1603.06560
 [6]: https://www.tensorflow.org/get_started/summaries_and_tensorboard
 [7]: https://media.readthedocs.org/pdf/rllab/latest/rllab.pdf
 [8]: https://en.wikipedia.org/wiki/Parallel_coordinates
 [9]: https://github.com/ray-project/ray/tree/master/python/ray/tune
-[10]: http://docs.ray.io/en/latest/example-parameter-server.html
-[11]: http://docs.ray.io/en/latest/example-streaming.html
+[10]: http://docs.ray.io/en/master/example-parameter-server.html
+[11]: http://docs.ray.io/en/master/example-streaming.html


@@ -78,10 +78,10 @@ Training][9].
 [1]: https://github.com/ray-project/ray
 [2]: https://rise.cs.berkeley.edu/blog/pandas-on-ray/
-[3]: http://docs.ray.io/en/latest/rllib.html
-[4]: http://docs.ray.io/en/latest/tune.html
+[3]: http://docs.ray.io/en/master/rllib.html
+[4]: http://docs.ray.io/en/master/tune.html
 [5]: https://rise.cs.berkeley.edu/blog/distributed-policy-optimizers-for-scalable-and-reproducible-deep-rl/
-[6]: http://docs.ray.io/en/latest/resources.html
+[6]: http://docs.ray.io/en/master/resources.html
 [7]: https://pandas.pydata.org/
 [8]: https://arxiv.org/abs/1803.00933
-[9]: http://docs.ray.io/en/latest/pbt.html
+[9]: http://docs.ray.io/en/master/pbt.html


@@ -76,8 +76,8 @@ Ray now supports Java thanks to contributions from [Ant Financial][4]:
 [1]: https://github.com/ray-project/ray
-[2]: http://docs.ray.io/en/latest/rllib.html
-[3]: http://docs.ray.io/en/latest/tune.html
+[2]: http://docs.ray.io/en/master/rllib.html
+[3]: http://docs.ray.io/en/master/tune.html
 [4]: https://www.antfin.com/
 [5]: https://github.com/modin-project/modin
-[6]: http://docs.ray.io/en/latest/autoscaling.html
+[6]: http://docs.ray.io/en/master/autoscaling.html


@@ -321,12 +321,12 @@ Questions should be directed to *ray-dev@googlegroups.com*.
 [1]: https://github.com/ray-project/ray
-[2]: http://docs.ray.io/en/latest/resources.html
+[2]: http://docs.ray.io/en/master/resources.html
 [3]: http://www.sysml.cc/doc/206.pdf
-[4]: http://docs.ray.io/en/latest/rllib.html
-[5]: http://docs.ray.io/en/latest/tune.html
-[6]: http://docs.ray.io/en/latest
-[7]: http://docs.ray.io/en/latest/api.html
+[4]: http://docs.ray.io/en/master/rllib.html
+[5]: http://docs.ray.io/en/master/tune.html
+[6]: http://docs.ray.io/en/master
+[7]: http://docs.ray.io/en/master/api.html
 [8]: https://github.com/modin-project/modin
 [9]: https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html
 [10]: https://ray-project.github.io/2017/08/08/plasma-in-memory-object-store.html


@@ -25,7 +25,7 @@ layout: default
   </p>
   <ul>
     <li>Ray Project <a href="https://ray.io">web site</a></li>
-    <li><a href="https://docs.ray.io/en/latest/">Documentation</a></li>
+    <li><a href="https://docs.ray.io/en/master/">Documentation</a></li>
     <li><a href="https://github.com/ray-project/">GitHub project</a></li>
     <li><a href="https://github.com/ray-project/tutorial">Tutorials</a></li>
   </ul>


@@ -33,6 +33,6 @@ layout: default
   </ul>
   <p>
-    To get started, visit the Ray Project <a href="https://ray.io">web site</a>, <a href="https://docs.ray.io/en/latest/">documentation</a>, <a href="https://github.com/ray-project/">GitHub project</a>, or <a href="https://github.com/ray-project/tutorial">Tutorials</a>.
+    To get started, visit the Ray Project <a href="https://ray.io">web site</a>, <a href="https://docs.ray.io/en/master/">documentation</a>, <a href="https://github.com/ray-project/">GitHub project</a>, or <a href="https://github.com/ray-project/tutorial">Tutorials</a>.
   </p>
 </div>


@@ -302,4 +302,4 @@ Now that you have a working understanding of the cluster launcher, check out:
 Questions or Issues?
 --------------------
-.. include:: /_help.rst
+.. include:: /_help.rst


@@ -76,7 +76,7 @@ on each machine. To install Ray, follow the `installation instructions`_.
 To configure the Ray cluster to run Java code, you need to add the ``--code-search-path`` option. See :ref:`code_search_path` for more details.
-.. _`installation instructions`: http://docs.ray.io/en/latest/installation.html
+.. _`installation instructions`: http://docs.ray.io/en/master/installation.html
 Starting Ray on each machine
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -119,10 +119,6 @@ extensions = [
 versionwarning_admonition_type = "tip"
 versionwarning_messages = {
-    "master": (
-        "This document is for the master branch. "
-        'Visit the <a href="/en/latest/">latest pip release documentation here</a>.'
-    ),
     "latest": (
         "This document is for the latest pip release. "
         'Visit the <a href="/en/master/">master branch documentation here</a>.'
     ),
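The deleted lines above are part of the sphinx-version-warning banner setup: with the docs now published from master, the master-branch warning is obsolete and is removed. For context, a minimal sketch of how these settings fit together in a Sphinx `conf.py` (assuming the `versionwarning` extension; this is not the full Ray configuration):

```python
# conf.py -- minimal sketch, assuming the sphinx-version-warning package.
extensions = ["versionwarning.extension"]

# Render the banner as a "tip" admonition.
versionwarning_admonition_type = "tip"

# Keys are ReadTheDocs version names; values are the banner HTML shown on
# that build. After this commit only the "latest" build carries a banner,
# pointing readers at the master-branch docs.
versionwarning_messages = {
    "latest": (
        "This document is for the latest pip release. "
        'Visit the <a href="/en/master/">master branch documentation here</a>.'
    ),
}
```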


@@ -97,4 +97,4 @@ This will print any ``RAY_LOG(DEBUG)`` lines in the source code to the
 .. _`issues`: https://github.com/ray-project/ray/issues
-.. _`Temporary Files`: http://docs.ray.io/en/latest/tempfile.html
+.. _`Temporary Files`: http://docs.ray.io/en/master/tempfile.html


@@ -137,7 +137,7 @@ You can view information for Ray objects in the memory tab. It is useful to debu
 One common cause of these memory errors is that there are objects which never go out of scope. In order to find these, you can go to the Memory View, then select to "Group By Stack Trace." This groups memory entries by their stack traces up to three frames deep. If you see a group which is growing without bound, you might want to examine that line of code to see if you intend to keep that reference around.
-Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.
+Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.
 Inspect Memory Usage
 ~~~~~~~~~~~~~~~~~~~~
@@ -283,7 +283,7 @@ Memory
 **Object Size** Object Size of a Ray object in bytes.
-**Reference Type**: Reference types of Ray objects. Checkout the `ray memory command <https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory>`_ to learn each reference type.
+**Reference Type**: Reference types of Ray objects. Checkout the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_ to learn each reference type.
 **Call Site**: Call site where this Ray object is referenced, up to three stack frames deep.
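The memory view described in these hunks mirrors the `ray memory` CLI. A minimal sketch of the kind of leak it surfaces (hypothetical script; `ray memory` is run separately in a shell):

```python
import ray

ray.init()

@ray.remote
def make_blob() -> bytes:
    return b"x" * 10_000_000  # ~10 MB object held in the object store

# These references never go out of scope, so the objects stay pinned.
refs = [make_blob.remote() for _ in range(10)]

# Running `ray memory` in another terminal now lists ten entries whose
# call site points at the list comprehension above, grouped by stack
# trace up to three frames deep.
input("Inspect with `ray memory`, then press Enter to exit.")
```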


@@ -262,7 +262,7 @@ Deep Deterministic Policy Gradients (DDPG, TD3)
 -----------------------------------------------
 |pytorch| |tensorflow|
 `[paper] <https://arxiv.org/abs/1509.02971>`__ `[implementation] <https://github.com/ray-project/ray/blob/master/rllib/agents/ddpg/ddpg.py>`__
-DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 <https://spinningup.openai.com/en/latest/algorithms/td3.html>`__ are available as ``TD3``.
+DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 <https://spinningup.openai.com/en/master/algorithms/td3.html>`__ are available as ``TD3``.
 .. figure:: dqn-arch.svg


@@ -4,7 +4,7 @@ Contributing to RLlib
 Development Install
 -------------------
-You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py <https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py>`__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However if you have installed ray from source using [these instructions](https://docs.ray.io/en/latest/installation.html) then do not this as these steps should have already created this symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master <https://github.com/ray-project/ray>`__ and have the latest `wheel <https://docs.ray.io/en/latest/installation.html>`__ installed.)
+You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py <https://github.com/ray-project/ray/blob/master/python/ray/setup-dev.py>`__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However if you have installed ray from source using [these instructions](https://docs.ray.io/en/master/installation.html) then do not this as these steps should have already created this symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master <https://github.com/ray-project/ray>`__ and have the latest `wheel <https://docs.ray.io/en/master/installation.html>`__ installed.)
 API Stability
 -------------


@@ -123,5 +123,5 @@ Community Examples
   Example of using the multi-agent API to model several `social dilemma games <https://arxiv.org/abs/1702.03037>`__.
 - `StarCraft2 <https://github.com/oxwhirl/smac>`__:
   Example of training in StarCraft2 maps with RLlib / multi-agent.
-- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/latest/flow_setup.html>`__:
+- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/master/flow_setup.html>`__:
   Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.


@@ -105,7 +105,7 @@ on what Ray functionalities we use, let us see what cProfile's output might look
 like if our example involved Actors (for an introduction to Ray actors, see our
 `Actor documentation here`_).
-.. _`Actor documentation here`: http://docs.ray.io/en/latest/actors.html
+.. _`Actor documentation here`: http://docs.ray.io/en/master/actors.html
 Now, instead of looping over five calls to a remote function like in ``ex1``,
 let's create a new example and loop over five calls to a remote function


@@ -1,6 +1,6 @@
 ## About
 Default docker images for [Ray](https://github.com/ray-project/ray)! This includes
-everything needed to get started with running Ray! They work for both local development and *are ideal* for use with the [Ray Cluster Launcher](https://docs.ray.io/en/latest/cluster/launcher.html). [Find the Dockerfile here.](https://github.com/ray-project/ray/blob/master/docker/ray/Dockerfile)
+everything needed to get started with running Ray! They work for both local development and *are ideal* for use with the [Ray Cluster Launcher](https://docs.ray.io/en/master/cluster/launcher.html). [Find the Dockerfile here.](https://github.com/ray-project/ray/blob/master/docker/ray/Dockerfile)


@@ -7,7 +7,7 @@
     "project": "ray",
     // The project's homepage
-    "project_url": "http://docs.ray.io/en/latest/index.html",
+    "project_url": "http://docs.ray.io/en/master/index.html",
     // The URL or local path of the source code repository for the
     // project being benchmarked


@@ -14,7 +14,7 @@ import { sum } from "../../../common/util";
 import ActorDetailsPane from "./ActorDetailsPane";
 const memoryDebuggingDocLink =
-  "https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory";
+  "https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory";
 const useActorStyles = makeStyles((theme: Theme) =>
   createStyles({


@@ -143,7 +143,7 @@ class Tune extends React.Component<
           You can use this tab to monitor Tune jobs, their statuses,
           hyperparameters, and more. For more information, read the
           documentation{" "}
-          <a href="https://docs.ray.io/en/latest/ray-dashboard.html#tune">
+          <a href="https://docs.ray.io/en/master/ray-dashboard.html#tune">
             here
           </a>
           .


@@ -3,7 +3,7 @@ Tune: Scalable Hyperparameter Tuning
 Tune is a scalable framework for hyperparameter search with a focus on deep learning and deep reinforcement learning.
-User documentation can be `found here <http://docs.ray.io/en/latest/tune.html>`__.
+User documentation can be `found here <http://docs.ray.io/en/master/tune.html>`__.
 Tutorial


@@ -17,7 +17,7 @@ accurate one. Often simple things like choosing a different learning rate or cha
 a network layer size can have a dramatic impact on your model performance.
 Fortunately, there are tools that help with finding the best combination of parameters.
-`Ray Tune <https://docs.ray.io/en/latest/tune.html>`_ is an industry standard tool for
+`Ray Tune <https://docs.ray.io/en/master/tune.html>`_ is an industry standard tool for
 distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search
 algorithms, integrates with TensorBoard and other analysis libraries, and natively
 supports distributed training through `Ray's distributed machine learning engine


@@ -9,7 +9,7 @@ def register_ray():
     except ImportError:
         msg = ("To use the ray backend you must install ray."
                "Try running 'pip install ray'."
-               "See https://docs.ray.io/en/latest/installation.html"
+               "See https://docs.ray.io/en/master/installation.html"
                "for more information.")
         raise ImportError(msg)
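The hunk above is Ray's joblib backend registration. For reference, a minimal usage sketch (`register_ray` and joblib's `parallel_backend` as documented in Ray's joblib integration; the workload is hypothetical):

```python
import joblib
import ray
from ray.util.joblib import register_ray

ray.init()       # start (or connect to) a Ray cluster
register_ray()   # make the "ray" backend available to joblib

def slow_square(x: int) -> int:
    return x * x

# Fan the work out over Ray workers instead of local processes.
with joblib.parallel_backend("ray"):
    results = joblib.Parallel(n_jobs=4)(
        joblib.delayed(slow_square)(i) for i in range(8)
    )

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```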


@@ -3,7 +3,7 @@ Running benchmarks
 RaySGD provides comparable or better performance than other existing solutions for parallel or distributed training.
-You can run ``ray/python/ray/util/sgd/torch/examples/benchmarks/benchmark.py`` for benchmarking the RaySGD TorchTrainer implementation. To benchmark training on a multi-node multi-gpu cluster, you can use the `Ray Autoscaler <https://docs.ray.io/en/latest/autoscaling.html#aws>`_.
+You can run ``ray/python/ray/util/sgd/torch/examples/benchmarks/benchmark.py`` for benchmarking the RaySGD TorchTrainer implementation. To benchmark training on a multi-node multi-gpu cluster, you can use the `Ray Autoscaler <https://docs.ray.io/en/master/autoscaling.html#aws>`_.
 DISCLAIMER: RaySGD does not provide any custom communication primitives. If you see any performance issues, you may need to file them on the PyTorch github repository.


@@ -3,7 +3,7 @@ RLlib: Scalable Reinforcement Learning
 RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
-For an overview of RLlib, see the [documentation](http://docs.ray.io/en/latest/rllib.html).
+For an overview of RLlib, see the [documentation](http://docs.ray.io/en/master/rllib.html).
 If you've found RLlib useful for your research, you can cite the [paper](https://arxiv.org/abs/1712.09381) as follows:


@@ -3,6 +3,6 @@ Policy Gradient (PG)
 An implementation of a vanilla policy gradient algorithm for TensorFlow and PyTorch.
-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#pg)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#pg)**
 **[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/pg/pg.py)**


@@ -5,6 +5,6 @@ Implementations of:
 Soft Actor-Critic Algorithm (SAC) and a discrete action extension.
-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#sac)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#sac)**
 **[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/sac/sac.py)**


@@ -6,7 +6,7 @@ This file defines the distributed Trainer class for the soft actor critic
 algorithm.
 See `sac_[tf|torch]_policy.py` for the definition of the policy loss.
-Detailed documentation: https://docs.ray.io/en/latest/rllib-algorithms.html#sac
+Detailed documentation: https://docs.ray.io/en/master/rllib-algorithms.html#sac
 """
 import logging


@@ -1,3 +1,3 @@
 Contributed algorithms, which can be run via ``rllib train --run=contrib/<alg_name>``
-See https://docs.ray.io/en/latest/rllib-dev.html for guidelines.
+See https://docs.ray.io/en/master/rllib-dev.html for guidelines.