diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index b362808a2..46429eec4 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -19,4 +19,4 @@ Please provide a script that can be run to reproduce the issue. The script shoul
If we cannot run your script, we cannot fix your issue.
- [ ] I have verified my script runs in a clean environment and reproduces the issue.
-- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
+- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/master/installation.html).
diff --git a/README.rst b/README.rst
index 79dc6992b..52a826841 100644
--- a/README.rst
+++ b/README.rst
@@ -1,7 +1,7 @@
.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
-.. image:: https://readthedocs.org/projects/ray/badge/?version=latest
- :target: http://docs.ray.io/en/latest/?badge=latest
+.. image:: https://readthedocs.org/projects/ray/badge/?version=master
+ :target: http://docs.ray.io/en/master/?badge=master
.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
:target: https://forms.gle/9TSdDYUgxYs8SA9e8
@@ -15,7 +15,7 @@ Ray is packaged with the following libraries for accelerating machine learning w
- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
-- `RaySGD `__: Distributed Training Wrappers
+- `RaySGD `__: Distributed Training Wrappers
- `Ray Serve`_: Scalable and Programmable Serving
There are also many `community integrations `_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here `_.
@@ -78,7 +78,7 @@ Ray programs can run on a single machine, and can also seamlessly scale to large
``ray submit [CLUSTER.YAML] example.py --start``
-Read more about `launching clusters <https://docs.ray.io/en/latest/cluster/launcher.html>`_.
+Read more about `launching clusters <https://docs.ray.io/en/master/cluster/launcher.html>`_.
Tune Quick Start
----------------
@@ -140,10 +140,10 @@ If TensorBoard is installed, automatically visualize all trial results:
tensorboard --logdir ~/ray_results
-.. _`Tune`: https://docs.ray.io/en/latest/tune.html
-.. _`Population Based Training (PBT)`: https://docs.ray.io/en/latest/tune-schedulers.html#population-based-training-pbt
-.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/latest/tune-schedulers.html#median-stopping-rule
-.. _`HyperBand/ASHA`: https://docs.ray.io/en/latest/tune-schedulers.html#asynchronous-hyperband
+.. _`Tune`: https://docs.ray.io/en/master/tune.html
+.. _`Population Based Training (PBT)`: https://docs.ray.io/en/master/tune-schedulers.html#population-based-training-pbt
+.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/master/tune-schedulers.html#median-stopping-rule
+.. _`HyperBand/ASHA`: https://docs.ray.io/en/master/tune-schedulers.html#asynchronous-hyperband
RLlib Quick Start
-----------------
@@ -189,7 +189,7 @@ RLlib Quick Start
"num_workers": 4,
"env_config": {"corridor_length": 5}})
-.. _`RLlib`: https://docs.ray.io/en/latest/rllib.html
+.. _`RLlib`: https://docs.ray.io/en/master/rllib.html
Ray Serve Quick Start
@@ -264,7 +264,7 @@ This example serves a scikit-learn gradient boosting classifier.
# }
-.. _`Ray Serve`: https://docs.ray.io/en/latest/serve/index.html
+.. _`Ray Serve`: https://docs.ray.io/en/master/serve/index.html
More Information
----------------
@@ -282,7 +282,7 @@ More Information
- `Ray HotOS paper`_
- `Blog (old)`_
-.. _`Documentation`: http://docs.ray.io/en/latest/index.html
+.. _`Documentation`: http://docs.ray.io/en/master/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog (old)`: https://ray-project.github.io/
.. _`Blog`: https://medium.com/distributed-computing-with-ray
diff --git a/dashboard/client/src/pages/dashboard/logical-view/ActorDetailsPane.tsx b/dashboard/client/src/pages/dashboard/logical-view/ActorDetailsPane.tsx
index ea970fd58..888afd749 100644
--- a/dashboard/client/src/pages/dashboard/logical-view/ActorDetailsPane.tsx
+++ b/dashboard/client/src/pages/dashboard/logical-view/ActorDetailsPane.tsx
@@ -13,7 +13,7 @@ import { sum } from "../../../common/util";
import ActorStateRepr from "./ActorStateRepr";
const memoryDebuggingDocLink =
- "https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory";
+ "https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory";
type ActorDatum = {
label: string;
diff --git a/dashboard/client/src/pages/dashboard/tune/Tune.tsx b/dashboard/client/src/pages/dashboard/tune/Tune.tsx
index 08dd287cc..305e430d8 100644
--- a/dashboard/client/src/pages/dashboard/tune/Tune.tsx
+++ b/dashboard/client/src/pages/dashboard/tune/Tune.tsx
@@ -143,7 +143,7 @@ class Tune extends React.Component<
You can use this tab to monitor Tune jobs, their statuses,
hyperparameters, and more. For more information, read the
documentation{" "}
-
+
here
.
diff --git a/doc/examples/cython/ray-project/project.yaml b/doc/examples/cython/ray-project/project.yaml
index 484b32408..9fff50815 100644
--- a/doc/examples/cython/ray-project/project.yaml
+++ b/doc/examples/cython/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-cython
description: "Example of how to use Cython with ray"
tags: ["ray-example", "cython"]
-documentation: https://docs.ray.io/en/latest/advanced.html#cython-code-in-ray
+documentation: https://docs.ray.io/en/master/advanced.html#cython-code-in-ray
cluster:
config: ray-project/cluster.yaml
diff --git a/doc/examples/lbfgs/ray-project/project.yaml b/doc/examples/lbfgs/ray-project/project.yaml
index 8803e355f..5aa581fef 100644
--- a/doc/examples/lbfgs/ray-project/project.yaml
+++ b/doc/examples/lbfgs/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-lbfgs
description: "Parallelizing the L-BFGS algorithm in ray"
tags: ["ray-example", "optimization", "lbfgs"]
-documentation: https://docs.ray.io/en/latest/auto_examples/plot_lbfgs.html
+documentation: https://docs.ray.io/en/master/auto_examples/plot_lbfgs.html
cluster:
config: ray-project/cluster.yaml
diff --git a/doc/examples/newsreader/ray-project/project.yaml b/doc/examples/newsreader/ray-project/project.yaml
index 5fe360a7f..f061db0bd 100644
--- a/doc/examples/newsreader/ray-project/project.yaml
+++ b/doc/examples/newsreader/ray-project/project.yaml
@@ -4,7 +4,7 @@ name: ray-example-newsreader
description: "A simple news reader example that uses ray actors to serve requests"
tags: ["ray-example", "flask", "rss", "newsreader"]
-documentation: https://docs.ray.io/en/latest/auto_examples/plot_newsreader.html
+documentation: https://docs.ray.io/en/master/auto_examples/plot_newsreader.html
cluster:
config: ray-project/cluster.yaml
diff --git a/doc/examples/overview.rst b/doc/examples/overview.rst
index d23bdcadd..738923e96 100644
--- a/doc/examples/overview.rst
+++ b/doc/examples/overview.rst
@@ -90,7 +90,7 @@ Machine Learning Examples
Reinforcement Learning Examples
-------------------------------
-These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <https://docs.ray.io/en/latest/rllib.html>`__.
+These are simple examples that show you how to leverage Ray Core. For Ray's production-grade reinforcement learning library, see `RLlib <https://docs.ray.io/en/master/rllib.html>`__.
.. raw:: html
diff --git a/doc/examples/plot_example-a3c.rst b/doc/examples/plot_example-a3c.rst
index 40eef7e75..789f7dcfd 100644
--- a/doc/examples/plot_example-a3c.rst
+++ b/doc/examples/plot_example-a3c.rst
@@ -13,7 +13,7 @@ View the `code for this example`_.
.. note::
- For an overview of Ray's reinforcement learning library, see `RLlib <https://docs.ray.io/en/latest/rllib.html>`__.
+ For an overview of Ray's reinforcement learning library, see `RLlib <https://docs.ray.io/en/master/rllib.html>`__.
To run the application, first install **ray** and then some dependencies:
diff --git a/doc/examples/plot_hyperparameter.py b/doc/examples/plot_hyperparameter.py
index d0a130f76..48318fd9d 100644
--- a/doc/examples/plot_hyperparameter.py
+++ b/doc/examples/plot_hyperparameter.py
@@ -16,7 +16,7 @@ their results to be ready.
hyperparameter tuning, use `Tune`_, a scalable hyperparameter
tuning library built using Ray's Actor API.
-.. _`Tune`: https://docs.ray.io/en/latest/tune.html
+.. _`Tune`: https://docs.ray.io/en/master/tune.html
Setup: Dependencies
-------------------
diff --git a/doc/examples/plot_streaming.rst b/doc/examples/plot_streaming.rst
index babb77c9e..4eef5c429 100644
--- a/doc/examples/plot_streaming.rst
+++ b/doc/examples/plot_streaming.rst
@@ -87,7 +87,7 @@ the top 10 words in these articles together with their word count:
Note that this example uses `distributed actor handles`_, which are still
considered experimental.
-.. _`distributed actor handles`: http://docs.ray.io/en/latest/actors.html
+.. _`distributed actor handles`: http://docs.ray.io/en/master/actors.html
There is a ``Mapper`` actor, which has a method ``get_range`` used to retrieve
word counts for words in a certain range:
diff --git a/doc/site/_posts/2017-05-17-announcing-ray.markdown b/doc/site/_posts/2017-05-17-announcing-ray.markdown
index a8cec8713..888072440 100644
--- a/doc/site/_posts/2017-05-17-announcing-ray.markdown
+++ b/doc/site/_posts/2017-05-17-announcing-ray.markdown
@@ -8,7 +8,7 @@ date: 2017-05-20 14:00:00
This post announces Ray, a framework for efficiently running Python code on
clusters and large multi-core machines. The project is open source.
You can check out [the code](https://github.com/ray-project/ray) and
-[the documentation](http://docs.ray.io/en/latest/?badge=latest).
+[the documentation](http://docs.ray.io/en/master/?badge=master).
Many AI algorithms are computationally intensive and exhibit complex
communication patterns. As a result, many researchers spend most of their
diff --git a/doc/site/_posts/2017-09-30-ray-0.2-release.markdown b/doc/site/_posts/2017-09-30-ray-0.2-release.markdown
index 45a649aa5..329091be1 100644
--- a/doc/site/_posts/2017-09-30-ray-0.2-release.markdown
+++ b/doc/site/_posts/2017-09-30-ray-0.2-release.markdown
@@ -134,12 +134,12 @@ state of the actor. We are working on improving the speed of recovery by
enabling actor state to be restored from checkpoints. See [an overview of fault
tolerance in Ray][4].
-[1]: http://docs.ray.io/en/latest/plasma-object-store.html
-[2]: http://docs.ray.io/en/latest/webui.html
-[3]: http://docs.ray.io/en/latest/rllib.html
-[4]: http://docs.ray.io/en/latest/fault-tolerance.html
+[1]: http://docs.ray.io/en/master/plasma-object-store.html
+[2]: http://docs.ray.io/en/master/webui.html
+[3]: http://docs.ray.io/en/master/rllib.html
+[4]: http://docs.ray.io/en/master/fault-tolerance.html
[5]: https://github.com/apache/arrow
-[6]: http://docs.ray.io/en/latest/example-a3c.html
+[6]: http://docs.ray.io/en/master/example-a3c.html
[7]: https://github.com/openai/baselines
[8]: https://github.com/ray-project/ray/blob/b020e6bf1fb00d0745371d8674146d4a5b75d9f0/python/ray/rllib/test/tuned_examples.sh#L11
[9]: https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization
diff --git a/doc/site/_posts/2017-10-15-fast-python-serialization-with-ray-and-arrow.markdown b/doc/site/_posts/2017-10-15-fast-python-serialization-with-ray-and-arrow.markdown
index aaae043e9..fc7edc420 100644
--- a/doc/site/_posts/2017-10-15-fast-python-serialization-with-ray-and-arrow.markdown
+++ b/doc/site/_posts/2017-10-15-fast-python-serialization-with-ray-and-arrow.markdown
@@ -271,7 +271,7 @@ for i in range(len(test_objects)):
plot(*benchmark_object(test_objects[i]), titles[i], i)
```
-[1]: http://docs.ray.io/en/latest/index.html
+[1]: http://docs.ray.io/en/master/index.html
[2]: https://arrow.apache.org/
[3]: https://en.wikipedia.org/wiki/Serialization
[4]: https://github.com/cloudpipe/cloudpickle/
diff --git a/doc/site/_posts/2017-11-30-ray-0.3-release.markdown b/doc/site/_posts/2017-11-30-ray-0.3-release.markdown
index 7404d8874..875b61d50 100644
--- a/doc/site/_posts/2017-11-30-ray-0.3-release.markdown
+++ b/doc/site/_posts/2017-11-30-ray-0.3-release.markdown
@@ -134,14 +134,14 @@ This feature is still considered experimental, but we've already found
distributed actor handles useful for implementing [**parameter server**][10] and
[**streaming MapReduce**][11] applications.
-[1]: http://docs.ray.io/en/latest/actors.html#passing-around-actor-handles-experimental
-[2]: http://docs.ray.io/en/latest/tune.html
-[3]: http://docs.ray.io/en/latest/rllib.html
+[1]: http://docs.ray.io/en/master/actors.html#passing-around-actor-handles-experimental
+[2]: http://docs.ray.io/en/master/tune.html
+[3]: http://docs.ray.io/en/master/rllib.html
[4]: https://research.google.com/pubs/pub46180.html
[5]: https://arxiv.org/abs/1603.06560
[6]: https://www.tensorflow.org/get_started/summaries_and_tensorboard
[7]: https://media.readthedocs.org/pdf/rllab/latest/rllab.pdf
[8]: https://en.wikipedia.org/wiki/Parallel_coordinates
[9]: https://github.com/ray-project/ray/tree/master/python/ray/tune
-[10]: http://docs.ray.io/en/latest/example-parameter-server.html
-[11]: http://docs.ray.io/en/latest/example-streaming.html
+[10]: http://docs.ray.io/en/master/example-parameter-server.html
+[11]: http://docs.ray.io/en/master/example-streaming.html
diff --git a/doc/site/_posts/2018-03-27-ray-0.4-release.markdown b/doc/site/_posts/2018-03-27-ray-0.4-release.markdown
index a354b176e..1e311cc75 100644
--- a/doc/site/_posts/2018-03-27-ray-0.4-release.markdown
+++ b/doc/site/_posts/2018-03-27-ray-0.4-release.markdown
@@ -78,10 +78,10 @@ Training][9].
[1]: https://github.com/ray-project/ray
[2]: https://rise.cs.berkeley.edu/blog/pandas-on-ray/
-[3]: http://docs.ray.io/en/latest/rllib.html
-[4]: http://docs.ray.io/en/latest/tune.html
+[3]: http://docs.ray.io/en/master/rllib.html
+[4]: http://docs.ray.io/en/master/tune.html
[5]: https://rise.cs.berkeley.edu/blog/distributed-policy-optimizers-for-scalable-and-reproducible-deep-rl/
-[6]: http://docs.ray.io/en/latest/resources.html
+[6]: http://docs.ray.io/en/master/resources.html
[7]: https://pandas.pydata.org/
[8]: https://arxiv.org/abs/1803.00933
-[9]: http://docs.ray.io/en/latest/pbt.html
+[9]: http://docs.ray.io/en/master/pbt.html
diff --git a/doc/site/_posts/2018-07-06-ray-0.5-release.markdown b/doc/site/_posts/2018-07-06-ray-0.5-release.markdown
index e4a03a1af..bf16f32fc 100644
--- a/doc/site/_posts/2018-07-06-ray-0.5-release.markdown
+++ b/doc/site/_posts/2018-07-06-ray-0.5-release.markdown
@@ -76,8 +76,8 @@ Ray now supports Java thanks to contributions from [Ant Financial][4]:
[1]: https://github.com/ray-project/ray
-[2]: http://docs.ray.io/en/latest/rllib.html
-[3]: http://docs.ray.io/en/latest/tune.html
+[2]: http://docs.ray.io/en/master/rllib.html
+[3]: http://docs.ray.io/en/master/tune.html
[4]: https://www.antfin.com/
[5]: https://github.com/modin-project/modin
-[6]: http://docs.ray.io/en/latest/autoscaling.html
+[6]: http://docs.ray.io/en/master/autoscaling.html
diff --git a/doc/site/_posts/2018-07-15-parameter-server-in-fifteen-lines.markdown b/doc/site/_posts/2018-07-15-parameter-server-in-fifteen-lines.markdown
index 29ae8be9c..aac240334 100644
--- a/doc/site/_posts/2018-07-15-parameter-server-in-fifteen-lines.markdown
+++ b/doc/site/_posts/2018-07-15-parameter-server-in-fifteen-lines.markdown
@@ -321,12 +321,12 @@ Questions should be directed to *ray-dev@googlegroups.com*.
[1]: https://github.com/ray-project/ray
-[2]: http://docs.ray.io/en/latest/resources.html
+[2]: http://docs.ray.io/en/master/resources.html
[3]: http://www.sysml.cc/doc/206.pdf
-[4]: http://docs.ray.io/en/latest/rllib.html
-[5]: http://docs.ray.io/en/latest/tune.html
-[6]: http://docs.ray.io/en/latest
-[7]: http://docs.ray.io/en/latest/api.html
+[4]: http://docs.ray.io/en/master/rllib.html
+[5]: http://docs.ray.io/en/master/tune.html
+[6]: http://docs.ray.io/en/master
+[7]: http://docs.ray.io/en/master/api.html
[8]: https://github.com/modin-project/modin
[9]: https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html
[10]: https://ray-project.github.io/2017/08/08/plasma-in-memory-object-store.html
diff --git a/doc/site/get_ray.html b/doc/site/get_ray.html
index a741d2486..a8a77b6f6 100644
--- a/doc/site/get_ray.html
+++ b/doc/site/get_ray.html
@@ -25,7 +25,7 @@ layout: default
diff --git a/doc/site/index.html b/doc/site/index.html
index bde1d613c..0737b2dd3 100644
--- a/doc/site/index.html
+++ b/doc/site/index.html
@@ -33,6 +33,6 @@ layout: default
- To get started, visit the Ray Project web site, documentation, GitHub project, or Tutorials.
+ To get started, visit the Ray Project web site, documentation, GitHub project, or Tutorials.
diff --git a/doc/source/cluster/cloud.rst b/doc/source/cluster/cloud.rst
index 0e00db54c..b9a6c4bcd 100644
--- a/doc/source/cluster/cloud.rst
+++ b/doc/source/cluster/cloud.rst
@@ -302,4 +302,4 @@ Now that you have a working understanding of the cluster launcher, check out:
Questions or Issues?
--------------------
-.. include:: /_help.rst
\ No newline at end of file
+.. include:: /_help.rst
diff --git a/doc/source/cluster/index.rst b/doc/source/cluster/index.rst
index b69cb5134..529c4993d 100644
--- a/doc/source/cluster/index.rst
+++ b/doc/source/cluster/index.rst
@@ -76,7 +76,7 @@ on each machine. To install Ray, follow the `installation instructions`_.
To configure the Ray cluster to run Java code, you need to add the ``--code-search-path`` option. See :ref:`code_search_path` for more details.
-.. _`installation instructions`: http://docs.ray.io/en/latest/installation.html
+.. _`installation instructions`: http://docs.ray.io/en/master/installation.html
Starting Ray on each machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/conf.py b/doc/source/conf.py
index f327767cf..c3439949a 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -119,10 +119,6 @@ extensions = [
versionwarning_admonition_type = "tip"
versionwarning_messages = {
- "master": (
- "This document is for the master branch. "
- 'Visit the latest pip release documentation here.'
- ),
"latest": (
"This document is for the latest pip release. "
'Visit the master branch documentation here.'
diff --git a/doc/source/debugging.rst b/doc/source/debugging.rst
index c7d015790..8b12f1f02 100644
--- a/doc/source/debugging.rst
+++ b/doc/source/debugging.rst
@@ -97,4 +97,4 @@ This will print any ``RAY_LOG(DEBUG)`` lines in the source code to the
.. _`issues`: https://github.com/ray-project/ray/issues
-.. _`Temporary Files`: http://docs.ray.io/en/latest/tempfile.html
+.. _`Temporary Files`: http://docs.ray.io/en/master/tempfile.html
diff --git a/doc/source/ray-dashboard.rst b/doc/source/ray-dashboard.rst
index b7b31d9e2..20c283154 100644
--- a/doc/source/ray-dashboard.rst
+++ b/doc/source/ray-dashboard.rst
@@ -137,7 +137,7 @@ You can view information for Ray objects in the memory tab. It is useful to debu
One common cause of these memory errors is that there are objects which never go out of scope. In order to find these, you can go to the Memory View, then select to "Group By Stack Trace." This groups memory entries by their stack traces up to three frames deep. If you see a group which is growing without bound, you might want to examine that line of code to see if you intend to keep that reference around.
-Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.
+Note that this is the same information as displayed in the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_. For details about the information contained in the table, please see the `ray memory` documentation.
Inspect Memory Usage
~~~~~~~~~~~~~~~~~~~~
@@ -283,7 +283,7 @@ Memory
**Object Size** Object Size of a Ray object in bytes.
-**Reference Type**: Reference types of Ray objects. Check out the `ray memory command <https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory>`_ to learn each reference type.
+**Reference Type**: Reference types of Ray objects. Check out the `ray memory command <https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory>`_ to learn each reference type.
**Call Site**: Call site where this Ray object is referenced, up to three stack frames deep.
diff --git a/doc/source/rllib-algorithms.rst b/doc/source/rllib-algorithms.rst
index 9cbfff9fb..4068dce49 100644
--- a/doc/source/rllib-algorithms.rst
+++ b/doc/source/rllib-algorithms.rst
@@ -262,7 +262,7 @@ Deep Deterministic Policy Gradients (DDPG, TD3)
-----------------------------------------------
|pytorch| |tensorflow|
`[paper] `__ `[implementation] `__
-DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 `__ are available as ``TD3``.
+DDPG is implemented similarly to DQN (below). The algorithm can be scaled by increasing the number of workers or using Ape-X. The improvements from `TD3 `__ are available as ``TD3``.
.. figure:: dqn-arch.svg
diff --git a/doc/source/rllib-dev.rst b/doc/source/rllib-dev.rst
index 2c917f9e8..a28a50442 100644
--- a/doc/source/rllib-dev.rst
+++ b/doc/source/rllib-dev.rst
@@ -4,7 +4,7 @@ Contributing to RLlib
Development Install
-------------------
-You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py `__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However if you have installed ray from source using [these instructions](https://docs.ray.io/en/latest/installation.html) then do not this as these steps should have already created this symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master `__ and have the latest `wheel `__ installed.)
+You can develop RLlib locally without needing to compile Ray by using the `setup-dev.py `__ script. This sets up links between the ``rllib`` dir in your git repo and the one bundled with the ``ray`` package. However, if you have installed Ray from source using [these instructions](https://docs.ray.io/en/master/installation.html), do not do this, as those steps should have already created the symlink. When using this script, make sure that your git branch is in sync with the installed Ray binaries (i.e., you are up-to-date on `master `__ and have the latest `wheel `__ installed.)
API Stability
-------------
diff --git a/doc/source/rllib-examples.rst b/doc/source/rllib-examples.rst
index 0f70a536a..9764644a0 100644
--- a/doc/source/rllib-examples.rst
+++ b/doc/source/rllib-examples.rst
@@ -123,5 +123,5 @@ Community Examples
Example of using the multi-agent API to model several `social dilemma games `__.
- `StarCraft2 `__:
Example of training in StarCraft2 maps with RLlib / multi-agent.
-- `Traffic Flow `__:
+- `Traffic Flow `__:
Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.
diff --git a/doc/source/troubleshooting.rst b/doc/source/troubleshooting.rst
index 73ec9a2f9..e8d8657dd 100644
--- a/doc/source/troubleshooting.rst
+++ b/doc/source/troubleshooting.rst
@@ -105,7 +105,7 @@ on what Ray functionalities we use, let us see what cProfile's output might look
like if our example involved Actors (for an introduction to Ray actors, see our
`Actor documentation here`_).
-.. _`Actor documentation here`: http://docs.ray.io/en/latest/actors.html
+.. _`Actor documentation here`: http://docs.ray.io/en/master/actors.html
Now, instead of looping over five calls to a remote function like in ``ex1``,
let's create a new example and loop over five calls to a remote function
diff --git a/docker/ray/README.md b/docker/ray/README.md
index deca5499e..dce6068f6 100644
--- a/docker/ray/README.md
+++ b/docker/ray/README.md
@@ -1,6 +1,6 @@
## About
Default docker images for [Ray](https://github.com/ray-project/ray)! This includes
-everything needed to get started with running Ray! They work for both local development and *are ideal* for use with the [Ray Cluster Launcher](https://docs.ray.io/en/latest/cluster/launcher.html). [Find the Dockerfile here.](https://github.com/ray-project/ray/blob/master/docker/ray/Dockerfile)
+everything needed to get started with running Ray! They work for both local development and *are ideal* for use with the [Ray Cluster Launcher](https://docs.ray.io/en/master/cluster/launcher.html). [Find the Dockerfile here.](https://github.com/ray-project/ray/blob/master/docker/ray/Dockerfile)
diff --git a/python/asv.conf.json b/python/asv.conf.json
index 1e49f712f..ea15dad52 100644
--- a/python/asv.conf.json
+++ b/python/asv.conf.json
@@ -7,7 +7,7 @@
"project": "ray",
// The project's homepage
- "project_url": "http://docs.ray.io/en/latest/index.html",
+ "project_url": "http://docs.ray.io/en/master/index.html",
// The URL or local path of the source code repository for the
// project being benchmarked
diff --git a/python/ray/dashboard/client/src/pages/dashboard/logical-view/Actor.tsx b/python/ray/dashboard/client/src/pages/dashboard/logical-view/Actor.tsx
index c715db1a8..bf82b97f7 100644
--- a/python/ray/dashboard/client/src/pages/dashboard/logical-view/Actor.tsx
+++ b/python/ray/dashboard/client/src/pages/dashboard/logical-view/Actor.tsx
@@ -14,7 +14,7 @@ import { sum } from "../../../common/util";
import ActorDetailsPane from "./ActorDetailsPane";
const memoryDebuggingDocLink =
- "https://docs.ray.io/en/latest/memory-management.html#debugging-using-ray-memory";
+ "https://docs.ray.io/en/master/memory-management.html#debugging-using-ray-memory";
const useActorStyles = makeStyles((theme: Theme) =>
createStyles({
diff --git a/python/ray/dashboard/client/src/pages/dashboard/tune/Tune.tsx b/python/ray/dashboard/client/src/pages/dashboard/tune/Tune.tsx
index b5f0bb3be..2c070e04b 100644
--- a/python/ray/dashboard/client/src/pages/dashboard/tune/Tune.tsx
+++ b/python/ray/dashboard/client/src/pages/dashboard/tune/Tune.tsx
@@ -143,7 +143,7 @@ class Tune extends React.Component<
You can use this tab to monitor Tune jobs, their statuses,
hyperparameters, and more. For more information, read the
documentation{" "}
-
+
here
.
diff --git a/python/ray/tune/README.rst b/python/ray/tune/README.rst
index badbfd3df..649af11cf 100644
--- a/python/ray/tune/README.rst
+++ b/python/ray/tune/README.rst
@@ -3,7 +3,7 @@ Tune: Scalable Hyperparameter Tuning
Tune is a scalable framework for hyperparameter search with a focus on deep learning and deep reinforcement learning.
-User documentation can be `found here <https://docs.ray.io/en/latest/tune.html>`__.
+User documentation can be `found here <https://docs.ray.io/en/master/tune.html>`__.
Tutorial
diff --git a/python/ray/tune/tests/ext_pytorch.py b/python/ray/tune/tests/ext_pytorch.py
index 5b9db0193..7f0abf989 100644
--- a/python/ray/tune/tests/ext_pytorch.py
+++ b/python/ray/tune/tests/ext_pytorch.py
@@ -17,7 +17,7 @@ accurate one. Often simple things like choosing a different learning rate or cha
a network layer size can have a dramatic impact on your model performance.
Fortunately, there are tools that help with finding the best combination of parameters.
-`Ray Tune <https://docs.ray.io/en/latest/tune.html>`_ is an industry standard tool for
+`Ray Tune <https://docs.ray.io/en/master/tune.html>`_ is an industry standard tool for
distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search
algorithms, integrates with TensorBoard and other analysis libraries, and natively
supports distributed training through `Ray's distributed machine learning engine
diff --git a/python/ray/util/joblib/__init__.py b/python/ray/util/joblib/__init__.py
index 3e58f51cb..e6bb730da 100644
--- a/python/ray/util/joblib/__init__.py
+++ b/python/ray/util/joblib/__init__.py
@@ -9,7 +9,7 @@ def register_ray():
except ImportError:
msg = ("To use the ray backend you must install ray. "
"Try running 'pip install ray'. "
- "See https://docs.ray.io/en/latest/installation.html"
+          "See https://docs.ray.io/en/master/installation.html "
"for more information.")
raise ImportError(msg)
diff --git a/python/ray/util/sgd/torch/examples/benchmarks/README.rst b/python/ray/util/sgd/torch/examples/benchmarks/README.rst
index 566645cc0..78dd71a15 100644
--- a/python/ray/util/sgd/torch/examples/benchmarks/README.rst
+++ b/python/ray/util/sgd/torch/examples/benchmarks/README.rst
@@ -3,7 +3,7 @@ Running benchmarks
RaySGD provides comparable or better performance than other existing solutions for parallel or distributed training.
-You can run ``ray/python/ray/util/sgd/torch/examples/benchmarks/benchmark.py`` for benchmarking the RaySGD TorchTrainer implementation. To benchmark training on a multi-node multi-gpu cluster, you can use the `Ray Autoscaler <https://docs.ray.io/en/latest/autoscaling.html>`_.
+You can run ``ray/python/ray/util/sgd/torch/examples/benchmarks/benchmark.py`` for benchmarking the RaySGD TorchTrainer implementation. To benchmark training on a multi-node multi-gpu cluster, you can use the `Ray Autoscaler <https://docs.ray.io/en/master/autoscaling.html>`_.
DISCLAIMER: RaySGD does not provide any custom communication primitives. If you see any performance issues, you may need to file them on the PyTorch github repository.
diff --git a/rllib/README.md b/rllib/README.md
index 0c89e5eb3..c39601d07 100644
--- a/rllib/README.md
+++ b/rllib/README.md
@@ -3,7 +3,7 @@ RLlib: Scalable Reinforcement Learning
RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
-For an overview of RLlib, see the [documentation](http://docs.ray.io/en/latest/rllib.html).
+For an overview of RLlib, see the [documentation](http://docs.ray.io/en/master/rllib.html).
If you've found RLlib useful for your research, you can cite the [paper](https://arxiv.org/abs/1712.09381) as follows:
diff --git a/rllib/agents/pg/README.md b/rllib/agents/pg/README.md
index 407435ce5..6812434f6 100644
--- a/rllib/agents/pg/README.md
+++ b/rllib/agents/pg/README.md
@@ -3,6 +3,6 @@ Policy Gradient (PG)
An implementation of a vanilla policy gradient algorithm for TensorFlow and PyTorch.
-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#pg)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#pg)**
**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/pg/pg.py)**
diff --git a/rllib/agents/sac/README.md b/rllib/agents/sac/README.md
index bacb48a15..8aa0c4c45 100644
--- a/rllib/agents/sac/README.md
+++ b/rllib/agents/sac/README.md
@@ -5,6 +5,6 @@ Implementations of:
Soft Actor-Critic Algorithm (SAC) and a discrete action extension.
-**[Detailed Documentation](https://docs.ray.io/en/latest/rllib-algorithms.html#sac)**
+**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#sac)**
**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/sac/sac.py)**
diff --git a/rllib/agents/sac/sac.py b/rllib/agents/sac/sac.py
index 72aac5977..daf66f88a 100644
--- a/rllib/agents/sac/sac.py
+++ b/rllib/agents/sac/sac.py
@@ -6,7 +6,7 @@ This file defines the distributed Trainer class for the soft actor critic
algorithm.
See `sac_[tf|torch]_policy.py` for the definition of the policy loss.
-Detailed documentation: https://docs.ray.io/en/latest/rllib-algorithms.html#sac
+Detailed documentation: https://docs.ray.io/en/master/rllib-algorithms.html#sac
"""
import logging
diff --git a/rllib/contrib/README.rst b/rllib/contrib/README.rst
index 7df36dcf2..6b48234ce 100644
--- a/rllib/contrib/README.rst
+++ b/rllib/contrib/README.rst
@@ -1,3 +1,3 @@
Contributed algorithms, which can be run via ``rllib train --run=contrib/``
-See https://docs.ray.io/en/latest/rllib-dev.html for guidelines.
+See https://docs.ray.io/en/master/rllib-dev.html for guidelines.
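A bulk URL rename like this is easy to leave incomplete. A quick repo-wide scan can confirm that no ``docs.ray.io/en/latest`` links remain after the patch is applied; the sketch below demonstrates the idea on a small sample file (the filename and contents are illustrative — in a real checkout you would point ``grep -r`` or ``git grep`` at the repository root instead):

```shell
# Write a small sample file containing one migrated and one stale link.
cat > sample.rst <<'EOF'
.. _`Tune`: https://docs.ray.io/en/master/tune.html
.. _`RLlib`: https://docs.ray.io/en/latest/rllib.html
EOF

# grep -c counts matching lines; a nonzero count means stale links remain.
stale=$(grep -c "docs.ray.io/en/latest" sample.rst)
echo "stale links: $stale"
```

Note that ``doc/source/conf.py`` would need to be excluded from such a check, since its version-warning banner intentionally keeps a message for the "latest" (pip release) build.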