# Ray Documentation

Repository for documentation of the Ray project, hosted at [docs.ray.io](https://docs.ray.io).
## Installation

To build the documentation, make sure you have `ray` installed first.
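If you don't have Ray installed yet, one straightforward option (a released wheel rather than a build from source) is:

```bash
# Install a released Ray wheel; a local source build also works if you have one.
pip install -U ray
```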
For building the documentation locally, install the following dependencies:

    pip install -r requirements-doc.txt
## Building the documentation

To compile the documentation and open it locally, run the following command from this directory:

    make html && open _build/html/index.html
## Building just one sub-project

Often your changes to the documentation only concern one sub-project, such as Tune or Train. To build just that sub-project and ignore the rest (which leads to build warnings due to broken references, etc.), run the following command:

    DOC_LIB=<project> sphinx-build -b html -d _build/doctrees source _build/html

where `<project>` is the name of the sub-project and can be any of the docs projects in the `source/` directory: either `tune`, `rllib`, `train`, `cluster`, `serve`, `raysgd`, `data`, or one of the ones starting with `ray-`, e.g. `ray-observability`.
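For example, to build only the `tune` sub-project you would run:

```bash
# Build only the Tune docs; cross-references into other sub-projects
# will surface as warnings and can be ignored.
DOC_LIB=tune sphinx-build -b html -d _build/doctrees source _build/html
```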
## Announcements and includes

To add new announcements and other messaging to the top or bottom of a documentation page, check the `_includes` folder first to see if the message you want is already there (like "get help" or "we're hiring", etc.). If not, add the template you want and include it accordingly, i.e. with

    .. include:: /_includes/<my-announcement>

This ensures consistent messaging across documentation pages.
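As a sketch (the page and file names here are hypothetical), an announcement pulled in at the bottom of an rST page would look like this:

```rst
My Documentation Page
=====================

Regular page content.

.. Hypothetical placement at the bottom of the page; replace
   <my-announcement> with a real file from the _includes folder.

.. include:: /_includes/<my-announcement>
```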
## Checking for broken links

To check if there are broken links, run the following (we are currently not running this in the CI since there are false positives):

    make linkcheck
## Running doctests

To run tests for examples shipping with docstrings in Python files, run the following command:

    make doctest
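As an illustration (not a file from this repository), a doctest-style example inside a docstring looks like this; the doctest run executes the `>>>` lines and compares the printed output:

```python
def add(a: int, b: int) -> int:
    """Add two integers.

    Example:
        >>> add(2, 3)
        5
    """
    return a + b
```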
## Adding examples as MyST Markdown Notebooks

You can now add executable notebooks to this project, which will get built into the documentation. An example can be found here. By default, building the docs with `make html` will not run those notebooks. If you set the `RUN_NOTEBOOKS` environment variable to `"cache"`, each notebook cell will be run when you build the documentation, and outputs will be cached into `_build/.jupyter_cache`:

    RUN_NOTEBOOKS="cache" make html

To force re-running the notebooks, use `RUN_NOTEBOOKS="force"`.

With caching, the first time you build the documentation it might take a while to run the notebooks. After that, notebook execution is only triggered when you change the notebook source file.

The benefit of working with notebooks for examples is that you don't separate the code from the documentation, yet you can still easily smoke-test the code.
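As a minimal sketch of the format (the exact front-matter fields may differ from what this repository uses), a MyST Markdown notebook is a plain Markdown file with Jupytext/kernel metadata and `{code-cell}` directives for the executable cells:

````markdown
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
kernelspec:
  display_name: Python 3
  name: python3
---

# My example notebook

Some explanatory text around the code.

```{code-cell} python3
# This cell is executed when RUN_NOTEBOOKS is set to "cache" or "force".
print("hello from the docs build")
```
````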