[docs] Fix warnings for sphinx 1.8 (#10476)
* fix-build-for-sphinx18
* lint
parent 283f4d1060
commit 3f98a8bfcb
8 changed files with 17 additions and 17 deletions
@@ -239,7 +239,7 @@ Actor Pool
 The ``ray.util`` module contains a utility class, ``ActorPool``.
 This class is similar to multiprocessing.Pool and lets you schedule Ray tasks over a fixed pool of actors.
 
-.. code-block::
+.. code-block:: python
 
     from ray.util import ActorPool
 
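For context, a minimal sketch of how ``ActorPool`` is typically used, assuming the standard ``ray.util.ActorPool`` API; the ``Doubler`` actor is a made-up example, not part of this change.

.. code-block:: python

    import ray
    from ray.util import ActorPool

    ray.init()

    @ray.remote
    class Doubler:
        def double(self, v):
            return 2 * v

    # Schedule work over a fixed pool of two actors.
    pool = ActorPool([Doubler.remote(), Doubler.remote()])
    results = list(pool.map(lambda actor, v: actor.double.remote(v), [1, 2, 3, 4]))
    print(results)  # [2, 4, 6, 8]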
@@ -54,7 +54,7 @@ The ``ray memory`` command can be used to help track down what ``ObjectRef`` ref
 
 Running ``ray memory`` from the command line while a Ray application is running will give you a dump of all of the ``ObjectRef`` references that are currently held by the driver, actors, and tasks in the cluster.
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
@@ -85,7 +85,7 @@ There are five types of references that can keep an object pinned:
 
 In this example, we create references to two objects: one that is ``ray.put()`` in the object store and another that's the return value from ``f.remote()``.
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
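A hedged sketch of driver code that produces these two kinds of references; the task ``f`` is a placeholder standing in for the example in the docs being edited.

.. code-block:: python

    import ray

    ray.init()

    @ray.remote
    def f(arg):
        return arg

    # One reference from ray.put(), one from the return value of f.remote().
    a = ray.put(None)   # object placed directly in the object store
    b = f.remote(None)  # reference to the task's (pending) return value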
@@ -109,7 +109,7 @@ In the output from ``ray memory``, we can see that each of these is marked as a
 
 In this example, we create a ``numpy`` array and then store it in the object store. Then, we fetch the same numpy array from the object store and delete its ``ObjectRef``. In this case, the object is still pinned in the object store because the deserialized copy (stored in ``b``) points directly to the memory in the object store.
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
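A sketch of code matching this description, assuming Ray's zero-copy deserialization of numpy arrays.

.. code-block:: python

    import numpy as np
    import ray

    ray.init()

    # Put a numpy array in the object store, fetch it, then drop the ref.
    a = ray.put(np.zeros(1))
    b = ray.get(a)  # deserialized copy points directly into the object store
    del a           # the object stays pinned while ``b`` is alive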
@@ -134,7 +134,7 @@ The output from ``ray memory`` displays this as the object being ``PINNED_IN_MEM
 
 In this example, we first create an object via ``ray.put()`` and then submit a task that depends on the object.
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
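A sketch of the pattern described here; the long-running task body is illustrative only.

.. code-block:: python

    import time
    import ray

    ray.init()

    @ray.remote
    def f(arg):
        time.sleep(60)  # while this runs, the task pins its argument

    a = ray.put(None)
    b = f.remote(a)  # ``a`` is passed directly as a task argument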
@@ -162,7 +162,7 @@ While the task is running, we see that ``ray memory`` shows both a ``LOCAL_REFERE
 
 In this example, we again create an object via ``ray.put()``, but then pass it to a task wrapped in another object (in this case, a list).
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
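A sketch of the wrapped-argument case; the task ``f`` is again a placeholder.

.. code-block:: python

    import ray

    ray.init()

    @ray.remote
    def f(arg):
        pass

    a = ray.put(None)
    # The ObjectRef is captured inside another object (a list), so it is
    # tracked as a captured reference rather than a direct task argument.
    b = f.remote([a])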
@@ -186,7 +186,7 @@ Now, both the driver and the worker process running the task hold a ``LOCAL_REFE
 
 In this example, we first create an object via ``ray.put()``, then capture its ``ObjectRef`` inside of another ``ray.put()`` object, and delete the first ``ObjectRef``. In this case, both objects are still pinned.
 
-.. code-block::
+::
 
   -----------------------------------------------------------------------------------------------------
   Object Ref  Reference Type  Object Size  Reference Creation Site
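A sketch of the nested-reference case described above.

.. code-block:: python

    import ray

    ray.init()

    a = ray.put(None)
    b = ray.put([a])  # ``a`` is serialized inside the second object
    del a             # both objects stay pinned: ``b`` still refers to ``a``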
@@ -664,7 +664,7 @@ Here's some simple tips on how to debug the TorchTrainer.
 
 Try using ``ipdb``, a custom TrainingOperator, and ``num_workers=1``. This will provide you introspection into what is being called and when.
 
-.. code-block::
+.. code-block:: python
 
     # first run pip install ipdb
 
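A hedged sketch only: the exact RaySGD import paths and TrainingOperator hooks varied across Ray releases around this version, so treat the class and method names below as assumptions.

.. code-block:: python

    # first run: pip install ipdb
    import ipdb
    from ray.util.sgd.torch import TrainingOperator  # assumed import path

    class DebugOperator(TrainingOperator):
        def train_batch(self, batch, batch_info):
            ipdb.set_trace()  # drop into the debugger inside the training step
            return super().train_batch(batch, batch_info)

    # With num_workers=1 the operator runs in a single worker process, which
    # makes interactive debugging with ipdb practical, e.g. (arguments elided):
    # trainer = TorchTrainer(training_operator_cls=DebugOperator, num_workers=1, ...)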
@@ -245,7 +245,7 @@ The full main function looks like this:
 
 If you run the code, an example output could look like this:
 
-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 7
 
     Number of trials: 10 (10 TERMINATED)
@@ -250,7 +250,7 @@ The full code looks like this:
 In the example above, Tune runs 10 trials with different hyperparameter configurations.
 An example output could look like so:
 
-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 12
 
     +------------------------------+------------+-------+----------------+----------------+-------------+--------------+----------+-----------------+----------------------+
@@ -329,7 +329,7 @@ change layer sizes during a training run - which is what would happen in PBT.
 
 An example output could look like this:
 
-.. code-block::
+.. code-block:: bash
 
     +-----------------------------------------+------------+-------+----------------+----------------+-----------+--------------+-----------+-----------------+----------------------+
     | Trial name | status | loc | layer_1_size | layer_2_size | lr | batch_size | loss | mean_accuracy | training_iteration |
@@ -73,9 +73,9 @@ This example will support both modes. After each model selection run,
 we will tell Ray Serve to serve an updated model. We also include a
 small utility to query our served model to see if it works as it should.
 
-.. code-block::
+.. code-block:: bash
 
-    python tune-serve-integration-mnist.py --query 6
+    $ python tune-serve-integration-mnist.py --query 6
     Querying model with example #6. Label = 1, Response = 1, Correct = True
 
 Imports
@@ -331,7 +331,7 @@ hyperparameter configurations from this search space.
 
 The output of our training run could look like this:
 
-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 10
 
     +---------------------------------+------------+-------+-------------+-------------+--------------------+-------------+----------+--------+------------------+
@@ -447,7 +447,7 @@ available in ``env.evaluation_result_list`` below.
 
 The output of our run could look like this:
 
-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 13
 
     +---------------------------------+------------+-------+-------------+-------------+--------------------+-------------+----------+--------+------------------+
@@ -161,7 +161,7 @@ def DistributedTrainableCreator(func,
 
     Example:
 
-    .. code-block::
+    .. code-block:: python
 
         trainable_cls = DistributedTrainableCreator(
             train_func, num_workers=2)
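A minimal sketch of how the wrapped trainable would be handed to Tune, assuming the ``ray.tune.integration.torch`` import path of that era; ``train_func`` is a stub.

.. code-block:: python

    from ray import tune
    from ray.tune.integration.torch import DistributedTrainableCreator  # assumed path

    def train_func(config, checkpoint_dir=None):
        pass  # per-worker training loop would go here

    trainable_cls = DistributedTrainableCreator(train_func, num_workers=2)
    # The wrapped class is passed to Tune like any other trainable.
    analysis = tune.run(trainable_cls, num_samples=1)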
@@ -211,7 +211,7 @@ def distributed_checkpoint_dir(step, disable=False):
     again when invoking the training_function.
 
     Example:
 
-    .. code-block::
+    .. code-block:: python
 
        def train_func(config, checkpoint_dir):
            if checkpoint_dir:
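A fuller, hedged sketch of the pattern the docstring describes, assuming the ``ray.tune.integration.torch`` import path and a throwaway model; checkpoint file names and the ``disable`` handling are illustrative assumptions.

.. code-block:: python

    import os
    import torch
    from ray.tune.integration.torch import distributed_checkpoint_dir  # assumed path

    def train_func(config, checkpoint_dir=None):
        model = torch.nn.Linear(1, 1)  # placeholder model
        if checkpoint_dir:
            # Restore state if Tune restarts the trial from a checkpoint.
            model.load_state_dict(
                torch.load(os.path.join(checkpoint_dir, "checkpoint")))
        for epoch in range(3):
            # ... one epoch of training would go here ...
            # Write a checkpoint; in a multi-worker run, non-primary workers
            # would typically pass ``disable=True`` so only one copy is saved.
            with distributed_checkpoint_dir(step=epoch) as ckpt_dir:
                torch.save(model.state_dict(),
                           os.path.join(ckpt_dir, "checkpoint"))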