diff --git a/doc/source/actors.rst b/doc/source/actors.rst
index 960148650..210cd93d6 100644
--- a/doc/source/actors.rst
+++ b/doc/source/actors.rst
@@ -239,7 +239,7 @@ Actor Pool
 The ``ray.util`` module contains a utility class, ``ActorPool``.
 This class is similar to multiprocessing.Pool and lets you schedule Ray tasks over a fixed pool of actors.

-.. code-block::
+.. code-block:: python

     from ray.util import ActorPool
diff --git a/doc/source/memory-management.rst b/doc/source/memory-management.rst
index d15550f40..8d9fc8f2a 100644
--- a/doc/source/memory-management.rst
+++ b/doc/source/memory-management.rst
@@ -54,7 +54,7 @@ The ``ray memory`` command can be used to help track down what ``ObjectRef`` ref
 Running ``ray memory`` from the command line while a Ray application is running will give you a dump of all of the ``ObjectRef`` references that are currently held by the driver, actors, and tasks in the cluster.

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
@@ -85,7 +85,7 @@ There are five types of references that can keep an object pinned:
 In this example, we create references to two objects: one that is ``ray.put()`` in the object store and another that's the return value from ``f.remote()``.

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
@@ -109,7 +109,7 @@ In the output from ``ray memory``, we can see that each of these is marked as a
 In this example, we create a ``numpy`` array and then store it in the object store. Then, we fetch the same numpy array from the object store and delete its ``ObjectRef``. In this case, the object is still pinned in the object store because the deserialized copy (stored in ``b``) points directly to the memory in the object store.

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
@@ -134,7 +134,7 @@ The output from ``ray memory`` displays this as the object being ``PINNED_IN_MEM
 In this example, we first create an object via ``ray.put()`` and then submit a task that depends on the object.

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
@@ -162,7 +162,7 @@ While the task is running, we see that ``ray memory`` shows both a ``LOCAL_REFER
 In this example, we again create an object via ``ray.put()``, but then pass it to a task wrapped in another object (in this case, a list).

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
@@ -186,7 +186,7 @@ Now, both the driver and the worker process running the task hold a ``LOCAL_REFE
 In this example, we first create an object via ``ray.put()``, then capture its ``ObjectRef`` inside of another ``ray.put()`` object, and delete the first ``ObjectRef``. In this case, both objects are still pinned.

-.. code-block::
+::

   -----------------------------------------------------------------------------------------------------
    Object Ref                                Reference Type       Object Size   Reference Creation Site
diff --git a/doc/source/raysgd/raysgd_pytorch.rst b/doc/source/raysgd/raysgd_pytorch.rst
index 19160df26..138ef3791 100644
--- a/doc/source/raysgd/raysgd_pytorch.rst
+++ b/doc/source/raysgd/raysgd_pytorch.rst
@@ -664,7 +664,7 @@ Here's some simple tips on how to debug the TorchTrainer.
 Try using ``ipdb``, a custom TrainingOperator, and ``num_workers=1``.
 This will provide you introspection what is being called and when.

-.. code-block::
+.. code-block:: python

     # first run pip install ipdb
diff --git a/doc/source/tune/_tutorials/tune-pytorch-cifar.rst b/doc/source/tune/_tutorials/tune-pytorch-cifar.rst
index c4e291226..e8b2c5240 100644
--- a/doc/source/tune/_tutorials/tune-pytorch-cifar.rst
+++ b/doc/source/tune/_tutorials/tune-pytorch-cifar.rst
@@ -245,7 +245,7 @@ The full main function looks like this:
 If you run the code, an example output could look like this:

-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 7

     Number of trials: 10 (10 TERMINATED)
diff --git a/doc/source/tune/_tutorials/tune-pytorch-lightning.rst b/doc/source/tune/_tutorials/tune-pytorch-lightning.rst
index c2c2b4178..4cb35a2e5 100644
--- a/doc/source/tune/_tutorials/tune-pytorch-lightning.rst
+++ b/doc/source/tune/_tutorials/tune-pytorch-lightning.rst
@@ -250,7 +250,7 @@ The full code looks like this:
 In the example above, Tune runs 10 trials with different hyperparameter configurations.
 An example output could look like so:

-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 12

     +------------------------------+------------+-------+----------------+----------------+-------------+--------------+----------+-----------------+----------------------+
@@ -329,7 +329,7 @@ change layer sizes during a training run - which is what would happen in PBT.
 An example output could look like this:

-.. code-block::
+.. code-block:: bash

     +-----------------------------------------+------------+-------+----------------+----------------+-----------+--------------+-----------+-----------------+----------------------+
     | Trial name                              | status     | loc   |   layer_1_size |   layer_2_size |        lr |   batch_size |      loss |   mean_accuracy |   training_iteration |
diff --git a/doc/source/tune/_tutorials/tune-serve-integration-mnist.py b/doc/source/tune/_tutorials/tune-serve-integration-mnist.py
index 435c7cbbe..11f42f472 100644
--- a/doc/source/tune/_tutorials/tune-serve-integration-mnist.py
+++ b/doc/source/tune/_tutorials/tune-serve-integration-mnist.py
@@ -73,9 +73,9 @@ This example will support both modes. After each model selection run,
 we will tell Ray Serve to serve an updated model. We also include a
 small utility to query our served model to see if it works as it should.

-.. code-block::
+.. code-block:: bash

-    python tune-serve-integration-mnist.py --query 6
+    $ python tune-serve-integration-mnist.py --query 6
     Querying model with example #6.
     Label = 1, Response = 1, Correct = True

 Imports
diff --git a/doc/source/tune/_tutorials/tune-xgboost.rst b/doc/source/tune/_tutorials/tune-xgboost.rst
index 6444eadf1..f22641c27 100644
--- a/doc/source/tune/_tutorials/tune-xgboost.rst
+++ b/doc/source/tune/_tutorials/tune-xgboost.rst
@@ -331,7 +331,7 @@ hyperparameter configurations from this search space.
 The output of our training run coud look like this:

-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 10

     +---------------------------------+------------+-------+-------------+-------------+--------------------+-------------+----------+--------+------------------+
@@ -447,7 +447,7 @@ available in ``env.evaluation_result_list`` below.
 The output of our run could look like this:

-.. code-block::
+.. code-block:: bash
     :emphasize-lines: 13

     +---------------------------------+------------+-------+-------------+-------------+--------------------+-------------+----------+--------+------------------+
diff --git a/python/ray/tune/integration/torch.py b/python/ray/tune/integration/torch.py
index d5257931b..64a0bb88f 100644
--- a/python/ray/tune/integration/torch.py
+++ b/python/ray/tune/integration/torch.py
@@ -161,7 +161,7 @@ def DistributedTrainableCreator(func,
     Example:

-    .. code-block::
+    .. code-block:: python

         trainable_cls = DistributedTrainableCreator(
             train_func, num_workers=2)
@@ -211,7 +211,7 @@ def distributed_checkpoint_dir(step, disable=False):
         again when invoking the training_function.

     Example:
-    .. code-block::
+    .. code-block:: python

         def train_func(config, checkpoint_dir):
             if checkpoint_dir:
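For reference, the ``ActorPool`` snippet that the first hunk re-tags as ``python`` is used roughly as follows. This is a minimal sketch based on the public ``ray.util.ActorPool`` API; the ``Worker`` actor and the input values are invented for illustration and are not part of the patch.

.. code-block:: python

    import ray
    from ray.util import ActorPool

    @ray.remote
    class Worker:
        def double(self, v):
            return 2 * v

    ray.init()

    # The pool schedules each submitted task on the next free actor,
    # much like multiprocessing.Pool spreads work across processes.
    pool = ActorPool([Worker.remote() for _ in range(4)])
    results = list(pool.map(lambda actor, v: actor.double.remote(v), [1, 2, 3, 4]))
    print(results)  # [2, 4, 6, 8]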