[docs] Deploying Ray (#16538)

Co-authored-by: Alex Wu <alex@anyscale.com>
Alex Wu 2021-06-19 10:07:15 -07:00 committed by GitHub
parent 16d762aed0
commit 197dab0e2f
6 changed files with 397 additions and 42 deletions


@ -9,9 +9,14 @@ See the :ref:`Cluster Configuration <cluster-config>` docs on how to customize t
Launching a cluster (``ray up``)
--------------------------------
This will start up the machines in the cloud, install your dependencies and run
any setup commands that you have, configure the Ray cluster automatically, and
prepare you to scale your distributed system. See :ref:`the documentation
<ray-up-doc>` for ``ray up``.
.. tip:: The worker nodes will start only after the head node has finished
starting. To monitor the progress of the cluster setup, you can run
`ray monitor <cluster yaml>`.
.. code-block:: shell
@ -29,24 +34,42 @@ Updating an existing cluster (``ray up``)
If you want to update your cluster configuration (add more files, change dependencies), run ``ray up`` again on the existing cluster.
This command checks if the local configuration differs from the applied
configuration of the cluster. This includes any changes to synced files
specified in the ``file_mounts`` section of the config. If so, the new files
and config will be uploaded to the cluster. Following that, Ray
services/processes will be restarted.
.. tip:: Don't do this for the cloud provider specifications (e.g., change from
AWS to GCP on a running cluster) or change the cluster name (as this
will just start a new cluster and orphan the original one).
You can also run ``ray up`` to restart a cluster if it seems to be in a bad
state (this will restart all Ray services even if there are no config changes).
Running ``ray up`` on an existing cluster will do all the following:
* If the head node matches the cluster specification, the filemounts will be
reapplied and the ``setup_commands`` and ``ray start`` commands will be run.
There may be some caching behavior here to skip setup/file mounts.
* If the head node is out of date from the specified YAML (e.g.,
``head_node_type`` has changed on the YAML), then the out of date node will
be terminated and a new node will be provisioned to replace it. Setup/file
mounts/``ray start`` will be applied.
* After the head node reaches a consistent state (after ``ray start`` commands
are finished), the same above procedure will be applied to all the worker
nodes. The ``ray start`` commands tend to run a ``ray stop`` + ``ray start``,
so this will kill currently working jobs.
If you don't want the update to restart services (e.g., because the changes
don't require a restart), pass ``--no-restart`` to the update call.
If you want to force re-generation of the config to pick up possible changes in
the cloud environment, pass ``--no-config-cache`` to the update call.
If you want to skip the setup commands and only run ``ray stop``/``ray start``
on all nodes, pass ``--restart-only`` to the update call.
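As a sketch, the common variants of the update call look like this (assuming
your cluster config file is named ``cluster.yaml``):

.. code-block:: shell

    # Re-apply file mounts and setup commands, then restart Ray services.
    $ ray up cluster.yaml

    # Apply changes without restarting Ray services.
    $ ray up cluster.yaml --no-restart

    # Skip setup commands; only run `ray stop`/`ray start` on all nodes.
    $ ray up cluster.yaml --restart-only

    # Force re-generation of the config to pick up cloud environment changes.
    $ ray up cluster.yaml --no-config-cache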
See :ref:`the documentation <ray-up-doc>` for ``ray up``.
@ -83,18 +106,25 @@ You can use ``ray exec`` to conveniently run commands on clusters. Note that pyt
# Run a command in a screen (experimental)
$ ray exec cluster.yaml 'echo "hello world"' --screen
If you want to run applications on the cluster that are accessible from a web
browser (e.g., Jupyter notebook), you can use the ``--port-forward`` option.
The local port opened is the same as the remote port.
.. code-block:: shell
$ ray exec cluster.yaml --port-forward=8899 'source ~/anaconda3/bin/activate tensorflow_p36 && jupyter notebook --port=8899'
.. note:: For Kubernetes clusters, the ``port-forward`` option cannot be used
while executing a command. To port forward and run a command you need
to call ``ray exec`` twice separately.
Running Ray scripts on the cluster (``ray submit``)
---------------------------------------------------
You can also use ``ray submit`` to execute Python scripts on clusters. This
will ``rsync`` the designated file onto the cluster head node and execute it
with the given arguments. See :ref:`the documentation <ray-submit-doc>` for
``ray submit``.
.. code-block:: shell
@ -110,7 +140,9 @@ You can also use ``ray submit`` to execute Python scripts on clusters. This will
Attaching to a running cluster (``ray attach``)
-----------------------------------------------
You can use ``ray attach`` to attach to an interactive screen session on the
cluster. See :ref:`the documentation <ray-attach-doc>` for ``ray attach`` or
run ``ray attach --help``.
.. code-block:: shell
@ -127,7 +159,8 @@ You can use ``ray attach`` to attach to an interactive screen session on the clu
Synchronizing files from the cluster (``ray rsync-up/down``)
------------------------------------------------------------
To download or upload files to the cluster head node, use ``ray rsync_down`` or
``ray rsync_up``:
.. code-block:: shell
@ -136,30 +169,44 @@ To download or upload files to the cluster head node, use ``ray rsync_down`` or
.. _monitor-cluster:
Monitoring cluster status (``ray dashboard/status``)
-----------------------------------------------------
Ray also comes with an online dashboard. The dashboard is accessible via HTTP
on the head node (by default it listens on ``localhost:8265``). You can also
use the built-in ``ray dashboard`` command to do this automatically.
.. code-block:: shell
$ ray dashboard cluster.yaml
You can monitor cluster usage and auto-scaling status by tailing the autoscaling
logs in ``/tmp/ray/session_*/logs/monitor*``.
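For example, to follow those logs on the head node (a sketch;
``session_latest`` is a symlink to the current session directory, and the
exact file names may vary):

.. code-block:: shell

    # On the head node: stream the autoscaler/monitor logs.
    $ tail -n 100 -f /tmp/ray/session_latest/logs/monitor*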
You can monitor cluster usage and auto-scaling status by running (on the head node):
.. code-block:: shell
$ ray status
To see live updates to the status:
.. code-block:: shell
$ watch -n 1 ray status
The Ray autoscaler also reports per-node status in the form of instance tags.
In your cloud provider console, you can click on a Node, go to the "Tags" pane,
and add the ``ray-node-status`` tag as a column. This lets you see per-node
statuses at a glance:
.. image:: /images/autoscaler-status.png
Common Workflow: Syncing git branches
-------------------------------------
A common use case is syncing a particular local git branch to all workers of
the cluster. However, if you just put a `git checkout <branch>` in the setup
commands, the autoscaler won't know when to rerun the command to pull in
updates. There is a nice workaround for this by including the git SHA in the
input (the hash of the file will change if the branch is updated):
.. code-block:: yaml
@ -171,7 +218,11 @@ A common use case is syncing a particular local git branch to all workers of the
- test -e <REPO_NAME> || git clone https://github.com/<REPO_ORG>/<REPO_NAME>.git
- cd <REPO_NAME> && git fetch && git checkout `cat /tmp/current_branch_sha`
This tells ``ray up`` to sync the current git branch SHA from your personal
computer to a temporary file on the cluster (assuming you've pushed the branch
head already). Then, the setup commands read that file to figure out which SHA
they should check out on the nodes. Note that each command runs in its own
session. The final workflow to update the cluster then becomes just this:
1. Make local changes to a git branch
2. Commit the changes with ``git commit`` and ``git push``


@ -0,0 +1,295 @@
.. _deployment-guide:
Ray Deployment Guide
====================
This page provides an overview of how to deploy a multi-node Ray cluster, including how to:
* Launch the cluster.
* Set up the autoscaler.
* Monitor a multi-node cluster.
* Follow best practices for setting up a Ray cluster.
Launching a Ray cluster
-----------------------
There are two recommended ways of launching a Ray cluster:
1. :ref:`The cluster launcher <cluster-cloud>`
2. :ref:`The Kubernetes operator <Ray-operator>`
Cluster Launcher
^^^^^^^^^^^^^^^^
The goal of :ref:`the cluster launcher <cluster-cloud>` is to make it easy to deploy a Ray cluster on
any cloud. It will:
* Provision a new instance/machine using the cloud provider's SDK.
* Execute shell commands to set up Ray with the provided options.
* (Optionally) run any custom, user-defined setup commands.
* Initialize the Ray cluster.
* Deploy an autoscaler process.
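A minimal sketch of the corresponding CLI workflow, assuming a cluster config
file named ``cluster.yaml``:

.. code-block:: shell

    # Provision the head node (and minimum workers), run setup, and start Ray.
    $ ray up cluster.yaml

    # Open an interactive session on the head node.
    $ ray attach cluster.yaml

    # Tear the cluster down when finished.
    $ ray down cluster.yaml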
Kubernetes Operator
^^^^^^^^^^^^^^^^^^^
The :ref:`Kubernetes operator <Ray-operator>` serves a very similar purpose to
the cluster launcher, but follows the `Kubernetes operator pattern
<https://kubernetes.io/docs/concepts/extend-kubernetes/operator>`__. It's
defined as :ref:`a Helm chart <Ray-helm>` and will:
* Create a controller (the operator).
* Provision a new pod (head node).
* Execute shell commands to set up Ray with the provided options.
* (Optionally) run any custom, user-defined setup commands.
* Initialize the Ray cluster.
The operator will then serve as the autoscaler.
Autoscaling with Ray
--------------------
Ray is designed to support highly elastic workloads which are most efficient on
an autoscaling cluster. At a high level, the autoscaler attempts to
launch/terminate nodes in order to ensure that workloads have sufficient
resources to run, while minimizing the idle resources.
It does this by taking into consideration:
* User-specified hard limits (min/max workers).
* User-specified node types (nodes in a Ray cluster do *not* have to be
homogeneous).
* Information from the Ray core's scheduling layer about the current resource
usage/demands of the cluster.
* Programmatic autoscaling hints.
Take a look at :ref:`the cluster reference <cluster-config>` to learn more
about configuring the autoscaler.
How does it work?
^^^^^^^^^^^^^^^^^
The Ray Cluster Launcher will automatically enable a load-based autoscaler. The
autoscaler resource demand scheduler will look at the pending tasks, actors,
and placement groups resource demands from the cluster, and try to add the
minimum list of nodes that can fulfill these demands. When worker nodes are
idle for more than :ref:`idle_timeout_minutes
<cluster-configuration-idle-timeout-minutes>`, they will be removed (the head
node is never removed unless the cluster is torn down).
The autoscaler uses a simple binpacking algorithm to binpack the user demands
into the available cluster resources. The remaining unfulfilled demands are
placed on the smallest list of nodes that satisfies the demand while maximizing
utilization (starting from the smallest node).
**Here is "A Glimpse into the Ray Autoscaler" and how to debug/monitor your cluster:**
2021-19-01 by Ameer Haj-Ali, Anyscale Inc.
.. youtube:: BJ06eJasdu4
Deploying an application
------------------------
The recommended way of connecting to a Ray cluster is to use the
``ray.client.connect()`` API and connect via the Ray Client.
.. note::
Using ``ray.client.connect()`` is generally a best practice because it allows
you to test your code locally, and deploy to a cluster with **no code
changes**.
To connect via Ray Client, set the ``RAY_ADDRESS`` environment variable to the
address of the Ray client server.
:ref:`Learn more about setting up the Ray client server here <Ray-client>`.
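A sketch of what this looks like from a developer machine. Here
``my_script.py`` is a hypothetical driver script, the Ray client server is
assumed to listen on its default port ``10001``, and the exact address format
(with or without a ``ray://`` prefix) depends on your Ray version:

.. code-block:: shell

    # Point the driver at the cluster's Ray client server instead of a local Ray.
    $ export RAY_ADDRESS="<head-node-ip>:10001"

    # Run the same script you tested locally; tasks and actors now run on the cluster.
    $ python my_script.py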
.. note::
When deploying an application, the job will be killed if the driver
disconnects.
:ref:`A detached actor <actor-lifetimes>` can be used to avoid having a long-running driver.
Monitoring and observability
----------------------------
Ray comes with 3 main observability features:
1. :ref:`The dashboard <Ray-dashboard>`
2. :ref:`Ray status <monitor-cluster>`
3. :ref:`Prometheus metrics <multi-node-metrics>`
Monitoring the cluster via the dashboard
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:ref:`The dashboard provides detailed information about the state of the cluster <Ray-dashboard>`,
including the running jobs, actors, workers, nodes, etc.
By default, the cluster launcher and operator will launch the dashboard, but
not publicly expose it.
If you launch your application via the cluster launcher, you can securely
port forward local traffic to the dashboard via the ``ray dashboard`` command
(which establishes an SSH tunnel). The dashboard will then be visible at
``http://localhost:8265``.
With the Kubernetes operator, you will need to expose port 8265 on the head
node, or use `kubectl to port forward
<https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/>`__.
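For example, a sketch of the ``kubectl`` approach (the namespace and pod name
are placeholders for your deployment):

.. code-block:: shell

    # Forward local port 8265 to the dashboard port on the head node pod.
    $ kubectl -n <namespace> port-forward <head-node-pod> 8265:8265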
Observing the autoscaler
^^^^^^^^^^^^^^^^^^^^^^^^
The autoscaler makes decisions based on scheduling information and programmatic
information from the cluster. This information, along with the status of
starting nodes, can be accessed via the ``ray status`` command.
To dump the current state of a cluster launched via the cluster launcher, you
can run ``ray exec cluster.yaml 'ray status'``.
For a more "live" monitoring experience, it is recommended that you run ``ray
status`` in a watch loop: ``ray exec cluster.yaml 'watch -n 1 ray status'``.
With the Kubernetes operator, you should replace ``ray exec cluster.yaml`` with
``kubectl exec <head node pod>``.
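Putting these commands together:

.. code-block:: shell

    # One-off status dump (cluster launcher).
    $ ray exec cluster.yaml 'ray status'

    # Live view, refreshed every second.
    $ ray exec cluster.yaml 'watch -n 1 ray status'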
Prometheus metrics
^^^^^^^^^^^^^^^^^^
Ray is capable of producing Prometheus metrics. When enabled, Ray produces some
metrics about the Ray core, and some internal metrics by default. It also
supports custom, user-defined metrics.
These metrics can be consumed by any metrics infrastructure which can ingest
metrics from the Prometheus server on the head node of the cluster.
:ref:`Learn more about setting up Prometheus here <multi-node-metrics>`.
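As a sketch, metrics export can be enabled when starting the head node; the
``--metrics-export-port`` flag and port ``8080`` are assumptions here, so check
``ray start --help`` for your Ray version:

.. code-block:: shell

    # Expose Ray's Prometheus metrics endpoint on the head node.
    $ ray start --head --metrics-export-port=8080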
Best practices for deploying large clusters
-------------------------------------------
This section aims to document best practices for deploying Ray clusters at
large scale.
Networking configuration
^^^^^^^^^^^^^^^^^^^^^^^^
End users should only need to directly interact with the head node of the
cluster. In particular, there are 2 services which should be exposed to users:
1. The dashboard
2. The Ray client server
.. note::
While users only need 2 ports to connect to a cluster, the nodes within a
cluster require a much wider range of ports to communicate.
See :ref:`Ray port configuration <Ray-ports>` for a comprehensive list.
Applications (such as :ref:`Ray Serve <Rayserve>`) may also require
additional ports to work properly.
System configuration
^^^^^^^^^^^^^^^^^^^^
There are a few system level configurations that should be set when using Ray
at a large scale.
* Make sure ``ulimit -n`` is set to at least 65535 (see the sketch after this
list). Ray opens many direct connections between worker processes to avoid
bottlenecks, so it can quickly use a large number of file descriptors.
* Make sure ``/dev/shm`` is sufficiently large. Most ML/RL applications rely
heavily on the plasma store. By default, Ray will try to use ``/dev/shm`` for
the object store, but if it is not large enough (i.e. ``--object-store-memory``
> size of ``/dev/shm``), Ray will write the plasma store to disk instead, which
may cause significant performance problems.
* Use NVMe SSDs (or other high performance storage) if possible. If
:ref:`object spilling <object-spilling>` is enabled, Ray will spill objects to
disk if necessary. This is most commonly needed for data processing
workloads.
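A sketch for checking and adjusting these limits on a node (how you persist
them depends on your OS image and provisioning setup):

.. code-block:: shell

    # Raise the open-file limit for the current session.
    $ ulimit -n 65535

    # Check how much shared memory is available for the object store.
    $ df -h /dev/shm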
Configuring the head node
^^^^^^^^^^^^^^^^^^^^^^^^^
In addition to the above changes, when deploying a large cluster, Ray's
architecture means that the head node will have extra stress due to GCS.
* Make sure the head node has sufficient bandwidth. The most heavily stressed
resource on the head node is outbound bandwidth. For large clusters (see the
scalability envelope), we recommend using machines with networking
characteristics at least as good as an r5dn.16xlarge on AWS EC2.
* Set ``resources: {"CPU": 0}`` on the head node. Due to the heavy networking
load (and the GCS and Redis processes), we recommend setting the number of
CPUs to 0 on the head node to avoid scheduling additional tasks on it (as
sketched below).
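If you start the head node manually rather than through the cluster launcher,
a sketch of the equivalent is:

.. code-block:: shell

    # Start the head node with no schedulable CPUs so tasks land on workers.
    $ ray start --head --num-cpus=0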
Configuring the autoscaler
^^^^^^^^^^^^^^^^^^^^^^^^^^
For large, long-running clusters, there are a few parameters that can be tuned.
* Ensure your quotas for node types are set correctly.
* For long-running clusters, set the ``AUTOSCALER_MAX_RETRIES`` environment
variable to a large number (or ``inf``) to avoid unexpected autoscaler
crashes, as sketched after this list. (Note: you may want a separate mechanism
to detect if the autoscaler errors too often.)
* For large clusters, consider tuning ``upscaling_speed`` for faster
autoscaling.
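A sketch of the environment tweak, assuming it is set where the
autoscaler/monitor process runs (the head node) before Ray starts:

.. code-block:: shell

    # Tolerate many transient provider errors before the autoscaler gives up.
    $ export AUTOSCALER_MAX_RETRIES=1000000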
Picking nodes
^^^^^^^^^^^^^
Here are some tips for how to set your ``available_node_types`` for a cluster,
using AWS instance types as a concrete example.
General recommendations with AWS instance types:
**When to use GPUs**
* If you're using some RL/ML framework.
* You're doing something with TensorFlow/PyTorch/JAX (some framework that can
leverage GPUs well).
**What type of GPU?**
* The latest-generation GPU is almost always the best bang for your buck (p3 >
p2, g4 > g3); for most well-designed applications the performance outweighs
the price (the instance price may be higher, but you'll use the instance for
less time).
* You may want to consider using older instances if you're doing dev work and
won't actually fully utilize the GPUs, though.
* If you're doing training (ML or RL), you should use a P instance. If you're
doing inference, you should use a G instance. The difference is the
processing:VRAM ratio (training requires more memory).
**What type of CPU?**
* Again, stick to the latest generation; they're typically cheaper and faster.
* When in doubt, use M instances; they typically have the highest
availability.
* If you know your application is memory-intensive (memory utilization is
full, but CPU is not), go with an R instance.
* If you know your application is CPU-intensive, go with a C instance.
* If you have a big cluster, make the head node an instance with an "n"
(e.g., r5dn or c5n).
**How many CPUs/GPUs?**
* Focus on your CPU:GPU ratio first and look at the utilization (the Ray
dashboard should help with this). If your CPU utilization is low, add GPUs,
or vice versa.
* The exact ratio will be very dependent on your workload.
* Once you find a good ratio, you should be able to scale up and keep the
same ratio.
* You can't scale infinitely. Eventually, as you add more machines, your
performance improvements will become sub-linear/not worth it. There may not
be a good one-size-fits-all strategy at this point.
.. note::
If you're using RLlib, check out :ref:`the RLlib scaling guide
<rllib-scaling-guide>` for RLlib specific recommendations.


@ -6,30 +6,34 @@ Ray Cluster Overview
What is a Ray cluster?
------------------------
One of Ray's strengths is the ability to leverage multiple machines in the same
program. Ray can, of course, be run on a single machine (and is done so often),
but the real power is using Ray on a cluster of machines.
A Ray cluster consists of a **head node** and a set of **worker nodes**. The
head node needs to be started first, and the worker nodes are given the address
of the head node to form the cluster:
.. image:: ray-cluster.jpg
:align: center
:width: 600px
You can use the Ray Cluster Launcher to provision machines and launch a
multi-node Ray cluster. You can use the cluster launcher :ref:`on AWS, GCP,
Azure, Kubernetes, Aliyun, on-premise, and Staroid or even on your custom node provider
<cluster-cloud>`. Ray clusters can also make use of the Ray Autoscaler, which
allows Ray to interact with a cloud provider to request or release instances
following :ref:`a specification <cluster-config>` and according to application
workload.
How does it work?
-----------------
The Ray Cluster Launcher will automatically enable a load-based autoscaler. The autoscaler resource demand scheduler will look at the pending tasks, actors, and placement groups resource demands from the cluster, and try to add the minimum list of nodes that can fulfill these demands. When worker nodes are idle for more than :ref:`idle_timeout_minutes <cluster-configuration-idle-timeout-minutes>`, they will be removed (the head node is never removed unless the cluster is torn down).
The autoscaler uses a simple binpacking algorithm to binpack the user demands into the available cluster resources. The remaining unfulfilled demands are placed on the smallest list of nodes that satisfies the demand while maximizing utilization (starting from the smallest node).
**Here is "A Glimpse into the Ray Autoscaler" and how to debug/monitor your cluster:**
2021-19-01 by Ameer Haj-Ali, Anyscale, Inc.
.. youtube:: BJ06eJasdu4
Next steps
----------
To get started with Ray Clusters, we recommend that you check out the :ref:`Ray
Cluster quick start <ref-cluster-quick-start>`. For more advanced examples of
use, you can also refer to the :ref:`full specification for Ray Cluster
configuration <cluster-config>`.
To learn about best practices for deploying a Ray cluster, :ref:`check out the
deployment guide <deployment-guide>`.


@ -248,10 +248,11 @@ Papers
.. toctree::
:hidden:
:maxdepth: -1
:caption: Multi-node Ray
cluster/index.rst
cluster/quickstart.rst
cluster/guide.rst
cluster/reference.rst
cluster/cloud.rst
cluster/ray-client.rst


@ -60,6 +60,8 @@ Next, let's start Prometheus.
Now, you can access Ray metrics from the default Prometheus url, `http://localhost:9090`.
.. _multi-node-metrics:
Getting Started (Multi-nodes)
-----------------------------
Let's now walk through how to import metrics from a Ray cluster.


@ -102,6 +102,8 @@ For synchronous algorithms like PPO and A2C, the driver and workers can make use
Scaling Guide
~~~~~~~~~~~~~
.. _rllib-scaling-guide:
Here are some rules of thumb for scaling training with RLlib.
1. If the environment is slow and cannot be replicated (e.g., since it requires interaction with physical systems), then you should use a sample-efficient off-policy algorithm such as :ref:`DQN <dqn>` or :ref:`SAC <sac>`. These algorithms default to ``num_workers: 0`` for single-process operation. Make sure to set ``num_gpus: 1`` if you want to use a GPU. Consider also batch RL training with the `offline data <rllib-offline.html>`__ API.