[Serve] Rename RayServe -> "Ray Serve" in Documentation (#8504)

Bill Chambers, 2020-05-19 19:13:54 -07:00 (committed by GitHub)
parent 85cb721f19
commit f8f7efc24f
6 changed files with 40 additions and 40 deletions


@@ -280,12 +280,12 @@ Getting Involved
 .. toctree::
    :maxdepth: -1
-   :caption: RayServe
+   :caption: Ray Serve
 
-   rayserve/overview.rst
-   rayserve/tutorials/tensorflow-tutorial.rst
-   rayserve/tutorials/pytorch-tutorial.rst
-   rayserve/tutorials/sklearn-tutorial.rst
+   serve/overview.rst
+   serve/tutorials/tensorflow-tutorial.rst
+   serve/tutorials/pytorch-tutorial.rst
+   serve/tutorials/sklearn-tutorial.rst
 
 .. toctree::
    :maxdepth: -1


[Image diff: "Before" and "After" previews of the Serve logo, 9.5 KiB each; the identical sizes suggest the image is unchanged and simply moved as part of the rayserve/ -> serve/ rename.]


@@ -1,7 +1,7 @@
 .. _rayserve:
 
-RayServe: Scalable and Programmable Serving
-===========================================
+Ray Serve: Scalable and Programmable Serving
+============================================
 
 .. image:: logo.svg
    :align: center
@@ -13,21 +13,21 @@ RayServe: Scalable and Programmable Serving
 Overview
 --------
 
-RayServe is a scalable model-serving library built on Ray.
+Ray Serve is a scalable model-serving library built on Ray.
 
-For users RayServe is:
+For users, Ray Serve is:
 
 - **Framework Agnostic**: Use the same toolkit to serve everything from deep learning models
   built with frameworks like PyTorch or TensorFlow to scikit-learn models or arbitrary business logic.
 - **Python First**: Configure your model serving with pure Python code - no more YAMLs or
   JSON configs.
 
-RayServe enables:
+Ray Serve enables:
 
 - **A/B test models** with zero downtime by decoupling routing logic from response handling logic.
 - **Batching** built-in to help you meet your performance objectives.
 
-Since Ray is built on Ray, RayServe also allows you to **scale to many machines**
+Since Ray Serve is built on Ray, it also allows you to **scale to many machines**
 and allows you to leverage all of the other Ray frameworks so you can deploy and scale on any cloud.
 
 .. note::
@@ -37,7 +37,7 @@ and allows you to leverage all of the other Ray frameworks so you can deploy and
 Installation
 ~~~~~~~~~~~~
 
-RayServe supports Python versions 3.5 and higher. To install RayServe:
+Ray Serve supports Python versions 3.5 and higher. To install Ray Serve:
 
 .. code-block:: bash
@@ -45,8 +45,8 @@ RayServe supports Python versions 3.5 and higher. To install RayServe:
 
-RayServe in 90 Seconds
-~~~~~~~~~~~~~~~~~~~~~~
+Ray Serve in 90 Seconds
+~~~~~~~~~~~~~~~~~~~~~~~
 
 Serve a stateless function:
@@ -56,10 +56,10 @@ Serve a stateful class:
 
 .. literalinclude:: ../../../python/ray/serve/examples/doc/quickstart_class.py
 
-See :ref:`serve-key-concepts` for more information about working with RayServe.
+See :ref:`serve-key-concepts` for more information about working with Ray Serve.
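For orientation, here is a minimal sketch of what serving a stateless function
and a stateful class looked like around this release. The call signatures
(``serve.init``, ``serve.create_backend``, ``serve.create_endpoint``,
``serve.link``) changed across early Serve versions, so treat them as
assumptions, not the contents of the literalincluded quickstart files.

.. code-block:: python

    import ray
    from ray import serve

    ray.init()    # start (or connect to) a Ray cluster
    serve.init()  # start the Serve instance on top of it

    # Stateless: a plain function that takes a Flask request.
    def echo(flask_request):
        return "hello " + flask_request.args.get("name", "serve")

    # Stateful: a class whose state (e.g. a loaded model) persists
    # across requests; it is deployed the same way as a function.
    class Counter:
        def __init__(self):
            self.count = 0

        def __call__(self, flask_request):
            self.count += 1
            return {"count": self.count}

    serve.create_backend(echo, "echo:v0")
    serve.create_endpoint("echo_endpoint", "/echo")
    serve.link("echo_endpoint", "echo:v0")  # route traffic to the backend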
-Why RayServe?
-~~~~~~~~~~~~~
+Why Ray Serve?
+~~~~~~~~~~~~~~
 
 There are generally two ways of serving machine learning applications, both with serious limitations:
 you can build using a **traditional webserver** - your own Flask app - or you can use a cloud-hosted solution.
@@ -68,24 +68,24 @@ The first approach is easy to get started with, but it's hard to scale each comp
 requires vendor lock-in (SageMaker), framework-specific tooling (TFServing), and a general
 lack of flexibility.
 
-RayServe solves these problems by giving a user the ability to leverage the simplicity
+Ray Serve solves these problems by giving a user the ability to leverage the simplicity
 of deploying a simple webserver while handling the complex routing, scaling, and testing logic
 necessary for production deployments.
 
-For more on the motivation behind RayServe, check out these `meetup slides <https://tinyurl.com/serve-meetup>`_.
+For more on the motivation behind Ray Serve, check out these `meetup slides <https://tinyurl.com/serve-meetup>`_.
 When should I use Ray Serve?
 ++++++++++++++++++++++++++++
 
-RayServe should be used when you need to deploy at least one model, preferrably many models.
-RayServe **won't work well** when you need to run batch prediction over a dataset. Given this use case, we recommend looking into `multiprocessing with Ray </multiprocessing.html>`_.
+Ray Serve should be used when you need to deploy at least one model, preferably many models.
+Ray Serve **won't work well** when you need to run batch prediction over a dataset. For that use case, we recommend looking into `multiprocessing with Ray </multiprocessing.html>`_.
 
 .. _serve-key-concepts:
 
 Key Concepts
 ------------
 
-RayServe focuses on **simplicity** and only has two core concepts: endpoints and backends.
+Ray Serve focuses on **simplicity** and only has two core concepts: endpoints and backends.
 
 To follow along, you'll need to make the necessary imports.
@@ -128,9 +128,9 @@ Once you define the function (or class) that will handle a request.
 You'd use a function when your response is stateless and a class when you
 might need to maintain some state (like a model).
 
-For both functions and classes (that take as input Flask Requests), you'll need to
-define them as backends to RayServe.
+For both functions and classes (which take Flask requests as input), you'll need to
+define them as backends to Ray Serve.
 
-It's important to note that RayServe places these backends in individual workers, which are replicas of the model.
+It's important to note that Ray Serve places these backends in individual workers, which are replicas of the model.
 
 .. code-block:: python
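The snippet behind that directive is elided by the diff; purely as an
illustration, a function backend and a class backend might be declared like
this, assuming the era's ``serve.create_backend(callable, tag)`` signature
(the argument order flipped in later releases):

.. code-block:: python

    from ray import serve

    # Stateless handler: recomputed from scratch on every request.
    def stateless_handler(flask_request):
        return "got: " + flask_request.args.get("data", "")

    # Stateful handler: __init__ runs once per worker replica, so the
    # (here trivial) "model" is loaded once and reused across requests.
    class StatefulHandler:
        def __init__(self):
            self.prefix = "model says: "  # stand-in for loading a real model

        def __call__(self, flask_request):
            return self.prefix + flask_request.args.get("data", "")

    serve.create_backend(stateless_handler, "fn_backend")
    serve.create_backend(StatefulHandler, "cls_backend")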
@@ -229,7 +229,7 @@ It's trivial to also split traffic, simply specify the endpoint and the backends
 Batching
 ++++++++
 
-You can also have RayServe batch requests for performance. You'll configure this in the backend config.
+You can also have Ray Serve batch requests for performance. You'll configure this in the backend config.
 
 .. code-block:: python
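A hedged sketch of what that backend config looked like: early Serve releases
exposed an ``accept_batch`` decorator plus a ``max_batch_size`` config key,
but both names are assumptions here, not something this diff confirms.

.. code-block:: python

    from ray import serve

    @serve.accept_batch  # the handler now receives a *list* of requests
    def batched_handler(flask_requests):
        # Process the whole batch in one shot; return one result per request.
        return [{"echo": r.args.get("data", "")} for r in flask_requests]

    serve.create_backend(batched_handler, "batched:v0",
                         config={"max_batch_size": 8})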
@@ -298,7 +298,7 @@ Other Resources
 Frameworks
 ~~~~~~~~~~
 
-RayServe makes it easy to deploy models from all popular frameworks.
+Ray Serve makes it easy to deploy models from all popular frameworks.
 Learn more about how to deploy your model in the following tutorials:
 
 - :ref:`Tensorflow & Keras <serve-tensorflow-tutorial>`


@@ -9,16 +9,16 @@ In particular, we show:
 - How to load the model from PyTorch's pre-trained model zoo.
 - How to parse the JSON request, transform the payload, and evaluate it with the model.
 
-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.
 
-This tutorial requires Pytorch and Torchvision installed in your system. RayServe
+This tutorial requires PyTorch and torchvision to be installed on your system. Ray Serve
 is :ref:`framework agnostic <serve_frameworks>` and works with any version of PyTorch.
 
 .. code-block:: bash
 
    pip install torch torchvision
 
-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.
 
 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_pytorch.py
    :start-after: __doc_import_begin__
@@ -32,7 +32,7 @@ The ``__call__`` method will be invoked per request.
    :start-after: __doc_define_servable_begin__
    :end-before: __doc_define_servable_end__
 
-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the digit classifier task, a
 :ref:`backend <serve-backend>` corresponding to the physical implementation, and connect them together.
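As a rough sketch of that wiring, assuming the era's ``create_endpoint`` /
``create_backend`` / ``link`` calls; the real servable lives in the elided
literalinclude, so the ``ImageModel`` stub and every name below are hypothetical:

.. code-block:: python

    from ray import serve

    class ImageModel:  # stand-in for the tutorial's actual servable class
        def __call__(self, flask_request):
            return {"class_index": 0}  # a real model would run inference here

    serve.create_endpoint("digit_classifier", "/classify")  # route for the task
    serve.create_backend(ImageModel, "pytorch_model:v0")    # physical implementation
    serve.link("digit_classifier", "pytorch_model:v0")      # connect them together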


@@ -6,18 +6,18 @@ Scikit-Learn Tutorial
 In this guide, we will train and deploy a simple Scikit-Learn classifier.
 In particular, we show:
 
-- How to load the model from file system in your RayServe definition
+- How to load the model from the file system in your Ray Serve definition
 - How to parse the JSON request and evaluate it with the sklearn model
 
-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.
 
-RayServe supports :ref:`arbitrary frameworks <serve_frameworks>`. You can use any version of sklearn.
+Ray Serve supports :ref:`arbitrary frameworks <serve_frameworks>`. You can use any version of sklearn.
 
 .. code-block:: bash
 
    pip install scikit-learn
 
-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.
 
 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_sklearn.py
    :start-after: __doc_import_begin__
@@ -36,7 +36,7 @@ The ``__call__`` method will be invoked per request.
    :start-after: __doc_define_servable_begin__
    :end-before: __doc_define_servable_end__
 
-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the classifier task, a
 :ref:`backend <serve-backend>` corresponding to the physical implementation, and connect them together.
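The general shape of such a servable (the TensorFlow tutorial below follows
the same pattern): load the trained model once in ``__init__``, then parse the
JSON body and call ``predict`` per request. The pickle path, the ``"vector"``
field, and the exact Serve signature are all assumptions for illustration.

.. code-block:: python

    import pickle

    from ray import serve

    class SKLearnServable:
        def __init__(self):
            # Runs once per worker replica: load the trained classifier.
            with open("/tmp/sklearn_model.pkl", "rb") as f:
                self.model = pickle.load(f)

        def __call__(self, flask_request):
            # Expects a JSON body like {"vector": [5.1, 3.5, 1.4, 0.2]}.
            vector = flask_request.json["vector"]
            prediction = self.model.predict([vector])[0]
            return {"prediction": int(prediction)}

    serve.create_backend(SKLearnServable, "sklearn_model:v0")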


@@ -6,12 +6,12 @@ Keras and Tensorflow Tutorial
 In this guide, we will train and deploy a simple TensorFlow neural net.
 In particular, we show:
 
-- How to load the model from file system in your RayServe definition
+- How to load the model from the file system in your Ray Serve definition
 - How to parse the JSON request and evaluate it in TensorFlow
 
-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.
 
-RayServe makes it easy to deploy models from :ref:`all popular frameworks <serve_frameworks>`.
+Ray Serve makes it easy to deploy models from :ref:`all popular frameworks <serve_frameworks>`.
 However, for this tutorial, we use TensorFlow 2 and Keras. Please make sure you have
 TensorFlow 2 installed.
 
@@ -20,7 +20,7 @@ Tensorflow 2 installed.
 
    pip install "tensorflow>=2.0"
 
-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.
 
 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_tensorflow.py
    :start-after: __doc_import_begin__
@@ -39,7 +39,7 @@ The ``__call__`` method will be invoked per request.
    :start-after: __doc_define_servable_begin__
    :end-before: __doc_define_servable_end__
 
-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the digit classifier task, a
 :ref:`backend <serve-backend>` corresponding to the physical implementation, and connect them together.