Mirror of https://github.com/vale981/ray (synced 2025-03-06 02:21:39 -05:00)

[Serve] Rename RayServe -> "Ray Serve" in Documentation (#8504)

Parent commit: 85cb721f19
This commit:  f8f7efc24f
6 changed files with 40 additions and 40 deletions
@@ -280,12 +280,12 @@ Getting Involved
 .. toctree::
    :maxdepth: -1
-   :caption: RayServe
+   :caption: Ray Serve

-   rayserve/overview.rst
-   rayserve/tutorials/tensorflow-tutorial.rst
-   rayserve/tutorials/pytorch-tutorial.rst
-   rayserve/tutorials/sklearn-tutorial.rst
+   serve/overview.rst
+   serve/tutorials/tensorflow-tutorial.rst
+   serve/tutorials/pytorch-tutorial.rst
+   serve/tutorials/sklearn-tutorial.rst

 .. toctree::
    :maxdepth: -1
(image file) Before: 9.5 KiB | After: 9.5 KiB
@@ -1,7 +1,7 @@
 .. _rayserve:

-RayServe: Scalable and Programmable Serving
-===========================================
+Ray Serve: Scalable and Programmable Serving
+============================================

 .. image:: logo.svg
    :align: center
@@ -13,21 +13,21 @@ RayServe: Scalable and Programmable Serving
 Overview
 --------

-RayServe is a scalable model-serving library built on Ray.
+Ray Serve is a scalable model-serving library built on Ray.

-For users RayServe is:
+For users Ray Serve is:

 - **Framework Agnostic**:Use the same toolkit to serve everything from deep learning models
   built with frameworks like PyTorch or TensorFlow to scikit-learn models or arbitrary business logic.
 - **Python First**: Configure your model serving with pure Python code - no more YAMLs or
   JSON configs.

-RayServe enables:
+Ray Serve enables:

 - **A/B test models** with zero downtime by decoupling routing logic from response handling logic.
 - **Batching** built-in to help you meet your performance objectives.

-Since Ray is built on Ray, RayServe also allows you to **scale to many machines**
+Since Ray is built on Ray, Ray Serve also allows you to **scale to many machines**
 and allows you to leverage all of the other Ray frameworks so you can deploy and scale on any cloud.

 .. note::
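The "A/B test models with zero downtime" bullet in the overview above rests on decoupling routing logic from handling logic. A minimal plain-Python sketch of that split is below; the `Router`, `backend_v1`, and `backend_v2` names are hypothetical illustrations, not Ray Serve's actual API.

```python
import random

# Toy illustration of A/B traffic splitting: the routing logic (which
# backend receives a request) is separate from the handling logic (what
# each backend does), so weights can change without touching handlers.

def backend_v1(request):
    return f"v1 handled {request}"

def backend_v2(request):
    return f"v2 handled {request}"

class Router:
    """Routes each request to a backend according to traffic weights."""

    def __init__(self, weights):
        # weights: {backend_callable: fraction of traffic}
        self.backends = list(weights.keys())
        self.weights = list(weights.values())

    def route(self, request):
        backend = random.choices(self.backends, weights=self.weights, k=1)[0]
        return backend(request)

# Send 90% of traffic to v1 and 10% to the candidate v2.
router = Router({backend_v1: 0.9, backend_v2: 0.1})
results = [router.route(i) for i in range(1000)]
v2_share = sum(r.startswith("v2") for r in results) / len(results)
print(f"v2 received about {v2_share:.0%} of traffic")
```

Shifting traffic is then just a change to the weights dictionary, which is the sense in which routing is decoupled from response handling.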
@@ -37,7 +37,7 @@ and allows you to leverage all of the other Ray frameworks so you can deploy and
 Installation
 ~~~~~~~~~~~~
-RayServe supports Python versions 3.5 and higher. To install RayServe:
+Ray Serve supports Python versions 3.5 and higher. To install Ray Serve:

 .. code-block:: bash
@@ -45,8 +45,8 @@ RayServe supports Python versions 3.5 and higher. To install RayServe:

-RayServe in 90 Seconds
-~~~~~~~~~~~~~~~~~~~~~~
+Ray Serve in 90 Seconds
+~~~~~~~~~~~~~~~~~~~~~~~

 Serve a stateless function:
@@ -56,10 +56,10 @@ Serve a stateful class:
 .. literalinclude:: ../../../python/ray/serve/examples/doc/quickstart_class.py

-See :ref:`serve-key-concepts` for more information about working with RayServe.
+See :ref:`serve-key-concepts` for more information about working with Ray Serve.

-Why RayServe?
-~~~~~~~~~~~~~
+Why Ray Serve?
+~~~~~~~~~~~~~~

 There are generally two ways of serving machine learning applications, both with serious limitations:
 you can build using a **traditional webserver** - your own Flask app or you can use a cloud hosted solution.
@@ -68,24 +68,24 @@ The first approach is easy to get started with, but it's hard to scale each comp
 requires vendor lock-in (SageMaker), framework specific tooling (TFServing), and a general
 lack of flexibility.

-RayServe solves these problems by giving a user the ability to leverage the simplicity
+Ray Serve solves these problems by giving a user the ability to leverage the simplicity
 of deployment of a simple webserver but handles the complex routing, scaling, and testing logic
 necessary for production deployments.

-For more on the motivation behind RayServe, check out these `meetup slides <https://tinyurl.com/serve-meetup>`_.
+For more on the motivation behind Ray Serve, check out these `meetup slides <https://tinyurl.com/serve-meetup>`_.

 When should I use Ray Serve?
 ++++++++++++++++++++++++++++

-RayServe should be used when you need to deploy at least one model, preferrably many models.
-RayServe **won't work well** when you need to run batch prediction over a dataset. Given this use case, we recommend looking into `multiprocessing with Ray </multiprocessing.html>`_.
+Ray Serve should be used when you need to deploy at least one model, preferrably many models.
+Ray Serve **won't work well** when you need to run batch prediction over a dataset. Given this use case, we recommend looking into `multiprocessing with Ray </multiprocessing.html>`_.

 .. _serve-key-concepts:

 Key Concepts
 ------------

-RayServe focuses on **simplicity** and only has two core concepts: endpoints and backends.
+Ray Serve focuses on **simplicity** and only has two core concepts: endpoints and backends.
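The endpoint/backend split named in the docs above can be illustrated with a small plain-Python sketch: an endpoint is a named route exposed to clients, and a backend is the callable that actually serves it. The `ToyServe` class and its method names are hypothetical stand-ins, not Ray Serve's real API.

```python
# Toy model of the two core concepts: an "endpoint" is a named route,
# a "backend" is the implementation registered behind it. Keeping the
# mapping indirect is what lets one endpoint swap or split backends.

class ToyServe:
    def __init__(self):
        self.backends = {}   # backend name -> callable
        self.endpoints = {}  # endpoint route -> backend name

    def create_backend(self, name, func):
        self.backends[name] = func

    def create_endpoint(self, route, backend_name):
        self.endpoints[route] = backend_name

    def handle(self, route, request):
        backend = self.backends[self.endpoints[route]]
        return backend(request)

serve = ToyServe()
serve.create_backend("echo:v1", lambda req: {"echo": req})
serve.create_endpoint("/echo", "echo:v1")
print(serve.handle("/echo", "hello"))  # → {'echo': 'hello'}
```

Because the endpoint only stores a backend *name*, re-pointing `/echo` at a new backend is a one-line registry update rather than a redeploy.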

 To follow along, you'll need to make the necessary imports.
@@ -128,9 +128,9 @@ Once you define the function (or class) that will handle a request.
 You'd use a function when your response is stateless and a class when you
 might need to maintain some state (like a model).
 For both functions and classes (that take as input Flask Requests), you'll need to
-define them as backends to RayServe.
+define them as backends to Ray Serve.

-It's important to note that RayServe places these backends in individual workers, which are replicas of the model.
+It's important to note that Ray Serve places these backends in individual workers, which are replicas of the model.

 .. code-block:: python
@@ -229,7 +229,7 @@ It's trivial to also split traffic, simply specify the endpoint and the backends
 Batching
 ++++++++

-You can also have RayServe batch requests for performance. You'll configure this in the backend config.
+You can also have Ray Serve batch requests for performance. You'll configure this in the backend config.

 .. code-block:: python
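The micro-batching idea this section configures can be sketched in plain Python, independent of Ray Serve's backend-config API: requests are buffered until the batch is full or a timeout expires, then handled in one call. The `Batcher` class and `max_batch_size`/`timeout_s` knobs here are illustrative, not Ray Serve's real configuration keys.

```python
import time

# Sketch of micro-batching: buffering requests and handing them to the
# handler as a list, so the handler can vectorize work across requests.

class Batcher:
    def __init__(self, handler, max_batch_size=4, timeout_s=60.0):
        self.handler = handler          # processes a *list* of requests
        self.max_batch_size = max_batch_size
        self.timeout_s = timeout_s
        self.buffer = []
        self.deadline = None

    def submit(self, request):
        if not self.buffer:
            # First request of a batch starts the flush timer.
            self.deadline = time.monotonic() + self.timeout_s
        self.buffer.append(request)
        if len(self.buffer) >= self.max_batch_size or time.monotonic() >= self.deadline:
            return self.flush()
        return None  # request is parked in the current batch

    def flush(self):
        batch, self.buffer = self.buffer, []
        return self.handler(batch)

def double_batch(requests):
    # A vectorizable handler: one call services the whole batch.
    return [2 * r for r in requests]

batcher = Batcher(double_batch, max_batch_size=4)
outputs = [batcher.submit(i) for i in range(8)]
# Every 4th submission fills the batch and triggers a flush.
print(outputs)
```

The trade-off this sketch makes visible: larger `max_batch_size` amortizes per-call overhead, while `timeout_s` bounds the latency a lone request can wait.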
@@ -298,7 +298,7 @@ Other Resources
 Frameworks
 ~~~~~~~~~~
-RayServe makes it easy to deploy models from all popular frameworks.
+Ray Serve makes it easy to deploy models from all popular frameworks.
 Learn more about how to deploy your model in the following tutorials:

 - :ref:`Tensorflow & Keras <serve-tensorflow-tutorial>`
@@ -9,16 +9,16 @@ In particular, we show:
 - How to load the model from PyTorch's pre-trained modelzoo.
 - How to parse the JSON request, transform the payload and evaluated in the model.

-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.

-This tutorial requires Pytorch and Torchvision installed in your system. RayServe
+This tutorial requires Pytorch and Torchvision installed in your system. Ray Serve
 is :ref:`framework agnostic <serve_frameworks>` and work with any version of PyTorch.

 .. code-block:: bash

    pip install torch torchvision

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_pytorch.py
    :start-after: __doc_import_begin__
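The "parse the JSON request, transform the payload" step listed in the tutorial above can be sketched framework-free. The payload shape (`{"image": [...]}`) and the `parse_request` helper are hypothetical; the real tutorial decodes an image and builds a torch tensor for the model.

```python
import json

# Framework-free stand-in for the request-parsing step: decode a JSON
# body and normalize 8-bit pixel values into [0, 1] for a model.

def parse_request(body: bytes):
    payload = json.loads(body)
    pixels = payload["image"]
    return [p / 255.0 for p in pixels]

body = json.dumps({"image": [0, 51, 255]}).encode()
print(parse_request(body))  # → [0.0, 0.2, 1.0]
```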
@@ -32,7 +32,7 @@ The ``__call__`` method will be invoked per request.
 :start-after: __doc_define_servable_begin__
 :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the digit classifier task, a
 :ref:`backend <serve-backend>` correspond the physical implementation, and connect them together.
@@ -6,18 +6,18 @@ Scikit-Learn Tutorial
 In this guide, we will train and deploy a simple Scikit-Learn classifier.
 In particular, we show:

-- How to load the model from file system in your RayServe definition
+- How to load the model from file system in your Ray Serve definition
 - How to parse the JSON request and evaluated in sklearn model

-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.

-RayServe supports :ref:`arbitrary frameworks <serve_frameworks>`. You can use any version of sklearn.
+Ray Serve supports :ref:`arbitrary frameworks <serve_frameworks>`. You can use any version of sklearn.

 .. code-block:: bash

    pip install scikit-learn

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_sklearn.py
    :start-after: __doc_import_begin__
@@ -36,7 +36,7 @@ The ``__call__`` method will be invoked per request.
 :start-after: __doc_define_servable_begin__
 :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the classifier task, a
 :ref:`backend <serve-backend>` correspond the physical implementation, and connect them together.
@@ -6,12 +6,12 @@ Keras and Tensorflow Tutorial
 In this guide, we will train and deploy a simple Tensorflow neural net.
 In particular, we show:

-- How to load the model from file system in your RayServe definition
+- How to load the model from file system in your Ray Serve definition
 - How to parse the JSON request and evaluated in Tensorflow

-Please see the :ref:`overview <rayserve-overview>` to learn more general information about RayServe.
+Please see the :ref:`overview <rayserve-overview>` to learn more general information about Ray Serve.

-RayServe makes it easy to deploy models from :ref:`all popular frameworks <serve_frameworks>`.
+Ray Serve makes it easy to deploy models from :ref:`all popular frameworks <serve_frameworks>`.
 However, for this tutorial, we use Tensorflow 2 and Keras. Please make sure you have
 Tensorflow 2 installed.
@@ -20,7 +20,7 @@ Tensorflow 2 installed.

    pip install "tensorflow>=2.0"

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_tensorflow.py
    :start-after: __doc_import_begin__
@@ -39,7 +39,7 @@ The ``__call__`` method will be invoked per request.
 :start-after: __doc_define_servable_begin__
 :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint <serve-endpoint>` for the route representing the digit classifier task, a
 :ref:`backend <serve-backend>` correspond the physical implementation, and connect them together.