From f8f7efc24f77e4491bc41daa7fb800cd07e524b2 Mon Sep 17 00:00:00 2001
From: Bill Chambers
Date: Tue, 19 May 2020 19:13:54 -0700
Subject: [PATCH] [Serve] Rename RayServe -> "Ray Serve" in Documentation
 (#8504)

---
 doc/source/index.rst                        | 10 ++---
 doc/source/{rayserve => serve}/logo.svg     |  0
 doc/source/{rayserve => serve}/overview.rst | 42 +++++++++----------
 .../tutorials/pytorch-tutorial.rst          |  8 ++--
 .../tutorials/sklearn-tutorial.rst          | 10 ++---
 .../tutorials/tensorflow-tutorial.rst       | 10 ++---
 6 files changed, 40 insertions(+), 40 deletions(-)
 rename doc/source/{rayserve => serve}/logo.svg (100%)
 rename doc/source/{rayserve => serve}/overview.rst (87%)
 rename doc/source/{rayserve => serve}/tutorials/pytorch-tutorial.rst (93%)
 rename doc/source/{rayserve => serve}/tutorials/sklearn-tutorial.rst (85%)
 rename doc/source/{rayserve => serve}/tutorials/tensorflow-tutorial.rst (86%)

diff --git a/doc/source/index.rst b/doc/source/index.rst
index 5c4e04fa9..9ce466a35 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -280,12 +280,12 @@ Getting Involved

 .. toctree::
    :maxdepth: -1
-   :caption: RayServe
+   :caption: Ray Serve

-   rayserve/overview.rst
-   rayserve/tutorials/tensorflow-tutorial.rst
-   rayserve/tutorials/pytorch-tutorial.rst
-   rayserve/tutorials/sklearn-tutorial.rst
+   serve/overview.rst
+   serve/tutorials/tensorflow-tutorial.rst
+   serve/tutorials/pytorch-tutorial.rst
+   serve/tutorials/sklearn-tutorial.rst

 .. toctree::
    :maxdepth: -1
diff --git a/doc/source/rayserve/logo.svg b/doc/source/serve/logo.svg
similarity index 100%
rename from doc/source/rayserve/logo.svg
rename to doc/source/serve/logo.svg
diff --git a/doc/source/rayserve/overview.rst b/doc/source/serve/overview.rst
similarity index 87%
rename from doc/source/rayserve/overview.rst
rename to doc/source/serve/overview.rst
index de3d4a859..62998ed5a 100644
--- a/doc/source/rayserve/overview.rst
+++ b/doc/source/serve/overview.rst
@@ -1,7 +1,7 @@
 .. _rayserve:

-RayServe: Scalable and Programmable Serving
-===========================================
+Ray Serve: Scalable and Programmable Serving
+============================================

 .. image:: logo.svg
     :align: center
@@ -13,21 +13,21 @@ RayServe: Scalable and Programmable Serving
 Overview
 --------

-RayServe is a scalable model-serving library built on Ray.
+Ray Serve is a scalable model-serving library built on Ray.

-For users RayServe is:
+For users, Ray Serve is:

 - **Framework Agnostic**:Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or TensorFlow to scikit-learn models or arbitrary business logic.
 - **Python First**: Configure your model serving with pure Python code - no more YAMLs or JSON configs.

-RayServe enables:
+Ray Serve enables:

 - **A/B test models** with zero downtime by decoupling routing logic from response handling logic.
 - **Batching** built-in to help you meet your performance objectives.

-Since Ray is built on Ray, RayServe also allows you to **scale to many machines**
+Since Ray Serve is built on Ray, it also allows you to **scale to many machines**
 and allows you to leverage all of the other Ray frameworks so you can deploy and
 scale on any cloud.

 .. note::
@@ -37,7 +37,7 @@ and allows you to leverage all of the other Ray frameworks so you can deploy and
 Installation
 ~~~~~~~~~~~~

-RayServe supports Python versions 3.5 and higher. To install RayServe:
+Ray Serve supports Python versions 3.5 and higher. To install Ray Serve:

 .. code-block:: bash

@@ -45,8 +45,8 @@ RayServe supports Python versions 3.5 and higher. To install RayServe:

-RayServe in 90 Seconds
-~~~~~~~~~~~~~~~~~~~~~~
+Ray Serve in 90 Seconds
+~~~~~~~~~~~~~~~~~~~~~~~

 Serve a stateless function:

@@ -56,10 +56,10 @@ Serve a stateful class:

 .. literalinclude:: ../../../python/ray/serve/examples/doc/quickstart_class.py

-See :ref:`serve-key-concepts` for more information about working with RayServe.
+See :ref:`serve-key-concepts` for more information about working with Ray Serve.

-Why RayServe?
-~~~~~~~~~~~~~
+Why Ray Serve?
+~~~~~~~~~~~~~~

 There are generally two ways of serving machine learning applications, both with serious limitations:
 you can build using a **traditional webserver** - your own Flask app or you can use a cloud hosted solution.
@@ -68,24 +68,24 @@ The first approach is easy to get started with, but it's hard to scale each comp
 requires vendor lock-in (SageMaker), framework specific tooling (TFServing), and a general lack of flexibility.

-RayServe solves these problems by giving a user the ability to leverage the simplicity
+Ray Serve solves these problems by giving a user the ability to leverage the simplicity
 of deployment of a simple webserver but handles the complex routing, scaling, and testing logic
 necessary for production deployments.

-For more on the motivation behind RayServe, check out these `meetup slides `_.
+For more on the motivation behind Ray Serve, check out these `meetup slides `_.

 When should I use Ray Serve?
 ++++++++++++++++++++++++++++

-RayServe should be used when you need to deploy at least one model, preferrably many models.
-RayServe **won't work well** when you need to run batch prediction over a dataset. Given this use case, we recommend looking into `multiprocessing with Ray `_.
+Ray Serve should be used when you need to deploy at least one model, preferably many models.
+Ray Serve **won't work well** when you need to run batch prediction over a dataset. Given this use case, we recommend looking into `multiprocessing with Ray `_.

 .. _serve-key-concepts:

 Key Concepts
 ------------

-RayServe focuses on **simplicity** and only has two core concepts: endpoints and backends.
+Ray Serve focuses on **simplicity** and only has two core concepts: endpoints and backends.

 To follow along, you'll need to make the necessary imports.

@@ -128,9 +128,9 @@ Once you define the function (or class) that will handle a request.
 You'd use a function when your response is stateless and a class when you
 might need to maintain some state (like a model).
 For both functions and classes (that take as input Flask Requests), you'll need to
-define them as backends to RayServe.
+define them as backends to Ray Serve.

-It's important to note that RayServe places these backends in individual workers, which are replicas of the model.
+It's important to note that Ray Serve places these backends in individual workers, which are replicas of the model.

 .. code-block:: python

@@ -229,7 +229,7 @@ It's trivial to also split traffic, simply specify the endpoint and the backends
 Batching
 ++++++++

-You can also have RayServe batch requests for performance. You'll configure this in the backend config.
+You can also have Ray Serve batch requests for performance. You'll configure this in the backend config.

 .. code-block:: python

@@ -298,7 +298,7 @@ Other Resources
 Frameworks
 ~~~~~~~~~~

-RayServe makes it easy to deploy models from all popular frameworks.
+Ray Serve makes it easy to deploy models from all popular frameworks.
 Learn more about how to deploy your model in the following tutorials:

 - :ref:`Tensorflow & Keras `
diff --git a/doc/source/rayserve/tutorials/pytorch-tutorial.rst b/doc/source/serve/tutorials/pytorch-tutorial.rst
similarity index 93%
rename from doc/source/rayserve/tutorials/pytorch-tutorial.rst
rename to doc/source/serve/tutorials/pytorch-tutorial.rst
index 0c5dfe73f..fb2d2cd2c 100644
--- a/doc/source/rayserve/tutorials/pytorch-tutorial.rst
+++ b/doc/source/serve/tutorials/pytorch-tutorial.rst
@@ -9,16 +9,16 @@ In particular, we show:

 - How to load the model from PyTorch's pre-trained modelzoo.
 - How to parse the JSON request, transform the payload and evaluated in the model.

-Please see the :ref:`overview ` to learn more general information about RayServe.
+Please see the :ref:`overview ` to learn more general information about Ray Serve.

-This tutorial requires Pytorch and Torchvision installed in your system. RayServe
+This tutorial requires Pytorch and Torchvision installed in your system. Ray Serve
 is :ref:`framework agnostic ` and work with any version of PyTorch.

 .. code-block:: bash

   pip install torch torchvision

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_pytorch.py
     :start-after: __doc_import_begin__
@@ -32,7 +32,7 @@ The ``__call__`` method will be invoked per request.
     :start-after: __doc_define_servable_begin__
     :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint ` for the route representing the digit classifier task,
 a :ref:`backend ` correspond the physical implementation, and connect them together.
diff --git a/doc/source/rayserve/tutorials/sklearn-tutorial.rst b/doc/source/serve/tutorials/sklearn-tutorial.rst
similarity index 85%
rename from doc/source/rayserve/tutorials/sklearn-tutorial.rst
rename to doc/source/serve/tutorials/sklearn-tutorial.rst
index 3f14543f1..3e0d5f745 100644
--- a/doc/source/rayserve/tutorials/sklearn-tutorial.rst
+++ b/doc/source/serve/tutorials/sklearn-tutorial.rst
@@ -6,18 +6,18 @@ Scikit-Learn Tutorial
 In this guide, we will train and deploy a simple Scikit-Learn classifier.
 In particular, we show:

-- How to load the model from file system in your RayServe definition
+- How to load the model from the file system in your Ray Serve definition
 - How to parse the JSON request and evaluated in sklearn model

-Please see the :ref:`overview ` to learn more general information about RayServe.
+Please see the :ref:`overview ` to learn more general information about Ray Serve.

-RayServe supports :ref:`arbitrary frameworks `. You can use any version of sklearn.
+Ray Serve supports :ref:`arbitrary frameworks `. You can use any version of sklearn.

 .. code-block:: bash

   pip install scikit-learn

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_sklearn.py
     :start-after: __doc_import_begin__
@@ -36,7 +36,7 @@ The ``__call__`` method will be invoked per request.
     :start-after: __doc_define_servable_begin__
     :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint ` for the route representing the classifier task,
 a :ref:`backend ` correspond the physical implementation, and connect them together.
diff --git a/doc/source/rayserve/tutorials/tensorflow-tutorial.rst b/doc/source/serve/tutorials/tensorflow-tutorial.rst
similarity index 86%
rename from doc/source/rayserve/tutorials/tensorflow-tutorial.rst
rename to doc/source/serve/tutorials/tensorflow-tutorial.rst
index 7039b72b9..e63b9a1f4 100644
--- a/doc/source/rayserve/tutorials/tensorflow-tutorial.rst
+++ b/doc/source/serve/tutorials/tensorflow-tutorial.rst
@@ -6,12 +6,12 @@ Keras and Tensorflow Tutorial
 In this guide, we will train and deploy a simple Tensorflow neural net.
 In particular, we show:

-- How to load the model from file system in your RayServe definition
+- How to load the model from the file system in your Ray Serve definition
 - How to parse the JSON request and evaluated in Tensorflow

-Please see the :ref:`overview ` to learn more general information about RayServe.
+Please see the :ref:`overview ` to learn more general information about Ray Serve.

-RayServe makes it easy to deploy models from :ref:`all popular frameworks `.
+Ray Serve makes it easy to deploy models from :ref:`all popular frameworks `.
 However, for this tutorial, we use Tensorflow 2 and Keras. Please make sure you have
 Tensorflow 2 installed.
@@ -20,7 +20,7 @@ Tensorflow 2 installed.

   pip install "tensorflow>=2.0"

-Let's import RayServe and some other helpers.
+Let's import Ray Serve and some other helpers.

 .. literalinclude:: ../../../../python/ray/serve/examples/doc/tutorial_tensorflow.py
     :start-after: __doc_import_begin__
@@ -39,7 +39,7 @@ The ``__call__`` method will be invoked per request.
     :start-after: __doc_define_servable_begin__
     :end-before: __doc_define_servable_end__

-Now that we've defined our services, let's deploy the model to RayServe. We will
+Now that we've defined our services, let's deploy the model to Ray Serve. We will
 define an :ref:`endpoint ` for the route representing the digit classifier task,
 a :ref:`backend ` correspond the physical implementation, and connect them together.
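The overview hunks in this patch describe Ray Serve's built-in batching only in prose (the surrounding ``.. code-block:: python`` bodies fall outside the diff context). As a purely illustrative, framework-free sketch of the batching idea those sections describe, not Ray Serve's actual API, every name below is hypothetical:

```python
def batch_requests(requests, max_batch_size):
    # Hypothetical helper: split a queue of requests into
    # batches of at most max_batch_size, preserving order.
    return [requests[i:i + max_batch_size]
            for i in range(0, len(requests), max_batch_size)]


def serve_batched(requests, model, max_batch_size=4):
    # Invoke the model once per batch instead of once per request;
    # fewer, larger calls are the performance win batching aims for.
    results = []
    calls = 0
    for batch in batch_requests(requests, max_batch_size):
        results.extend(model(batch))  # one vectorized call per batch
        calls += 1
    return results, calls


# Example: a toy "model" that doubles each input.
outputs, calls = serve_batched(list(range(10)), lambda xs: [2 * x for x in xs])
# 10 requests with max_batch_size=4 -> 3 model calls instead of 10
```

Ray Serve of this era exposed the equivalent knob through the backend config rather than a helper like this; the sketch only shows the grouping logic, not the real configuration surface.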