From e5d089384ba0876a28e1d865ed48e6ab79ce4154 Mon Sep 17 00:00:00 2001
From: Eric Liang
Date: Tue, 1 Sep 2020 09:48:35 -0700
Subject: [PATCH] [1.0] Ray whitepaper link and tagline update (#10455)

---
 README.rst                              | 15 +++++++++++----
 doc/site/_config.yml                    |  4 ++--
 doc/site/index.html                     |  2 +-
 doc/source/cluster/autoscaling.rst      |  4 ++--
 doc/source/cluster/launcher.rst         |  2 +-
 doc/source/index.rst                    | 20 ++++++++++++--------
 doc/source/ray-overview/basics.rst      |  2 +-
 doc/source/ray-overview/involvement.rst |  2 +-
 doc/source/rllib-training.rst           |  2 +-
 doc/source/serve/deployment.rst         |  2 +-
 doc/source/whitepaper.rst               |  4 ++++
 java/pom.xml                            |  2 +-
 12 files changed, 38 insertions(+), 23 deletions(-)
 create mode 100644 doc/source/whitepaper.rst

diff --git a/README.rst b/README.rst
index 4073150c7..a522d9d91 100644
--- a/README.rst
+++ b/README.rst
@@ -6,7 +6,7 @@

 |

-**Ray is a fast and simple framework for building and running distributed applications.**
+**Ray provides a simple and universal API for building distributed applications.**

 Ray is packaged with the following libraries for accelerating machine learning workloads:

@@ -261,14 +261,21 @@ More Information
 - `Documentation`_
 - `Tutorial`_
 - `Blog`_
-- `Ray paper`_
-- `Ray HotOS paper`_
+- `Ray 1.0 Architecture whitepaper`_ **(new)**
 - `RLlib paper`_
 - `Tune paper`_

+*Older documents:*
+
+- `Ray paper`_
+- `Ray HotOS paper`_
+- `Blog (old)`_
+
 .. _`Documentation`: http://docs.ray.io/en/latest/index.html
 .. _`Tutorial`: https://github.com/ray-project/tutorial
-.. _`Blog`: https://ray-project.github.io/
+.. _`Blog (old)`: https://ray-project.github.io/
+.. _`Blog`: https://medium.com/distributed-computing-with-ray
+.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
diff --git a/doc/site/_config.yml b/doc/site/_config.yml
index ac5a16bf0..bc365278c 100644
--- a/doc/site/_config.yml
+++ b/doc/site/_config.yml
@@ -13,10 +13,10 @@
 # you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
 # You can create any custom variable you would like, and they will be accessible
 # in the templates via {{ site.myvariable }}.
-title: "Ray: A fast and simple framework for distributed applications"
+title: "Ray: A simple and universal API for building distributed applications"
 email: ""
 description: > # this means to ignore newlines until "baseurl:"
-  Ray is a fast and simple framework for building and running distributed applications.
+  Ray provides a simple and universal API for building distributed applications.
 baseurl: "" # the subpath of your site, e.g. /blog
 url: "" # the base hostname & protocol for your site, e.g. http://example.com
 github_username: ray-project
diff --git a/doc/site/index.html b/doc/site/index.html
index 8f0ae9b01..569616f46 100644
--- a/doc/site/index.html
+++ b/doc/site/index.html
@@ -19,7 +19,7 @@ layout: default
-          Ray is a fast and simple framework for building and running distributed applications.
+          Ray provides a simple and universal API for building distributed applications.
diff --git a/doc/source/cluster/autoscaling.rst b/doc/source/cluster/autoscaling.rst
index 4113aa795..9a8774da8 100644
--- a/doc/source/cluster/autoscaling.rst
+++ b/doc/source/cluster/autoscaling.rst
@@ -39,7 +39,7 @@ The basic autoscaling config settings are as follows:
 Multiple Node Type Autoscaling
 ------------------------------

-Ray supports multiple node types in a single cluster. In this mode of operation, the scheduler will look at the queue of resource shape demands from the cluster (e.g., there might be 10 tasks queued each requesting ``{"GPU": 4, "CPU": 16}``), and tries to add the minimum set of nodes that can fulfill these resource demands. This enables precise, rapid scale up compared to looking only at resource utilization, as the autoscaler also has visiblity into the aggregate resource load.
+Ray supports multiple node types in a single cluster. In this mode of operation, the scheduler will look at the queue of resource shape demands from the cluster (e.g., there might be 10 tasks queued each requesting ``{"GPU": 4, "CPU": 16}``), and tries to add the minimum set of nodes that can fulfill these resource demands. This enables precise, rapid scale up compared to looking only at resource utilization, as the autoscaler also has visibility into the queue of resource demands.

 The concept of a cluster node type encompasses both the physical instance type (e.g., AWS p3.8xl GPU nodes vs m4.16xl CPU nodes), as well as other attributes (e.g., IAM role, the machine image, etc). `Custom resources `__ can be specified for each node type so that Ray is aware of the demand for specific node types at the application level (e.g., a task may request to be placed on a machine with a specific role or machine image via custom resource).

@@ -108,7 +108,7 @@ The ``max_workers`` field constrains the number of nodes of this type that can b

     max_workers: 4

-The ``worker_setup_commands`` field can be used to override the setup and initialization commands for a node type. Note that you can only override the setup for worker nodes. The head node's setup commands are always configured via the top level field in the cluster YAML:
+The ``worker_setup_commands`` field (and also the ``initialization_commands`` field, not shown) can be used to override the setup and initialization commands for a node type. Note that you can only override the setup for worker nodes. The head node's setup commands are always configured via the top level field in the cluster YAML:

 .. code::

diff --git a/doc/source/cluster/launcher.rst b/doc/source/cluster/launcher.rst
index cde1eb305..947b08ccd 100644
--- a/doc/source/cluster/launcher.rst
+++ b/doc/source/cluster/launcher.rst
@@ -188,7 +188,7 @@ logs in ``/tmp/ray/session_*/logs/monitor*``.

     $ ray monitor cluster.yaml

-The Ray autoscaler also reports per-node status in the form of instance tags. In your cloud provider console, you can click on a Node, go the the "Tags" pane, and add the ``ray-node-status`` tag as a column. This lets you see per-node statuses at a glance:
+The Ray autoscaler also reports per-node status in the form of instance tags. In your cloud provider console, you can click on a Node, go to the "Tags" pane, and add the ``ray-node-status`` tag as a column. This lets you see per-node statuses at a glance:

 .. image:: /images/autoscaler-status.png
diff --git a/doc/source/index.rst b/doc/source/index.rst
index ccbcc4d26..f0150f41d 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -99,22 +99,24 @@ Slides
 - `Talk given in October 2019 `_
 - [Tune] `Talk given at RISECamp 2019 `_

-Academic Papers
----------------
+Papers
+------

-- `Ray paper`_
-- `Ray HotOS paper`_
+- `Ray 1.0 Architecture whitepaper`_ **(new)**
 - `RLlib paper`_
 - `Tune paper`_

+*Older papers:*
+
+- `Ray paper`_
+- `Ray HotOS paper`_
+
+.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
 .. _`Tune paper`: https://arxiv.org/abs/1807.05118

-
-
-
 .. toctree::
    :hidden:
    :maxdepth: -1
@@ -146,6 +148,7 @@ Academic Papers
    cluster/deploy.rst

 .. toctree::
+   :hidden:
    :maxdepth: -1
    :caption: Ray Serve

@@ -201,7 +204,7 @@ Academic Papers
 .. toctree::
    :hidden:
    :maxdepth: -1
-   :caption: Other Libraries
+   :caption: Community Libraries

    multiprocessing.rst
    joblib.rst
@@ -229,6 +232,7 @@ Academic Papers
    :caption: Development and Ray Internals

    development.rst
+   whitepaper.rst
    debugging.rst
    profiling.rst
    fault-tolerance.rst
diff --git a/doc/source/ray-overview/basics.rst b/doc/source/ray-overview/basics.rst
index b9148149b..74015d52a 100644
--- a/doc/source/ray-overview/basics.rst
+++ b/doc/source/ray-overview/basics.rst
@@ -7,7 +7,7 @@

 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

-**Ray is a fast and simple framework for building and running distributed applications.**
+**Ray provides a simple and universal API for building distributed applications.**

 Ray accomplishes this mission by:

diff --git a/doc/source/ray-overview/involvement.rst b/doc/source/ray-overview/involvement.rst
index df8c7d715..1020debd5 100644
--- a/doc/source/ray-overview/involvement.rst
+++ b/doc/source/ray-overview/involvement.rst
@@ -14,4 +14,4 @@ researchers, and folks that love machine learning. Here's a list of tips for get
 .. _`Pull Requests`: https://github.com/ray-project/ray/pulls
 .. _`Twitter`: https://twitter.com/raydistributed
 .. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
-.. _`on GitHub`: https://github.com/ray-project/ray
\ No newline at end of file
+.. _`on GitHub`: https://github.com/ray-project/ray
diff --git a/doc/source/rllib-training.rst b/doc/source/rllib-training.rst
index 991a19df2..b5e60a3ff 100644
--- a/doc/source/rllib-training.rst
+++ b/doc/source/rllib-training.rst
@@ -912,7 +912,7 @@ Using PyTorch
 ~~~~~~~~~~~~~

 Trainers that have an implemented TorchPolicy, will allow you to run
-`rllib train` using the the command line ``--torch`` flag.
+`rllib train` using the command line ``--torch`` flag.
 Algorithms that do not have a torch version yet will complain with an error in
 this case.

diff --git a/doc/source/serve/deployment.rst b/doc/source/serve/deployment.rst
index 223d11a7a..b62684bc5 100644
--- a/doc/source/serve/deployment.rst
+++ b/doc/source/serve/deployment.rst
@@ -297,7 +297,7 @@ You can run multiple serve instances on the same Ray cluster by providing a ``na
     serve.create_backend("backend2", function)
     serve.create_endpoint("endpoint2", backend="backend2", route="/increment")

-    # Switch back the the first cluster and create the same backend on it.
+    # Switch back to the first cluster and create the same backend on it.
     serve.init(name="cluster1")
     serve.create_backend("backend1", function)
     serve.create_endpoint("endpoint1", backend="backend1", route="/increment")
diff --git a/doc/source/whitepaper.rst b/doc/source/whitepaper.rst
new file mode 100644
index 000000000..7cf348b65
--- /dev/null
+++ b/doc/source/whitepaper.rst
@@ -0,0 +1,4 @@
+Ray Whitepaper
+==============
+
+For an in-depth overview of Ray internals, check out the `Ray 1.0 Architecture whitepaper <https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview>`__.
diff --git a/java/pom.xml b/java/pom.xml
index ac722f9dc..10c5ac9b9 100644
--- a/java/pom.xml
+++ b/java/pom.xml
@@ -8,7 +8,7 @@
   0.9.0-SNAPSHOT
   pom
   Ray Project Parent POM
-  A fast and simple framework for building and running distributed applications.
+  A simple and universal API for building distributed applications.
   https://github.com/ray-project/ray