[1.0] Ray whitepaper link and tagline update (#10455)
commit e5d089384b
parent 9c2c952262

12 changed files with 38 additions and 23 deletions

README.rst (15 changed lines)
@@ -6,7 +6,7 @@
 
 
-**Ray is a fast and simple framework for building and running distributed applications.**
+**Ray provides a simple and universal API for building distributed applications.**
 
 Ray is packaged with the following libraries for accelerating machine learning workloads:
 
@@ -261,14 +261,21 @@ More Information
 - `Documentation`_
 - `Tutorial`_
 - `Blog`_
-- `Ray paper`_
-- `Ray HotOS paper`_
+- `Ray 1.0 Architecture whitepaper`_ **(new)**
 - `RLlib paper`_
 - `Tune paper`_
 
+*Older documents:*
+
+- `Ray paper`_
+- `Ray HotOS paper`_
+- `Blog (old)`_
+
 .. _`Documentation`: http://docs.ray.io/en/latest/index.html
 .. _`Tutorial`: https://github.com/ray-project/tutorial
-.. _`Blog`: https://ray-project.github.io/
+.. _`Blog (old)`: https://ray-project.github.io/
+.. _`Blog`: https://medium.com/distributed-computing-with-ray
+.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
@@ -13,10 +13,10 @@
 # you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
 # You can create any custom variable you would like, and they will be accessible
 # in the templates via {{ site.myvariable }}.
-title: "Ray: A fast and simple framework for distributed applications"
+title: "Ray: A simple and universal API for building distributed applications"
 email: ""
 description: > # this means to ignore newlines until "baseurl:"
-  Ray is a fast and simple framework for building and running distributed applications.
+  Ray provides a simple and universal API for building distributed applications.
 baseurl: "" # the subpath of your site, e.g. /blog
 url: "" # the base hostname & protocol for your site, e.g. http://example.com
 github_username: ray-project
@@ -19,7 +19,7 @@ layout: default
 </p>
 
 <p>
-  <b>Ray is a fast and simple framework for building and running distributed applications.</b>
+  <b>Ray provides a simple and universal API for building distributed applications.</b>
 </p>
 
 <p>
@@ -39,7 +39,7 @@ The basic autoscaling config settings are as follows:
 Multiple Node Type Autoscaling
 ------------------------------
 
-Ray supports multiple node types in a single cluster. In this mode of operation, the scheduler will look at the queue of resource shape demands from the cluster (e.g., there might be 10 tasks queued each requesting ``{"GPU": 4, "CPU": 16}``), and tries to add the minimum set of nodes that can fulfill these resource demands. This enables precise, rapid scale up compared to looking only at resource utilization, as the autoscaler also has visiblity into the aggregate resource load.
+Ray supports multiple node types in a single cluster. In this mode of operation, the scheduler will look at the queue of resource shape demands from the cluster (e.g., there might be 10 tasks queued each requesting ``{"GPU": 4, "CPU": 16}``), and tries to add the minimum set of nodes that can fulfill these resource demands. This enables precise, rapid scale up compared to looking only at resource utilization, as the autoscaler also has visibility into the queue of resource demands.
 
 The concept of a cluster node type encompasses both the physical instance type (e.g., AWS p3.8xl GPU nodes vs m4.16xl CPU nodes), as well as other attributes (e.g., IAM role, the machine image, etc). `Custom resources <configure.html>`__ can be specified for each node type so that Ray is aware of the demand for specific node types at the application level (e.g., a task may request to be placed on a machine with a specific role or machine image via custom resource).
 
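For context, the multi-node-type setup this hunk documents is driven by a cluster YAML along these lines. This is a minimal sketch: the field names ``available_node_types`` and ``head_node_type``, and the AWS instance types shown, are assumptions for illustration, not taken from this diff.

.. code:: yaml

    # Sketch of a two-node-type cluster config (illustrative values).
    cluster_name: multi-node-type-demo
    max_workers: 8
    available_node_types:
        cpu_node:
            node_config:
                InstanceType: m4.16xlarge    # assumed CPU instance type
            resources: {"CPU": 64}
            max_workers: 4
        gpu_node:
            node_config:
                InstanceType: p3.8xlarge     # assumed GPU instance type
            resources: {"CPU": 32, "GPU": 4}
            max_workers: 2
    head_node_type: cpu_node

Under a config like this, ten queued tasks each requesting ``{"GPU": 4, "CPU": 16}`` would lead the autoscaler to add ``gpu_node`` workers directly, rather than waiting on utilization signals.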
@@ -108,7 +108,7 @@ The ``max_workers`` field constrains the number of nodes of this type that can be
 
     max_workers: 4
 
-The ``worker_setup_commands`` field can be used to override the setup and initialization commands for a node type. Note that you can only override the setup for worker nodes. The head node's setup commands are always configured via the top level field in the cluster YAML:
+The ``worker_setup_commands`` field (and also the ``initialization_commands`` field, not shown) can be used to override the setup and initialization commands for a node type. Note that you can only override the setup for worker nodes. The head node's setup commands are always configured via the top level field in the cluster YAML:
 
 .. code::
 
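A rough sketch of the override the new sentence describes (the placement of the top-level field relative to the per-type override is assumed from the surrounding docs, and the pip commands are illustrative):

.. code:: yaml

    # Illustrative: top-level setup vs. a worker-only override for one node type.
    setup_commands:
        - pip install -U ray               # top-level setup; always used for the head node
    available_node_types:
        gpu_node:
            worker_setup_commands:
                - pip install torch        # replaces the top-level setup on gpu_node workers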
@@ -188,7 +188,7 @@ logs in ``/tmp/ray/session_*/logs/monitor*``.
 
     $ ray monitor cluster.yaml
 
-The Ray autoscaler also reports per-node status in the form of instance tags. In your cloud provider console, you can click on a Node, go the the "Tags" pane, and add the ``ray-node-status`` tag as a column. This lets you see per-node statuses at a glance:
+The Ray autoscaler also reports per-node status in the form of instance tags. In your cloud provider console, you can click on a Node, go to the "Tags" pane, and add the ``ray-node-status`` tag as a column. This lets you see per-node statuses at a glance:
 
 .. image:: /images/autoscaler-status.png
 
@@ -99,22 +99,24 @@ Slides
 - `Talk given in October 2019 <https://docs.google.com/presentation/d/13K0JsogYQX3gUCGhmQ1PQ8HILwEDFysnq0cI2b88XbU/edit?usp=sharing>`_
 - [Tune] `Talk given at RISECamp 2019 <https://docs.google.com/presentation/d/1v3IldXWrFNMK-vuONlSdEuM82fuGTrNUDuwtfx4axsQ/edit?usp=sharing>`_
 
-Academic Papers
----------------
+Papers
+------
 
-- `Ray paper`_
-- `Ray HotOS paper`_
+- `Ray 1.0 Architecture whitepaper`_ **(new)**
 - `RLlib paper`_
 - `Tune paper`_
 
+*Older papers:*
+
+- `Ray paper`_
+- `Ray HotOS paper`_
+
+.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
 .. _`Tune paper`: https://arxiv.org/abs/1807.05118
 
 
 .. toctree::
    :hidden:
    :maxdepth: -1
@@ -146,6 +148,7 @@ Academic Papers
    cluster/deploy.rst
 
+
 .. toctree::
    :hidden:
    :maxdepth: -1
    :caption: Ray Serve
 
@@ -201,7 +204,7 @@ Academic Papers
 .. toctree::
    :hidden:
    :maxdepth: -1
-   :caption: Other Libraries
+   :caption: Community Libraries
 
    multiprocessing.rst
    joblib.rst
@@ -229,6 +232,7 @@ Academic Papers
    :caption: Development and Ray Internals
 
    development.rst
+   whitepaper.rst
    debugging.rst
    profiling.rst
    fault-tolerance.rst
@@ -7,7 +7,7 @@
 
 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
 
-**Ray is a fast and simple framework for building and running distributed applications.**
+**Ray provides a simple and universal API for building distributed applications.**
 
 Ray accomplishes this mission by:
 
@@ -14,4 +14,4 @@ researchers, and folks that love machine learning. Here's a list of tips for getting
 .. _`Pull Requests`: https://github.com/ray-project/ray/pulls
 .. _`Twitter`: https://twitter.com/raydistributed
 .. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
-.. _`on GitHub`: https://github.com/ray-project/ray
+.. _`on GitHub`: https://github.com/ray-project/ray
@@ -912,7 +912,7 @@ Using PyTorch
 ~~~~~~~~~~~~~
 
 Trainers that have an implemented TorchPolicy, will allow you to run
-`rllib train` using the the command line ``--torch`` flag.
+`rllib train` using the command line ``--torch`` flag.
 Algorithms that do not have a torch version yet will complain with an error in
 this case.
 
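Concretely, such a run might be launched as follows; the algorithm and environment are illustrative choices, and only ``--torch`` is the flag this passage documents:

.. code::

    $ rllib train --run PPO --env CartPole-v0 --torch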
@@ -297,7 +297,7 @@ You can run multiple serve instances on the same Ray cluster by providing a ``name``
     serve.create_backend("backend2", function)
     serve.create_endpoint("endpoint2", backend="backend2", route="/increment")
 
-    # Switch back the the first cluster and create the same backend on it.
+    # Switch back to the first cluster and create the same backend on it.
     serve.init(name="cluster1")
     serve.create_backend("backend1", function)
     serve.create_endpoint("endpoint1", backend="backend1", route="/increment")
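For readability, here is a self-contained sketch of the two-instance pattern this hunk is excerpted from. It mirrors the calls shown in the hunk, adding only the setup around them; the trivial ``function`` body is an illustrative stand-in, assuming the Serve API of this release in which backends receive a Flask request.

.. code:: python

    import ray
    from ray import serve

    ray.init()

    # A trivial backend function (illustrative stand-in).
    def function(flask_request):
        return "hello"

    # One named Serve instance on the Ray cluster.
    serve.init(name="cluster2")
    serve.create_backend("backend2", function)
    serve.create_endpoint("endpoint2", backend="backend2", route="/increment")

    # Switch back to the first instance and create the same backend on it;
    # each named instance keeps its own backends and routes.
    serve.init(name="cluster1")
    serve.create_backend("backend1", function)
    serve.create_endpoint("endpoint1", backend="backend1", route="/increment")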
doc/source/whitepaper.rst (new file, 4 lines)
@@ -0,0 +1,4 @@
+Ray Whitepaper
+==============
+
+For an in-depth overview of Ray internals, check out the `Ray 1.0 Architecture whitepaper <https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview>`__.
@@ -8,7 +8,7 @@
   <version>0.9.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Ray Project Parent POM</name>
-  <description>A fast and simple framework for building and running distributed applications.
+  <description>A simple and universal API for building distributed applications.
   </description>
   <url>https://github.com/ray-project/ray</url>
 