Ray
===

.. raw:: html

  <embed>
    <a href="https://github.com/ray-project/ray"><img style="position: absolute; top: 0; right: 0; border: 0;" src="https://camo.githubusercontent.com/365986a132ccd6a44c23a9169022c0b5c890c387/68747470733a2f2f73332e616d617a6f6e6177732e636f6d2f6769746875622f726962626f6e732f666f726b6d655f72696768745f7265645f6161303030302e706e67" alt="Fork me on GitHub" data-canonical-src="https://s3.amazonaws.com/github/ribbons/forkme_right_red_aa0000.png"></a>
  </embed>

*Ray is a fast and simple framework for building and running distributed applications.*

Ray comes with libraries that accelerate deep learning and reinforcement learning development:

- `Tune`_: Scalable Hyperparameter Search
- `RLlib`_: Scalable Reinforcement Learning
- `Distributed Training <distributed_training.html>`__

Install Ray with: ``pip install ray``. For nightly wheels, see the `Installation page <installation.html>`__.

View the `codebase on GitHub`_.

.. _`codebase on GitHub`: https://github.com/ray-project/ray

Quick Start
-----------

To run Python functions in parallel:

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    def f(x):
        return x * x

    futures = [f.remote(i) for i in range(4)]
    print(ray.get(futures))  # [0, 1, 4, 9]

To use Ray's actor model:

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    class Counter(object):
        def __init__(self):
            self.n = 0

        def increment(self):
            self.n += 1

        def read(self):
            return self.n

    counters = [Counter.remote() for i in range(4)]
    [c.increment.remote() for c in counters]
    futures = [c.read.remote() for c in counters]
    print(ray.get(futures))  # [1, 1, 1, 1]

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:

``ray submit [CLUSTER.YAML] example.py --start``

See more details in the `Cluster Launch page <autoscaling.html>`_.

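The cluster configuration is a YAML file. As a rough sketch of its general shape (a minimal, hypothetical AWS setup with placeholder values; the linked ``example-full.yaml`` is the authoritative version):

.. code-block:: yaml

    cluster_name: minimal
    min_workers: 0
    max_workers: 2
    provider:
        type: aws
        region: us-west-2
    auth:
        ssh_user: ubuntu
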
Tune Quick Start
----------------

`Tune`_ is a scalable framework for hyperparameter search built on top of Ray with a focus on deep learning and deep reinforcement learning.

.. note::

    To run this example, you will need to install the following:

    .. code-block:: bash

        $ pip install ray torch torchvision filelock

This example runs a small grid search to train a CNN using PyTorch and Tune.

.. literalinclude:: ../../python/ray/tune/tests/example.py
   :language: python
   :start-after: __quick_start_begin__
   :end-before: __quick_start_end__

If TensorBoard is installed, automatically visualize all trial results:

.. code-block:: bash

    tensorboard --logdir ~/ray_results

.. _`Tune`: tune.html

RLlib Quick Start
-----------------

`RLlib`_ is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.

.. code-block:: bash

    pip install tensorflow  # or tensorflow-gpu
    pip install ray[rllib]  # also recommended: ray[debug]

.. code-block:: python

    import gym
    from gym.spaces import Discrete, Box
    from ray import tune

    class SimpleCorridor(gym.Env):
        def __init__(self, config):
            self.end_pos = config["corridor_length"]
            self.cur_pos = 0
            self.action_space = Discrete(2)
            self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

        def reset(self):
            self.cur_pos = 0
            return [self.cur_pos]

        def step(self, action):
            if action == 0 and self.cur_pos > 0:
                self.cur_pos -= 1
            elif action == 1:
                self.cur_pos += 1
            done = self.cur_pos >= self.end_pos
            return [self.cur_pos], 1 if done else 0, done, {}

    tune.run(
        "PPO",
        config={
            "env": SimpleCorridor,
            "num_workers": 4,
            "env_config": {"corridor_length": 5},
        },
    )

.. _`RLlib`: rllib.html

Contact
-------

The following are good places to discuss Ray.

1. `ray-dev@googlegroups.com`_: For discussions about development or any general questions.
2. `StackOverflow`_: For questions about how to use Ray.
3. `GitHub Issues`_: For bug reports and feature requests.

.. _`ray-dev@googlegroups.com`: https://groups.google.com/forum/#!forum/ray-dev
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray

.. toctree::
   :maxdepth: 1
   :caption: Installation

   installation.rst

.. toctree::
   :maxdepth: 1
   :caption: Using Ray

   walkthrough.rst
   actors.rst
   using-ray-with-gpus.rst
   user-profiling.rst
   inspect.rst
   configure.rst
   advanced.rst
   troubleshooting.rst
   package-ref.rst

.. toctree::
   :maxdepth: 1
   :caption: Cluster Setup

   autoscaling.rst
   using-ray-on-a-cluster.rst
   deploy-on-kubernetes.rst

.. toctree::
   :maxdepth: 1
   :caption: Tune

   tune.rst
   tune-tutorial.rst
   tune-usage.rst
   tune-distributed.rst
   tune-schedulers.rst
   tune-searchalg.rst
   tune-package-ref.rst
   tune-design.rst
   tune-examples.rst
   tune-contrib.rst

.. toctree::
   :maxdepth: 1
   :caption: RLlib

   rllib.rst
   rllib-training.rst
   rllib-env.rst
   rllib-models.rst
   rllib-algorithms.rst
   rllib-offline.rst
   rllib-concepts.rst
   rllib-examples.rst
   rllib-dev.rst
   rllib-package-ref.rst

.. toctree::
   :maxdepth: 1
   :caption: Experimental

   distributed_training.rst
   pandas_on_ray.rst
   signals.rst
   async_api.rst

.. toctree::
   :maxdepth: 1
   :caption: Examples

   example-rl-pong.rst
   example-parameter-server.rst
   example-newsreader.rst
   example-resnet.rst
   example-a3c.rst
   example-lbfgs.rst
   example-streaming.rst
   using-ray-with-tensorflow.rst

.. toctree::
   :maxdepth: 1
   :caption: Development and Internals

   install-source.rst
   development.rst
   profiling.rst
   internals-overview.rst
   fault-tolerance.rst
   contrib.rst