.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

.. image:: https://travis-ci.com/ray-project/ray.svg?branch=master
    :target: https://travis-ci.com/ray-project/ray

.. image:: https://readthedocs.org/projects/ray/badge/?version=latest
    :target: https://docs.ray.io/en/latest/?badge=latest

|


**Ray is a fast and simple framework for building and running distributed applications.**

Ray is packaged with the following libraries for accelerating machine learning workloads:

- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `RaySGD <https://docs.ray.io/en/latest/raysgd/raysgd.html>`__: Distributed Training Wrappers

Install Ray with: ``pip install ray``. For nightly wheels, see the
`Installation page <https://docs.ray.io/en/latest/installation.html>`__.

**NOTE:** As of Ray 0.8.1, Python 2 is no longer supported.

Quick Start
-----------

Execute Python functions in parallel.

.. code-block:: python

    import ray
    ray.init()

    # Define a remote function: calls run asynchronously as Ray tasks.
    @ray.remote
    def f(x):
        return x * x

    # Launch four tasks in parallel; each call immediately returns a future.
    futures = [f.remote(i) for i in range(4)]
    print(ray.get(futures))  # [0, 1, 4, 9]

To use Ray's actor model:

.. code-block:: python

    import ray
    ray.init()

    # Define an actor: a stateful worker process whose methods run remotely.
    @ray.remote
    class Counter:
        def __init__(self):
            self.n = 0

        def increment(self):
            self.n += 1

        def read(self):
            return self.n

    # Create four actors and increment each counter once.
    counters = [Counter.remote() for _ in range(4)]
    [c.increment.remote() for c in counters]
    futures = [c.read.remote() for c in counters]
    print(ray.get(futures))  # [1, 1, 1, 1]


Ray programs can run on a single machine and seamlessly scale to large clusters. To execute the above Ray script in the cloud, download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:

``ray submit [CLUSTER.YAML] example.py --start``

Read more about `launching clusters <https://docs.ray.io/en/latest/autoscaling.html>`_.
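
The same configuration file drives the rest of the cluster lifecycle. A
minimal sketch (``cluster.yaml`` stands in for your own configuration file):

.. code-block:: bash

    ray up cluster.yaml      # launch a cluster from the config file
    ray attach cluster.yaml  # open a shell on the head node
    ray down cluster.yaml    # tear the cluster down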

Tune Quick Start
----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

`Tune`_ is a library for hyperparameter tuning at any scale.

- Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
- Supports any deep learning framework, including PyTorch, TensorFlow, and Keras.
- Visualize results with `TensorBoard <https://www.tensorflow.org/get_started/summaries_and_tensorboard>`__.
- Choose among scalable, state-of-the-art algorithms such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, and `HyperBand/ASHA`_ (a scheduler sketch follows the example below).
- Tune integrates with many optimization libraries such as `Facebook Ax <http://ax.dev>`_, `HyperOpt <https://github.com/hyperopt/hyperopt>`_, and `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_ and enables you to scale them transparently.

To run this example, you will need to install the following:

.. code-block:: bash

    pip install ray[tune]


This example runs a parallel grid search to optimize a toy objective function.

.. code-block:: python

    from ray import tune


    def objective(step, alpha, beta):
        return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


    def training_function(config):
        # Hyperparameters
        alpha, beta = config["alpha"], config["beta"]
        for step in range(10):
            # Iterative training function - can be any arbitrary training procedure.
            intermediate_score = objective(step, alpha, beta)
            # Feed the score back to Tune.
            tune.report(mean_loss=intermediate_score)


    analysis = tune.run(
        training_function,
        config={
            "alpha": tune.grid_search([0.001, 0.01, 0.1]),
            "beta": tune.choice([1, 2, 3])
        })

    print("Best config: ", analysis.get_best_config(metric="mean_loss"))

    # Get a dataframe for analyzing trial results.
    df = analysis.dataframe()
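
Tune's trial schedulers plug directly into ``tune.run``. A minimal sketch
using ASHA to stop underperforming trials early (assuming the
``training_function`` defined above):

.. code-block:: python

    from ray import tune
    from ray.tune.schedulers import ASHAScheduler

    analysis = tune.run(
        training_function,
        scheduler=ASHAScheduler(metric="mean_loss", mode="min"),
        config={
            "alpha": tune.grid_search([0.001, 0.01, 0.1]),
            "beta": tune.choice([1, 2, 3])
        })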

If TensorBoard is installed, you can automatically visualize all trial results:

.. code-block:: bash

    tensorboard --logdir ~/ray_results

.. _`Tune`: https://docs.ray.io/en/latest/tune.html
.. _`Population Based Training (PBT)`: https://docs.ray.io/en/latest/tune-schedulers.html#population-based-training-pbt
.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/latest/tune-schedulers.html#median-stopping-rule
.. _`HyperBand/ASHA`: https://docs.ray.io/en/latest/tune-schedulers.html#asynchronous-hyperband

RLlib Quick Start
-----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/rllib-wide.jpg

`RLlib`_ is an open-source library for reinforcement learning built on top of Ray. It offers both high scalability and a unified API for a variety of applications.

.. code-block:: bash

    pip install tensorflow  # or tensorflow-gpu
    pip install ray[rllib]  # also recommended: ray[debug]

.. code-block:: python

    import gym
    from gym.spaces import Discrete, Box
    from ray import tune

    # A toy environment: the agent walks right along a corridor to reach the goal.
    class SimpleCorridor(gym.Env):
        def __init__(self, config):
            self.end_pos = config["corridor_length"]
            self.cur_pos = 0
            self.action_space = Discrete(2)  # 0: move left, 1: move right
            self.observation_space = Box(0.0, self.end_pos, shape=(1,))

        def reset(self):
            self.cur_pos = 0
            return [self.cur_pos]

        def step(self, action):
            if action == 0 and self.cur_pos > 0:
                self.cur_pos -= 1
            elif action == 1:
                self.cur_pos += 1
            done = self.cur_pos >= self.end_pos
            # A reward of 1 is given only when the goal is reached.
            return [self.cur_pos], 1 if done else 0, done, {}

    # Train PPO on the custom environment with four rollout workers.
    tune.run(
        "PPO",
        config={
            "env": SimpleCorridor,
            "num_workers": 4,
            "env_config": {"corridor_length": 5}})

.. _`RLlib`: https://docs.ray.io/en/latest/rllib.html


More Information
----------------

- `Documentation`_
- `Tutorial`_
- `Blog`_
- `Ray paper`_
- `Ray HotOS paper`_
- `RLlib paper`_
- `Tune paper`_

.. _`Documentation`: https://docs.ray.io/en/latest/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog`: https://ray-project.github.io/
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
.. _`RLlib paper`: https://arxiv.org/abs/1712.09381
.. _`Tune paper`: https://arxiv.org/abs/1807.05118

Getting Involved
----------------

- `ray-dev@googlegroups.com`_: For discussions about development or any general
  questions.
- `StackOverflow`_: For questions about how to use Ray.
- `GitHub Issues`_: For reporting bugs and feature requests.
- `Pull Requests`_: For submitting code contributions.
- `Meetup Group`_: Join our meetup group.
- `Community Slack`_: Join our Slack workspace.
- `Twitter`_: Follow updates on Twitter.

.. _`ray-dev@googlegroups.com`: https://groups.google.com/forum/#!forum/ray-dev
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Pull Requests`: https://github.com/ray-project/ray/pulls
.. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
.. _`Community Slack`: https://forms.gle/9TSdDYUgxYs8SA9e8
.. _`Twitter`: https://twitter.com/raydistributed