.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

.. image:: https://readthedocs.org/projects/ray/badge/?version=master
    :target: http://docs.ray.io/en/master/?badge=master

.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
    :target: https://forms.gle/9TSdDYUgxYs8SA9e8

.. image:: https://img.shields.io/badge/Discuss-Ask%20Questions-blue
    :target: https://discuss.ray.io/

.. image:: https://img.shields.io/twitter/follow/raydistributed.svg?style=social&logo=twitter
    :target: https://twitter.com/raydistributed

|


**Ray provides a simple, universal API for building distributed applications.**

Ray is packaged with the following libraries for accelerating machine learning workloads:

- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `Train`_: Distributed Deep Learning (beta)
- `Datasets`_: Distributed Data Loading and Compute (beta)

As well as libraries for taking ML and distributed apps to production:

- `Serve`_: Scalable and Programmable Serving
- `Workflows`_: Fast, Durable Application Flows (alpha)

There are also many `community integrations <https://docs.ray.io/en/master/ray-libraries.html>`_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here <https://docs.ray.io/en/master/ray-libraries.html>`_.

Install Ray with: ``pip install ray``. For nightly wheels, see the
`Installation page <https://docs.ray.io/en/master/installation.html>`__.

.. _`Modin`: https://github.com/modin-project/modin
.. _`Hugging Face`: https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.hyperparameter_search
.. _`MARS`: https://docs.ray.io/en/latest/data/mars-on-ray.html
.. _`Dask`: https://docs.ray.io/en/latest/data/dask-on-ray.html
.. _`Horovod`: https://horovod.readthedocs.io/en/stable/ray_include.html
.. _`Scikit-learn`: https://docs.ray.io/en/master/joblib.html
.. _`Serve`: https://docs.ray.io/en/master/serve/index.html
.. _`Datasets`: https://docs.ray.io/en/master/data/dataset.html
.. _`Workflows`: https://docs.ray.io/en/master/workflows/concepts.html
.. _`Train`: https://docs.ray.io/en/master/train/train.html


Quick Start
-----------

Execute Python functions in parallel.

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    def f(x):
        return x * x

    futures = [f.remote(i) for i in range(4)]
    print(ray.get(futures))
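
``f.remote(i)`` returns a future (an ``ObjectRef``) immediately, and
``ray.get`` blocks until the result is ready. Futures can also be passed
directly into other tasks before they resolve. A minimal sketch, reusing the
``f`` defined above (the helper ``g`` is hypothetical):

.. code-block:: python

    @ray.remote
    def g(x):
        return x + 1

    # Pass an ObjectRef straight into another task; Ray resolves it to its
    # value (here 4) before g runs, so no intermediate ray.get is needed.
    result = g.remote(f.remote(2))
    print(ray.get(result))  # 5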

To use Ray's actor model:

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    class Counter:
        def __init__(self):
            self.n = 0

        def increment(self):
            self.n += 1

        def read(self):
            return self.n

    counters = [Counter.remote() for _ in range(4)]
    [c.increment.remote() for c in counters]
    futures = [c.read.remote() for c in counters]
    print(ray.get(futures))
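
Method calls on the same actor execute serially, in the order they were
submitted, so state updates are never lost. A minimal sketch using the
``Counter`` defined above:

.. code-block:: python

    c = Counter.remote()
    # These ten calls run one after another in the actor's own process.
    for _ in range(10):
        c.increment.remote()
    # read() is queued behind the increments, so it observes all of them.
    print(ray.get(c.read.remote()))  # 10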


Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:

``ray submit [CLUSTER.YAML] example.py --start``

Read more about `launching clusters <https://docs.ray.io/en/master/cluster/index.html>`_.
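
Within a cluster, a script attaches to the already-running Ray runtime
instead of starting a local one. A minimal sketch (``address="auto"`` is the
standard way to connect to an existing cluster from one of its nodes):

.. code-block:: python

    import ray

    # Discover and connect to the Ray runtime started by the cluster
    # launcher, rather than starting a new single-node instance.
    ray.init(address="auto")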

Tune Quick Start
----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

`Tune`_ is a library for hyperparameter tuning at any scale.

- Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
- Supports any deep learning framework, including PyTorch, `PyTorch Lightning <https://github.com/williamFalcon/pytorch-lightning>`_, TensorFlow, and Keras.
- Visualize results with `TensorBoard <https://www.tensorflow.org/tensorboard>`__.
- Choose among scalable SOTA algorithms such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, and `HyperBand/ASHA`_.
- Tune integrates with many optimization libraries such as `Facebook Ax <http://ax.dev>`_, `HyperOpt <https://github.com/hyperopt/hyperopt>`_, and `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_ and enables you to scale them transparently.

To run this example, you will need to install the following:

.. code-block:: bash

    $ pip install "ray[tune]"


This example runs a parallel grid search to optimize an example objective function.

.. code-block:: python

    from ray import tune


    def objective(step, alpha, beta):
        return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


    def training_function(config):
        # Hyperparameters
        alpha, beta = config["alpha"], config["beta"]
        for step in range(10):
            # Iterative training function - can be any arbitrary training procedure.
            intermediate_score = objective(step, alpha, beta)
            # Feed the score back to Tune.
            tune.report(mean_loss=intermediate_score)


    analysis = tune.run(
        training_function,
        config={
            "alpha": tune.grid_search([0.001, 0.01, 0.1]),
            "beta": tune.choice([1, 2, 3])
        })

    print("Best config: ", analysis.get_best_config(metric="mean_loss", mode="min"))

    # Get a dataframe for analyzing trial results.
    df = analysis.results_df

If TensorBoard is installed, you can automatically visualize all trial results:

.. code-block:: bash

    tensorboard --logdir ~/ray_results
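
The schedulers listed above plug into the same ``tune.run`` call. A hedged
sketch wiring in ASHA-based early stopping, assuming the
``training_function`` and search space defined above:

.. code-block:: python

    from ray import tune
    from ray.tune.schedulers import ASHAScheduler

    # ASHA stops poorly performing trials early, freeing their
    # resources for more promising configurations.
    analysis = tune.run(
        training_function,
        config={
            "alpha": tune.grid_search([0.001, 0.01, 0.1]),
            "beta": tune.choice([1, 2, 3]),
        },
        scheduler=ASHAScheduler(metric="mean_loss", mode="min"),
    )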

.. _`Tune`: https://docs.ray.io/en/master/tune.html
.. _`Population Based Training (PBT)`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining
.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#median-stopping-rule-tune-schedulers-medianstoppingrule
.. _`HyperBand/ASHA`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler

RLlib Quick Start
-----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/rllib/rllib-logo.png

`RLlib`_ is an industry-grade library for reinforcement learning (RL), built on top of Ray.
It offers high scalability and unified APIs for a
`variety of industry and research applications <https://www.anyscale.com/event-category/ray-summit>`_.

.. code-block:: bash

    $ pip install "ray[rllib]" tensorflow  # or torch


.. Do NOT edit the following code directly in this README! Instead, edit
    the ray/rllib/examples/documentation/rllib_on_ray_readme.py script and then
    copy the new code in here:

.. code-block:: python

    import gym
    from ray.rllib.agents.ppo import PPOTrainer


    # Define your problem using Python and OpenAI's Gym API:
    class SimpleCorridor(gym.Env):
        """Corridor in which an agent must learn to move right to reach the exit.

        ---------------------
        | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
        ---------------------

        Possible actions to choose from are: 0=left; 1=right
        Observations are floats indicating the current field index, e.g. 0.0 for
        starting position, 1.0 for the field next to the starting position, etc.
        Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
        """

        def __init__(self, config):
            # The goal (G) sits on the last field of the corridor.
            self.end_pos = config["corridor_length"] - 1
            self.cur_pos = 0
            self.action_space = gym.spaces.Discrete(2)  # left and right
            self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

        def reset(self):
            """Resets the episode and returns the initial observation of the new one.
            """
            self.cur_pos = 0
            # Return initial observation.
            return [self.cur_pos]

        def step(self, action):
            """Takes a single step in the episode given `action`

            Returns:
                New observation, reward, done-flag, info-dict (empty).
            """
            # Walk left.
            if action == 0 and self.cur_pos > 0:
                self.cur_pos -= 1
            # Walk right.
            elif action == 1:
                self.cur_pos += 1
            # Set `done` flag when end of corridor (goal) reached.
            done = self.cur_pos >= self.end_pos
            # +1.0 when the goal is reached, otherwise -0.1.
            reward = 1.0 if done else -0.1
            return [self.cur_pos], reward, done, {}


    # Create an RLlib Trainer instance.
    trainer = PPOTrainer(
        config={
            # Env class to use (here: our gym.Env sub-class from above).
            "env": SimpleCorridor,
            # Config dict to be passed to our custom env's constructor.
            "env_config": {
                # Use corridor with 20 fields (including S and G).
                "corridor_length": 20
            },
            # Parallelize environment rollouts.
            "num_workers": 3,
        })

    # Train for n iterations and report results (mean episode rewards).
    # Since we have to move at least 19 times in the env to reach the goal and
    # each move gives us -0.1 reward (except the last move at the end: +1.0),
    # we can expect to reach an optimal episode reward of -0.1*18 + 1.0 = -0.8
    for i in range(5):
        results = trainer.train()
        print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")


After training, you may want to perform action computations (inference) in your environment.
Here is a minimal example of how to do this. Also
`check out our more detailed examples here <https://github.com/ray-project/ray/tree/master/rllib/examples/inference_and_serving>`_
(in particular for `normal models <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training.py>`_,
`LSTMs <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_lstm.py>`_,
and `attention nets <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_attention.py>`_).

.. code-block:: python

    # Perform inference (action computations) based on given env observations.
    # Note that we are using a slightly different env here (length 10 instead
    # of 20); however, this should still work, as the agent has (hopefully)
    # learned to "just always walk right!"
    env = SimpleCorridor({"corridor_length": 10})
    # Get the initial observation (should be: [0.0] for the starting position).
    obs = env.reset()
    done = False
    total_reward = 0.0
    # Play one episode.
    while not done:
        # Compute a single action, given the current observation
        # from the environment.
        action = trainer.compute_single_action(obs)
        # Apply the computed action in the environment.
        obs, reward, done, info = env.step(action)
        # Sum up rewards for reporting purposes.
        total_reward += reward
    # Report results.
    print(f"Played 1 episode; total-reward={total_reward}")


.. _`RLlib`: https://docs.ray.io/en/master/rllib/index.html


Ray Serve Quick Start
---------------------

.. image:: https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg
  :width: 400

`Ray Serve`_ is a scalable model-serving library built on Ray. It is:

- Framework Agnostic: Use the same toolkit to serve everything from deep
  learning models built with frameworks like PyTorch, TensorFlow, and Keras
  to scikit-learn models or arbitrary business logic.
- Python First: Configure your model serving declaratively in pure Python,
  without needing YAML or JSON configs.
- Performance Oriented: Turn on batching, pipelining, and GPU acceleration to
  increase the throughput of your model.
- Composition Native: Allows you to create "model pipelines" by composing multiple
  models together to drive a single prediction.
- Horizontally Scalable: Serve scales linearly as you add more machines, enabling
  your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

.. code-block:: bash

    $ pip install scikit-learn
    $ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

.. code-block:: python

    import requests

    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier

    from ray import serve

    serve.start()

    # Train model.
    iris_dataset = load_iris()
    model = GradientBoostingClassifier()
    model.fit(iris_dataset["data"], iris_dataset["target"])

    @serve.deployment(route_prefix="/iris")
    class BoostingModel:
        def __init__(self, model):
            self.model = model
            self.label_list = iris_dataset["target_names"].tolist()

        async def __call__(self, request):
            payload = (await request.json())["vector"]
            print(f"Received HTTP request with data {payload}")

            prediction = self.model.predict([payload])[0]
            human_name = self.label_list[prediction]
            return {"result": human_name}


    # Deploy model.
    BoostingModel.deploy(model)

    # Query it!
    sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
    response = requests.get("http://localhost:8000/iris", json=sample_request_input)
    print(response.text)
    # Result:
    # {
    #  "result": "versicolor"
    # }
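
Scaling a deployment out is a configuration change rather than a rewrite;
``num_replicas`` is the standard deployment option for this. A minimal
sketch, reusing the ``BoostingModel`` class from above:

.. code-block:: python

    # Run four replicas of the deployment; Serve load-balances incoming
    # HTTP requests across them.
    BoostingModel.options(num_replicas=4).deploy(model)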


.. _`Ray Serve`: https://docs.ray.io/en/master/serve/index.html

More Information
----------------

- `Documentation`_
- `Tutorial`_
- `Blog`_
- `Ray 1.0 Architecture whitepaper`_ **(new)**
- `Ray Design Patterns`_ **(new)**
- `RLlib paper`_
- `RLlib flow paper`_
- `Tune paper`_

*Older documents:*

- `Ray paper`_
- `Ray HotOS paper`_

.. _`Documentation`: http://docs.ray.io/en/master/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog`: https://medium.com/distributed-computing-with-ray
.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
.. _`Ray Design Patterns`: https://docs.google.com/document/d/167rnnDFIVRhHhK4mznEIemOtj63IOhtIPvSYaPgI4Fg/edit
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
.. _`RLlib paper`: https://arxiv.org/abs/1712.09381
.. _`RLlib flow paper`: https://arxiv.org/abs/2011.12719
.. _`Tune paper`: https://arxiv.org/abs/1807.05118

Getting Involved
----------------

- `Forum`_: For discussions about development, questions about usage, and feature requests.
- `GitHub Issues`_: For reporting bugs.
- `Twitter`_: Follow updates on Twitter.
- `Slack`_: Join our Slack channel.
- `Meetup Group`_: Join our meetup group.
- `StackOverflow`_: For questions about how to use Ray.

.. _`Forum`: https://discuss.ray.io/
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
.. _`Twitter`: https://twitter.com/raydistributed
.. _`Slack`: https://forms.gle/9TSdDYUgxYs8SA9e8