
Ray


Ray is an experimental distributed execution engine. It is under development and not ready to be used.

The goal of Ray is to make it easy to write machine learning applications that run on a cluster while providing the development and debugging experience of working on a single machine.

Before jumping into the details, here's a simple Python example for doing a Monte Carlo estimation of pi (using multiple cores or potentially multiple machines).

import ray
import numpy as np

# Start Ray with some workers.
ray.init(num_workers=10)

# Define a remote function for estimating pi.
@ray.remote
def estimate_pi(n):
  x = np.random.uniform(size=n)
  y = np.random.uniform(size=n)
  return 4 * np.mean(x ** 2 + y ** 2 < 1)

# Launch 10 tasks, each of which estimates pi.
result_ids = []
for _ in range(10):
  result_ids.append(estimate_pi.remote(100))

# Fetch the results of the tasks and print their average.
estimate = np.mean(ray.get(result_ids))
print("Pi is approximately {}.".format(estimate))

Within the for loop, each call to estimate_pi.remote(100) sends a message to the scheduler asking it to schedule the task of running estimate_pi with the argument 100. This call returns right away without waiting for the actual estimation of pi to take place. Instead of returning a float, it immediately returns an object ID, which represents the eventual output of the computation (this is similar to a future).

The call to ray.get(result_ids) takes a list of object IDs and returns the corresponding list of actual estimates of pi (blocking until the computations have finished if necessary).
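The future analogy can be illustrated with Python's standard concurrent.futures module (this is not Ray itself, just a sketch of the same pattern on a single machine): submit returns immediately with a Future, and result blocks until the value is ready, analogous to .remote() returning an object ID and ray.get() fetching the value.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def estimate_pi(n):
    # Fraction of random points in the unit square that fall
    # inside the quarter circle, times 4, approximates pi.
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 < 1)
    return 4 * inside / n

with ThreadPoolExecutor() as pool:
    # submit() returns Future objects immediately, like .remote().
    futures = [pool.submit(estimate_pi, 10000) for _ in range(10)]
    # result() blocks until each task finishes, like ray.get().
    estimate = sum(f.result() for f in futures) / len(futures)

print("Pi is approximately {}.".format(estimate))
```

Unlike this single-process sketch, Ray's object IDs can also be passed as arguments to other remote functions before their values are ready, which lets the scheduler build a task graph across many machines.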

Next Steps

Example Applications