Ray
Ray is an experimental distributed extension of Python. It is under development and not ready to be used.
The goal of Ray is to make it easy to write machine learning applications that run on a cluster while providing the development and debugging experience of working on a single machine.
Before jumping into the details, here's a simple Python example for doing a Monte Carlo estimation of pi (using multiple cores or potentially multiple machines).
```python
import ray
import numpy as np

# Start a scheduler, an object store, and some workers.
ray.init(start_ray_local=True, num_workers=10)

# Define a remote function for estimating pi.
@ray.remote
def estimate_pi(n):
  x = np.random.uniform(size=n)
  y = np.random.uniform(size=n)
  return 4 * np.mean(x ** 2 + y ** 2 < 1)

# Launch 10 tasks, each of which estimates pi.
result_ids = []
for _ in range(10):
  result_ids.append(estimate_pi.remote(100))

# Fetch the results of the tasks and print their average.
estimate = np.mean(ray.get(result_ids))
print("Pi is approximately {}.".format(estimate))
```
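The estimator works because a point (x, y) drawn uniformly from the unit square lands inside the quarter disk x² + y² < 1 with probability π/4, so four times the observed fraction approximates pi. A serial NumPy sketch of the same computation (no Ray required, larger sample size for a tighter estimate):

```python
import numpy as np

# Fixed seed so the sketch is reproducible.
rng = np.random.RandomState(0)
n = 1_000_000

# Sample n points uniformly in the unit square.
x = rng.uniform(size=n)
y = rng.uniform(size=n)

# Fraction of points inside the quarter disk, scaled by 4.
est = 4 * np.mean(x ** 2 + y ** 2 < 1)
# est is close to pi for large n.
```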
Within the for loop, each call to `estimate_pi.remote(100)` sends a message to the scheduler asking it to schedule the task of running `estimate_pi` with the argument `100`. This call returns right away without waiting for the actual estimation of pi to take place. Instead of returning a float, it returns an object ID, which represents the eventual output of the computation (this is similar to a Future).
The call to `ray.get(result_ids)` takes a list of object IDs and returns the actual estimates of pi (waiting until the computations have finished if necessary).
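The Future analogy can be made concrete with Python's standard-library `concurrent.futures`, used here purely as a single-machine stand-in for Ray's object IDs (the `slow_square` function is a hypothetical placeholder task): `submit` returns a handle immediately, and calling `result()` on the handle blocks until the value is ready, just as `ray.get` does for an object ID.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Simulate a task that takes a while to finish.
    time.sleep(0.1)
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns immediately with Future objects,
    # analogous to estimate_pi.remote(...) returning object IDs.
    futures = [pool.submit(slow_square, i) for i in range(4)]

    # result() blocks until each computation finishes,
    # analogous to ray.get(result_ids).
    results = [f.result() for f in futures]
```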