* Refactor local scheduler to remove worker indices.
* Change scheduling state enum to int in all function signatures.
* Bug fix, don't use pointers into a resizable array.
* Remove total_num_workers.
* Fix tests.
* start updating cluster documentation with parallel ssh
* add using ray on a large cluster
* revert changes to using ray on a cluster
* update cluster documentation
* update title
* Some formatting changes, and added some notes.
* clarification
* Add warning about public versus private IP addresses.
* Typos and wording.
* Clarifications.
* Clarifications.
* First pass at reconstruction in the worker
* Modify reconstruction stress testing to start the Plasma service before the rest of the Ray cluster
* Add a TODO about reconstructing objects created with ray.put
* Fix ray.put error for double creates
* Distinguish between an empty entry and no entry in the object table
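The empty-versus-missing distinction matters for reconstruction: an entry with no locations means the object was created and then lost, while no entry at all means the object ID is unknown. A minimal sketch of the three-way lookup result (the enum and names are illustrative, not the actual Redis-backed lookup):

```c
#include <stdio.h>

/* Illustrative three-way result of an object table lookup; the real
 * lookup runs against the Redis object table. */
typedef enum {
  OBJECT_NOT_FOUND,    /* no entry: this object ID was never created */
  OBJECT_NO_LOCATIONS, /* empty entry: created, but no live copies remain */
  OBJECT_AVAILABLE     /* at least one plasma manager has the object */
} object_lookup_status;

/* Only a lost-but-known object should trigger reconstruction; an unknown
 * ID is an error rather than something to reconstruct. */
void handle_lookup(object_lookup_status status) {
  switch (status) {
  case OBJECT_NOT_FOUND:
    fprintf(stderr, "error: unknown object ID\n");
    break;
  case OBJECT_NO_LOCATIONS:
    printf("object lost; triggering reconstruction\n");
    break;
  case OBJECT_AVAILABLE:
    printf("object available; fetching\n");
    break;
  }
}

int main(void) {
  handle_lookup(OBJECT_NO_LOCATIONS);
  return 0;
}
```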
* Fix test case
* Fix Python test
* Fix tests
* Only call reconstruct on objects we have not yet received
* Address review comments
* Fix reconstruction for Python3
* remove unused code
* Address Robert's comments, stress tests are crashing
* Test and update the task's scheduling state to suppress duplicate reconstruction requests.
* Split the result table into two lookups: one for the task ID, and the other a test-and-set for the task state
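Why a test-and-set suppresses duplicate reconstruction: the state update succeeds only if the task is still in the state the requester observed, so just one of several concurrent requests takes effect. A minimal sketch with illustrative state names; in the real code the operation runs atomically inside a Redis module:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative scheduling states; not the actual Ray enum values. */
typedef enum { TASK_WAITING, TASK_RUNNING, TASK_DONE } task_state;

/* Flip the state only if it still matches what the caller expects. Run
 * atomically (e.g. inside a Redis module), two concurrent reconstruction
 * requests cannot both win. */
bool task_state_test_and_set(task_state *state, task_state expected,
                             task_state desired) {
  if (*state != expected) {
    return false; /* another request already updated the state */
  }
  *state = desired;
  return true;
}

int main(void) {
  task_state s = TASK_DONE;
  /* The first reconstruction request resets the task to WAITING... */
  printf("%d\n", task_state_test_and_set(&s, TASK_DONE, TASK_WAITING));
  /* ...and a duplicate request fails the test and is suppressed. */
  printf("%d\n", task_state_test_and_set(&s, TASK_DONE, TASK_WAITING));
  return 0;
}
```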
* Fix object table tests
* Fix redis module result_table_lookup test case
* Multinode reconstruction tests
* Fix python3 test case
* rename
* Use new start_redis
* Remove unused code
* lint
* indent
* Address Robert's comments
* Use start_redis from ray.services in state table tests
* Remove unnecessary memset
* Added test for retrieving variables from an optimizer
* Added comments to test
* Addressed comments
* Fixed Travis bug
* Added fix for circular controls
* Added set for explored operations and duplicate prefix stripping
* Removed embedded IPython
* Removed prefix; use a separate graph for each network
* Removed redundant imports
* Addressed comments and added separate graph to initializer
* fix typos
* get rid of prefix in documentation
* Provide functionality for local scheduler to start new workers.
* Pass full command for starting new worker in to local scheduler.
* Separate out configuration state of local scheduler.
* Use object_info as notification, not just the object_id
* Add a regression test for plasma managers connecting to store after some objects have been created
* Send notifications for existing objects to new plasma subscribers
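Publishing the full object_info rather than a bare ID lets a new subscriber learn an object's size and hash without a second round trip, which is what makes replaying notifications for already-existing objects useful. A rough sketch of the shape of such a payload; the field names here are assumptions, not the exact plasma structs:

```c
#include <stdint.h>

/* Fixed-width object ID, matching plasma's 20-byte IDs. */
typedef struct { uint8_t id[20]; } object_id;

/* Hypothetical notification payload: carry the metadata a subscriber
 * would otherwise have to fetch separately after seeing the ID. */
typedef struct {
  object_id obj_id;
  int64_t data_size;     /* size of the object body in bytes */
  int64_t metadata_size; /* size of the attached metadata in bytes */
  uint8_t digest[20];    /* content hash, used to detect nondeterminism */
} object_info;
```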
* Keep retrying the request to the plasma manager instead of setting a timeout in the test case
* Use ray.services to start Redis in plasma test cases
* fix test case
* Optimizations:
- Track mapping of missing object to dependent tasks to avoid iterating over task queue
- Perform all fetch requests for missing objects using the same timer
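The point of the mapping: when a missing object arrives, the scheduler looks up exactly which tasks were blocked on it instead of scanning every queued task, and a single timer can walk the same table to reissue fetches. A toy sketch with a linear array standing in for the real hash table:

```c
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 64
#define MAX_DEPENDENTS 8

/* Toy map from a missing object ID to the tasks waiting on it; bounds
 * checks are omitted to keep the sketch short. */
typedef struct {
  char object_id[32];
  int dependent_tasks[MAX_DEPENDENTS]; /* indices into the task queue */
  int num_dependents;
} missing_object_entry;

static missing_object_entry entries[MAX_ENTRIES];
static int num_entries = 0;

void record_dependency(const char *object_id, int task_index) {
  for (int i = 0; i < num_entries; ++i) {
    if (strcmp(entries[i].object_id, object_id) == 0) {
      entries[i].dependent_tasks[entries[i].num_dependents++] = task_index;
      return;
    }
  }
  /* First task waiting on this object: add a new entry. A single fetch
   * timer can walk `entries` to re-request every missing object. */
  missing_object_entry *e = &entries[num_entries++];
  strcpy(e->object_id, object_id);
  e->dependent_tasks[0] = task_index;
  e->num_dependents = 1;
}

/* Called when the object becomes local: only the dependent tasks are
 * touched, instead of iterating over the entire task queue. */
void object_arrived(const char *object_id) {
  for (int i = 0; i < num_entries; ++i) {
    if (strcmp(entries[i].object_id, object_id) == 0) {
      for (int j = 0; j < entries[i].num_dependents; ++j)
        printf("task %d is ready\n", entries[i].dependent_tasks[j]);
      entries[i] = entries[--num_entries]; /* remove the entry */
      return;
    }
  }
}

int main(void) {
  record_dependency("obj-1", 7);
  record_dependency("obj-1", 9);
  object_arrived("obj-1"); /* reports tasks 7 and 9, nothing else scanned */
  return 0;
}
```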
* Fix bug and add regression test
* Record task dependencies and active fetch requests in the same hash table
* fix typo
* Fix memory leak and add test cases for scheduling when dependencies are evicted
* Fix python3 test case
* Minor details.
* Change plasma_get to take a timeout and an array of object IDs.
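For illustration, the reshaped call might look roughly like the prototype below; the parameter names and the object_buffer layout are assumptions rather than the verbatim plasma API:

```c
#include <stdint.h>

typedef struct { uint8_t id[20]; } object_id;
typedef struct plasma_connection plasma_connection;

/* Hypothetical per-object result; a NULL data pointer would mean the
 * object was still unavailable when the timeout expired. */
typedef struct {
  uint8_t *data;
  int64_t data_size;
  uint8_t *metadata;
  int64_t metadata_size;
} object_buffer;

/* Sketch of the revised interface: one call takes an array of IDs plus a
 * timeout in milliseconds, instead of blocking forever on a single ID. */
void plasma_get(plasma_connection *conn, object_id object_ids[],
                int64_t num_objects, uint64_t timeout_ms,
                object_buffer object_buffers[]);
```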
* Address comments.
* Bug fix related to computing object hashes.
* Add test.
* Fix file descriptor leak.
* Fix valgrind.
* Formatting.
* Remove call to plasma_contains from the plasma client. Use timeout internally in ray.get.
* small fixes
* Split local scheduler task queue into waiting and dispatch queue
* Fix memory leak
* Add a new task scheduling status for when a task has been queued locally
* Fix global scheduler test case and add task status doc
* Documentation
* Address Philipp's comments
* Move tasks back to the waiting queue if their dependencies become unavailable
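The split separates "not yet runnable" from "runnable but unassigned": a task sits in the waiting queue until its object dependencies are local, moves to the dispatch queue once they are, and (per the item above) moves back if a dependency is evicted. A schematic sketch of the two queues:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct task {
  int id;
  bool dependencies_local; /* are all object arguments in the local store? */
  struct task *next;
} task;

/* Two queues instead of one: waiting = dependencies not yet local,
 * dispatch = runnable, waiting only for a free worker. */
static task *waiting_queue = NULL;
static task *dispatch_queue = NULL;

static void push(task **queue, task *t) {
  t->next = *queue;
  *queue = t;
}

/* Move every task whose dependencies are now local into the dispatch
 * queue; the reverse move happens when a dependency is evicted. */
void update_queues(void) {
  task **cursor = &waiting_queue;
  while (*cursor != NULL) {
    task *t = *cursor;
    if (t->dependencies_local) {
      *cursor = t->next; /* unlink from the waiting queue */
      push(&dispatch_queue, t);
    } else {
      cursor = &t->next;
    }
  }
}

int main(void) {
  task t1 = {1, false, NULL}, t2 = {2, true, NULL};
  push(&waiting_queue, &t1);
  push(&waiting_queue, &t2);
  update_queues();
  printf("dispatchable task: %d\n", dispatch_queue->id); /* prints 2 */
  return 0;
}
```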
* Update existing task table entries instead of overwriting
* Prevent plasma store and manager from dying when a worker dies.
* Check errno inside warn_if_sigpipe. Passing errno in as an argument doesn't work because the arguments to warn_if_sigpipe may be evaluated in any order.
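The subtlety: C leaves the evaluation order of function arguments unspecified, so in a call like warn_if_sigpipe(write(fd, buf, n), errno) the errno argument may be read before write() has run and set it. Reading errno inside the callee avoids this. An illustrative sketch with a simplified signature, not the exact Ray function:

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* BROKEN pattern:   warn_if_sigpipe(write(fd, buf, n), errno);
 * The compiler may evaluate `errno` before the write() call, capturing a
 * stale value. */

/* Fixed pattern: pass only the status and read errno inside the function,
 * after the call that set it has definitely executed. */
void warn_if_sigpipe(ssize_t status) {
  if (status >= 0) {
    return;
  }
  if (errno == EPIPE) {
    fprintf(stderr, "warning: EPIPE, the other end probably died\n");
  } else {
    perror("write failed");
  }
}
```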
* Remove start_ray_local from ray.init and change default number of workers to 10.
* Remove alexnet example.
* Move array methods to experimental.
* Remove TRPO example.
* Remove old files.
* Compile plasma when we build numbuf.
* Address comments.
* Updated code to mesh with get_weights returning a dict and new TF code
* Added tf.global_variables_initializer to the hyperopt example as well
* Small fix.
* Small name change.
* Added helper class for getting TF variables from a loss function
* Updated usage and documentation
* Removed try-catches
* Added futures
* Added documentation
* fixes and tests
* more tests
* install TensorFlow on Travis
* check available shared memory when starting the object store
* exit with an error if not enough shared memory is available for the object store
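On Linux the object store's memory-mapped files live under /dev/shm, so the space available there bounds the store's capacity; checking it up front turns a confusing later failure into an immediate error. A sketch of such a check using statvfs; the threshold logic is illustrative:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>

/* Exit with an error if /dev/shm cannot hold the requested store size.
 * Illustrative only; macOS backs plasma's memory differently. */
void check_shared_memory(int64_t requested_bytes) {
  struct statvfs stats;
  if (statvfs("/dev/shm", &stats) != 0) {
    perror("statvfs");
    exit(EXIT_FAILURE);
  }
  int64_t available = (int64_t)stats.f_bavail * (int64_t)stats.f_frsize;
  if (available < requested_bytes) {
    fprintf(stderr,
            "error: object store needs %lld bytes but /dev/shm has %lld\n",
            (long long)requested_bytes, (long long)available);
    exit(EXIT_FAILURE);
  }
}
```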
* Some comments and formatting.