* Ignore deleted clients when reading address info from Redis
* Remove self from db_client table when exiting cleanly
* Fix valgrind test
* Do not call plasma_perform_release when disconnecting
* Fix worker blocked bug
* Temporary commit
* Push an error to the driver on ray.put for non-driver tasks
* Fix result table tests
* Fix test, logging
* Address comments
* Fix suppression bug
* Fix redis module test
* Edit error message
* Get values in chunks during reconstruction
* Test case for driver ray.put errors
* Error for evicting ray.put objects from the driver
* Fix tests
* Reduce verbosity
* Documentation
* Failing test case
* Local scheduler exits cleanly after plasma store dies
* Tolerate one plasma store failure
* Tolerate plasma store failures on all nodes except head node
* Plasma manager heartbeats
* Component failure tests
* Don't run the helper for Python testing
* Fix C test
* Fix hanging plasma transfer test
* Fix Python 3
* Consolidate ClientConnection code
* Fix valgrind test
* Fix C test
* We can restart worker nodes!
* Fix flatbuffers bug
* Address comments
* Only register actual workers with the local scheduler
* Fix bug
* Fix segfaults
* Add test case that tests for driver liveness, fix local scheduler bug
* Clean up after tests
* Allocate retry info on the stack
* Send SIGKILL before waiting (see the sketch below)
* Relax unit test conditions
* Driver liveness test case and documentation
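The SIGKILL-then-wait ordering above matters because waiting first can block forever on a worker that never exits on its own. A minimal Python sketch of the pattern, with a `sleep` child standing in for a worker process:

```python
import os
import signal
import subprocess

# Stand-in for a worker process that will not exit on its own.
proc = subprocess.Popen(["sleep", "60"])

# Deliver SIGKILL first, then reap the exit status. Calling waitpid()
# before the kill would block until the child exited voluntarily.
os.kill(proc.pid, signal.SIGKILL)
os.waitpid(proc.pid, 0)
```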
* Start process for monitoring log files and push changes to redis.
* Display log files in UI.
* Bug fix for recent tasks.
* Use flatbuffers to parse local scheduler heartbeats.
* Compile the Ray redis module with C++.
* Redo parsing of object table notifications with flatbuffers.
* Update redis module python tests.
* Redo parsing of task table notifications with flatbuffers.
* Fix linting.
* Redo parsing of db client notifications with flatbuffers.
* Redo publishing of local scheduler heartbeats with flatbuffers.
* Fix linting.
* Remove usage of fixed-width formatting of scheduling state in channel name.
* Reply with flatbuffer object to task table queries, also simplify redis string to flatbuffer string conversion.
* Fix linting and tests.
* Fix
* Cleanup
* Simplify logic in ReplyWithTask
* Initial conversion
* Further changes
* Fixes
* Some changes
* Fixes
* Added data pipeline
* Added updates to cifar
* Currently broken; needs a separate PR
* Added test for retrieving variables from an optimizer
* Removed FLAG reference in environment variables
* Added comments to test
* Addressed comments
* Added updates
* Made further changes for tfutils
* Fixed finalized bug
* Removed IPython
* Added accuracy printing
* Temp commit
* Added fixes
* Changes
* Added writing to file
* Fixes for GPUs
* Cleaned up code
* Temp commit
* GPU support fully implemented
* Updated to use num_gpus for actors
* Finished testing the GPU implementation
* Changed to be more in line with the original implementation
* Updated test to use actors
* Added support for CPU-only systems
* Now works with no CPUs
* Minor changes and some documentation.
* WARN instead of FATAL for object hash mismatches, push error to driver
* Document the callback signature for object_table_add/remove
* Error table
* Wait for all errors in python test
* Fix doc
* Fix state test
* Clean up plasma subscribers on EPIPE
* First pass at a monitoring script: monitor can detect local scheduler death
* Clean up task table upon local scheduler death in monitoring script
* Don't schedule to dead local schedulers in global scheduler
* Have global scheduler update the db clients table, monitor script cleans up state
* Documentation
* Monitor script should scan tables before beginning to read from subscription channel
* Fix for Python 3
* Redirect monitor output to redis logs, fix hanging in multinode tests
* Publish auxiliary addresses as part of db_client deletion notifications
* Fix test case?
* Small changes.
* Use SCAN instead of KEYS (see the sketch below)
* Address comments
* Address more comments
* Free redis module strings
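The SCAN change above matters because KEYS walks the entire keyspace in one call and blocks the Redis server while doing so. A hedged redis-py sketch of the cursor-based alternative; the `db_clients:*` pattern and the per-key handling are illustrative, not the actual key layout:

```python
import redis

r = redis.Redis()

# Blocking, all-at-once version:
#   keys = r.keys("db_clients:*")

# SCAN returns a cursor plus one batch of keys, so other clients can
# make progress between batches.
cursor = 0
while True:
    cursor, batch = r.scan(cursor, match="db_clients:*")
    for key in batch:
        print(key)  # stand-in for real per-key cleanup
    if cursor == 0:
        break
```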
* Availability after a killed worker
* Workers exit cleanly
* Memory cleanup in photon C tests
* Worker failure in multinode
* Consolidate worker cleanup handlers
* Update the result table before handling a task submission
* KILL_WORKER_TIMEOUT -> KILL_WORKER_TIMEOUT_MILLISECONDS
* Log a warning instead of crashing if no result table entry found
* Add num_cpus and num_gpus to actor decorator.
* Assign GPU IDs to actors (usage sketch below).
* Add additional actor test.
* Remove duplicated line.
* Factor out local scheduler selection method.
* Add test and simplify local scheduler selection.
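A plausible usage sketch for the actor GPU support in this group; the `Trainer` class is invented, but `@ray.remote(num_gpus=...)` and `ray.get_gpu_ids()` are the interfaces involved:

```python
import ray

ray.init(num_gpus=2)

# num_gpus reserves GPUs for each actor; the local scheduler assigns
# concrete device IDs, which the actor can read back.
@ray.remote(num_gpus=1)
class Trainer(object):
    def gpus(self):
        return ray.get_gpu_ids()

a, b = Trainer.remote(), Trainer.remote()
print(ray.get([a.gpus.remote(), b.gpus.remote()]))  # e.g. [[0], [1]]
```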
* First pass at a policy to solve deadlock
* Address Robert's comments
* Stress test
* Unit test
* Fix test cases
* Fix test for Python 3
* Add more logging
* White space.
* Remove import counter and export counter.
* Provide isolation between drivers for remote functions.
* Add test for driver function isolation.
* Hash source code into the function ID to reduce the likelihood of collisions (see the sketch below).
* Fix failure test example.
* Replace assertTrue with assertIn to improve failure messages in tests.
* Fix failure test.
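A minimal sketch of the hashing idea from this group: mixing the function's source text (not just its name) into the ID means two different functions that share a name get different IDs, and mixing in the driver ID isolates drivers from one another. The exact fields Ray hashes are not shown; `driver_id` is an assumed input:

```python
import hashlib
import inspect

def compute_function_id(driver_id: bytes, func) -> bytes:
    h = hashlib.sha1()
    h.update(driver_id)               # isolates drivers from each other
    h.update(func.__name__.encode())
    try:
        # Hashing the source distinguishes same-named functions.
        h.update(inspect.getsource(func).encode())
    except (OSError, TypeError):
        pass  # source unavailable, e.g. for functions defined in a REPL
    return h.digest()
```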
* Implement actor field for tasks
* Implement actor management in local scheduler.
* Initial Python frontend for actors
* Import actors on the worker
* IPython code completion and tests
* Prepare for creating actors through local schedulers
* Add actor ID to PyTask
* Submit actor calls to the local scheduler
* Starting to integrate
* Simple fix
* Fixes from rebasing.
* More work on Python actors
* Improve local scheduler actor handlers.
* Pass actor ID to local scheduler when connecting a client.
* First working version of actors
* Fixing actors
* Fix creating two copies of the same actor
* Fix actors
* Remove sleep
* Get rid of export synchronization
* Update
* Insert actor methods into the queue in the right order
* Remove print statements
* Make it compile again after rebase
* Minor updates.
* Fix Python actor IDs
* Pass actor_id to start_worker.
* Add test
* Minor changes.
* Update actor tests.
* Temporary plan for import counter.
* Temporarily fix import counters.
* Fix some tests.
* Fixes.
* Make actor creation non-blocking.
* Fix test?
* Fix actors on Python 2.
* Fix rare case.
* Fix Python 2 test.
* More tests.
* Small fixes.
* Linting.
* Revert tensorflow version to 0.12.0 temporarily.
* Small fix.
* Enhance inheritance test.
* Attempt to start web UI when starting Ray.
* Add instructions for using web UI to cluster documentation.
* Don't check if port 8080 is open.
* Remove print statement.
* Start and clean up workers from the local scheduler
* Ability to kill workers in photon scheduler
* Test for old method of starting workers
* Common codepath for killing workers
* Photon test case for starting and killing workers
* Fix build
* Fix component failure test
* Register a worker's pid as part of initial connection
* Address comments and revert photon_connect
* Set PATH during Travis install
* Fix
* Fix photon test case to accept clients on plasma manager fd
* Attribute-based heterogeneity awareness in the global scheduler and photon
* Minor post-rebase fix
* photon: enforce dynamic capacity constraint on task dispatch
* globalsched: cap the number of times we try to schedule a task in round robin
* Propagating the ability to specify resource capacity to ray.init
* Adding resources to remote function export and fetch/register
* globalsched: remove unused functions; update cached photon resource capacity (until next photon heartbeat)
* Add some integration tests.
* globalsched: cleanup + factor out constraint checking
* Lots of style fixes
* task_spec_required_resource: global refactor
* clang-format
* clang-format + comment update in photon
* clang-format photon comment
* Valgrind fixes
* Reduce verbosity for Travis
* Add test for scheduler load balancing.
* Addressing comments
* Refactoring the global scheduler algorithm
* Minor cleanups.
* Linting.
* Fix array_test.py and linting.
* Valgrind fix for photon tests
* Attempt to fix stress tests.
* Fix hashmap free
* Fix hashmap free comment
* memset photon resource vectors to 0 in case they get used before the first heartbeat
* More whitespace changes.
* Undo whitespace error I introduced.
* Added an example for computing gradients in Ray
* Added formatting
* Removed need for placeholders in apply gradient
* Streamlined examples
* Fixed docs
* Added formatting
* Removed old references
* Simplified the code somewhat
* Addressed comments
* Changes to first code block
* Added test for training and updated code snippets
* Formatting
* Removed mean
* Removed all mention of mean
* Added comments
* First pass at reconstruction in the worker
* Modify reconstruction stress testing to start the Plasma service before the rest of the Ray cluster
* TODO about reconstructing ray.puts
* Fix ray.put error for double creates
* Distinguish between empty entry and no entry in the object table (see the sketch below)
* Fix test case
* Fix Python test
* Fix tests
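The empty-versus-missing distinction above is easy to get wrong, so here is one plausible reading sketched with a dict standing in for the Redis object table: an absent key means the object was never created (keep waiting), while a present key with an empty location set means it was created and later evicted (reconstruction is warranted). The actual table layout in Ray may differ:

```python
# Stand-in for the object table: object ID -> set of nodes currently
# holding a copy of the object.
object_table = {}

object_table[b"obj1"] = set()  # created once, now evicted everywhere

def lookup(object_id):
    if object_id not in object_table:
        return None                  # no entry: never created yet
    return object_table[object_id]   # possibly an empty set of locations

assert lookup(b"obj0") is None       # still being computed: wait
assert lookup(b"obj1") == set()      # created then evicted: reconstruct
```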
* Only call reconstruct on objects we have not yet received
* Address review comments
* Fix reconstruction for Python 3
* Remove unused code
* Address Robert's comments, stress tests are crashing
* Test and update the task's scheduling state to suppress duplicate reconstruction requests
* Split result table into two lookups: one for the task ID and the other as a test-and-set for the task state
* Fix object table tests
* Fix redis module result_table_lookup test case
* Multinode reconstruction tests
* Fix Python 3 test case
* Rename
* Use new start_redis
* Remove unused code
* Lint
* Indent
* Address Robert's comments
* Use start_redis from ray.services in state table tests
* Remove unnecessary memset
* Added test for retrieving variables from an optimizer
* Added comments to test
* Addressed comments
* Fixed Travis bug
* Added fix to circular controls
* Added set for explored operations and duplicate prefix stripping
* Removed embedded IPython
* Removed prefix; use a separate graph for each network
* Removed redundant imports
* Addressed comments and added separate graph to initializer
* Fix typos
* Get rid of prefix in documentation
* Provide functionality for the local scheduler to start new workers (sketch below).
* Pass the full command for starting a new worker to the local scheduler.
* Separate out configuration state of local scheduler.
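A hedged sketch of the worker-start mechanism in this group: the full worker command is passed in when the local scheduler starts, held as configuration, and used to fork workers on demand. The command line shown is invented:

```python
import subprocess

# Captured at local scheduler start-up: the complete command needed
# to launch one worker (flags are illustrative only).
worker_command = ["python", "default_worker.py",
                  "--redis-address", "127.0.0.1:6379"]

def start_worker():
    # Fork a new worker on demand, e.g. to replace one that died.
    return subprocess.Popen(worker_command)
```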
* Use object_info as notification, not just the object_id
* Add a regression test for plasma managers connecting to store after some objects have been created
* Send notifications for existing objects to new plasma subscribers
* Continuously try the request to the plasma manager instead of setting a timeout in the test case
* Use ray.services to start Redis in plasma test cases
* Fix test case
* Optimizations (see the sketch at the end of this group):
  - Track the mapping from missing objects to dependent tasks, to avoid iterating over the task queue
  - Perform all fetch requests for missing objects using the same timer
* Fix bug and add regression test
* Record task dependencies and active fetch requests in the same hash table
* Fix typo
* Fix memory leak and add test cases for scheduling when dependencies are evicted
* Fix Python 3 test case
* Minor details.
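A sketch of the two optimizations in this group, under assumed names (`dispatch` and `request_fetch` are stubs): a reverse index from each missing object to the tasks blocked on it avoids rescanning the whole task queue when an object arrives, and a single shared timer re-requests all missing objects instead of running one timer per object. The real scheduler dispatches a task only once all of its dependencies are local; that check is elided here:

```python
import threading

def dispatch(task):            # stub: hand a now-runnable task to the scheduler
    print("dispatch", task)

def request_fetch(object_id):  # stub: ask a plasma manager for the object
    print("fetch", object_id)

# Reverse index: missing object ID -> tasks blocked on it.
missing_objects = {}

def task_blocked(task, object_id):
    missing_objects.setdefault(object_id, []).append(task)

def object_available(object_id):
    # Release the dependents directly instead of scanning the task queue.
    for task in missing_objects.pop(object_id, []):
        dispatch(task)

def fetch_timer():
    # One periodic timer covers every outstanding fetch request.
    for object_id in list(missing_objects):
        request_fetch(object_id)
    threading.Timer(0.1, fetch_timer).start()
```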