* Provide functionality for local scheduler to start new workers.
* Pass the full command for starting a new worker to the local scheduler.
* Separate out configuration state of local scheduler.
* Use the full object_info as the notification, not just the object_id
* Add a regression test for plasma managers connecting to the store after some objects have been created
* Send notifications for existing objects to new plasma subscribers (both notification changes are sketched below)
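The two notification items above go together: the store now pushes a full object_info record instead of a bare ID, and it replays a record for every object it already holds when a subscriber connects, so a late-connecting plasma manager misses nothing. A minimal sketch, with illustrative field and function names rather than the exact ones in the plasma source:

```c
#include <stdint.h>
#include <unistd.h>

#define UNIQUE_ID_SIZE 20

/* Richer notification payload: size information travels with the ID. */
typedef struct {
  unsigned char object_id[UNIQUE_ID_SIZE]; /* ID of the sealed object. */
  int64_t data_size;                       /* Size of the object payload. */
  int64_t metadata_size;                   /* Size of the metadata. */
} object_info;

/* Replay notifications for all existing objects to a new subscriber. */
void notify_subscriber_of_existing_objects(int subscriber_fd,
                                           const object_info *infos,
                                           int64_t num_objects) {
  for (int64_t i = 0; i < num_objects; ++i) {
    ssize_t nbytes = write(subscriber_fd, &infos[i], sizeof(object_info));
    (void)nbytes; /* Short writes and dead subscribers not handled here. */
  }
}
```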
* Keep retrying the request to the plasma manager instead of setting a timeout in the test case
* Use ray.services to start Redis in plasma test cases
* Fix test case
* Optimizations (see the sketch after this list):
  - Track a mapping from each missing object to its dependent tasks, to avoid iterating over the task queue
  - Perform all fetch requests for missing objects using a single shared timer
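A minimal sketch of that bookkeeping, assuming uthash (which the Ray C code uses for hash tables); the struct and helper names are illustrative:

```c
#include "uthash.h"

typedef struct task_entry {
  void *task_spec;         /* A task blocked on the object. */
  struct task_entry *next; /* Next task waiting on the same object. */
} task_entry;

typedef struct {
  unsigned char object_id[20]; /* Key: the missing object's ID. */
  task_entry *dependent_tasks; /* Tasks blocked on this object. */
  UT_hash_handle hh;           /* Makes this struct hashable by uthash. */
} fetch_request;

static fetch_request *fetch_requests = NULL; /* All missing objects. */

/* When an object arrives, look up exactly the tasks that depend on it
 * instead of scanning the whole task queue. */
fetch_request *lookup_fetch_request(const unsigned char *object_id) {
  fetch_request *req = NULL;
  HASH_FIND(hh, fetch_requests, object_id, 20, req);
  return req;
}

/* One shared periodic timer walks the table and re-issues a fetch per
 * missing object, instead of arming a separate timer per request. */
void fetch_timer_handler(void) {
  fetch_request *req, *tmp;
  HASH_ITER(hh, fetch_requests, req, tmp) {
    /* reissue_fetch(req->object_id);  (hypothetical helper) */
  }
}
```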
* Fix bug and add regression test
* Record task dependencies and active fetch requests in the same hash table
* Fix typo
* Fix memory leak and add test cases for scheduling when dependencies are evicted
* Fix python3 test case
* Minor details.
* Change plasma_get to take a timeout and an array of object IDs (new signature sketched below).
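The revised call looks roughly like this; the exact types in the plasma client may differ, so treat it as a sketch:

```c
#include <stdint.h>

typedef struct { unsigned char id[20]; } object_id;

typedef struct {
  int64_t data_size;     /* Size of the object payload. */
  uint8_t *data;         /* NULL if the object did not arrive in time. */
  int64_t metadata_size; /* Size of the metadata. */
  uint8_t *metadata;     /* Pointer to the metadata. */
} object_buffer;

/* Block for at most timeout_ms milliseconds waiting for the requested
 * objects; entries still unavailable at the deadline keep data == NULL.
 * A negative timeout can mean "block indefinitely". */
void plasma_get(void *conn,
                object_id object_ids[],
                int64_t num_objects,
                int64_t timeout_ms,
                object_buffer object_buffers[]);
```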
* Address comments.
* Bug fix related to computing object hashes.
* Add test.
* Fix file descriptor leak.
* Fix valgrind.
* Formatting.
* Remove the call to plasma_contains from the plasma client; instead, rely on the timeout internally in ray.get (retry loop sketched below).
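Building on the plasma_get sketch above, the loop that ray.get can run internally looks roughly like this (the helper and hook names are hypothetical):

```c
#define GET_TIMEOUT_MS 100 /* Illustrative polling interval. */

/* Poll with a bounded timeout and re-trigger transfers or reconstruction
 * between attempts, rather than checking plasma_contains up front and
 * then blocking without a bound. */
void get_with_retry(void *conn, object_id id, object_buffer *buffer) {
  for (;;) {
    plasma_get(conn, &id, 1, GET_TIMEOUT_MS, buffer);
    if (buffer->data != NULL) {
      return; /* The object arrived. */
    }
    /* request_transfer_or_reconstruction(id);  (hypothetical hook) */
  }
}
```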
* Small fixes
* Split the local scheduler task queue into a waiting queue and a dispatch queue (sketched below)
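A sketch of the split, using utlist-style doubly linked lists as the Ray C code does; the names and the readiness check are illustrative:

```c
#include <stdbool.h>
#include "utlist.h"

typedef struct task_queue_entry {
  void *spec;                    /* The task specification. */
  struct task_queue_entry *prev; /* utlist doubly linked list pointers. */
  struct task_queue_entry *next;
} task_queue_entry;

/* Tasks whose object dependencies are not yet local. */
static task_queue_entry *waiting_queue = NULL;
/* Tasks that are ready to run and only need a free worker. */
static task_queue_entry *dispatch_queue = NULL;

bool all_dependencies_local(void *spec); /* Hypothetical readiness check. */

/* When an object becomes local, promote newly runnable tasks; if a
 * dependency is later evicted, the same move happens in reverse. */
void update_queues_on_object_arrival(void) {
  task_queue_entry *task, *tmp;
  DL_FOREACH_SAFE(waiting_queue, task, tmp) {
    if (all_dependencies_local(task->spec)) {
      DL_DELETE(waiting_queue, task);
      DL_APPEND(dispatch_queue, task);
    }
  }
}
```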
* Fix memory leak
* Add a new task scheduling status for when a task has been queued locally (see the enum sketch below)
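Roughly what the new state looks like next to the existing ones, shown as bit flags so task table queries can filter by a mask; the names and values are illustrative, not the exact ones in the source:

```c
typedef enum {
  TASK_STATUS_WAITING   = 1,  /* Waiting to be scheduled. */
  TASK_STATUS_SCHEDULED = 2,  /* Assigned to a local scheduler. */
  TASK_STATUS_QUEUED    = 4,  /* New: queued locally, waiting on object
                                 dependencies or a free worker. */
  TASK_STATUS_RUNNING   = 8,  /* Executing on a worker. */
  TASK_STATUS_DONE      = 16, /* Finished executing. */
} task_status;
```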
* Fix global scheduler test case and add task status doc
* Documentation
* Address Philipp's comments
* Move tasks from the dispatch queue back to the waiting queue if their dependencies become unavailable
* Update existing task table entries instead of overwriting them
* Prevent plasma store and manager from dying when a worker dies.
* Check errno inside of warn_if_sigpipe. Passing errno in as an argument doesn't work because C leaves the evaluation order of function arguments unspecified, so errno may be read before the call that sets it has run (see the sketch below).
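A sketch of the fixed pattern; the message text and exact error set are illustrative, but the sequencing rule is standard C: every argument expression is fully evaluated before the function body runs, while the order among arguments is unspecified.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Broken: in `warn_if_sigpipe(write(fd, buf, n), errno);` the compiler may
 * read errno BEFORE evaluating the write() call, so the reported error is
 * garbage. Reading errno inside the callee is safe: by the time the body
 * runs, the write() in the first argument has completed and set errno. */
void warn_if_sigpipe(ssize_t status, int client_sock) {
  if (status >= 0) {
    return; /* The write succeeded. */
  }
  if (errno == EPIPE || errno == EBADF) {
    /* The client (e.g. a worker) died. Warn and keep the plasma store or
     * manager alive instead of dying with it. */
    fprintf(stderr,
            "Warning: failed to send message to client %d; the client has "
            "likely died.\n",
            client_sock);
    return;
  }
  perror("write"); /* Any other error is unexpected. */
  exit(-1);
}
```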