* [xray] Throttle task dispatch by required resources
* Pass the number of initial workers into the raylet command
* Workers blocked in a ray.get release resources
* separate task placement and task dispatch; throttle task dispatch with locally available resources
* keep track of workers being started/in flight and suppress starting extraneous workers
* cleanup comments
* remove early termination in task dispatch to support zero-resource actor tasks
* info -> debug
* add documentation
* linting
* mock the worker pool for testing
* some linting
* kill all workers in flight; clear the worker pool in dtor
* remove fixed todo
* lint
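
The dispatch changes above can be summarized in a short sketch. All names here (`required_resources`, `pop_idle_worker`, `num_starting_workers`, `start_worker`, `assign_task`) are illustrative stand-ins, not the raylet's actual C++ API: placement has already assigned each task to this node, dispatch only runs a task when local resources allow, the loop never terminates early (so zero-resource actor tasks still get dispatched), and workers already in flight suppress extra spawns:

```python
def dispatch_tasks(queued_tasks, available_resources, worker_pool):
    """Illustrative dispatch loop: placement has already happened; here we
    only dispatch tasks whose resource demands fit what is locally free."""
    for task in list(queued_tasks):
        required = task.required_resources  # e.g. {"CPU": 1.0}
        # Do not break out early: a zero-resource actor task later in the
        # queue must still get a chance to dispatch.
        if any(available_resources.get(res, 0.0) < amt
               for res, amt in required.items()):
            continue
        worker = worker_pool.pop_idle_worker()
        if worker is None:
            # Count workers already starting/in flight so we do not spawn
            # extraneous processes for the same backlog.
            if worker_pool.num_starting_workers() == 0:
                worker_pool.start_worker()
            continue
        # Resources are acquired at dispatch time and released again when
        # the worker blocks in a ray.get.
        for res, amt in required.items():
            available_resources[res] -= amt
        queued_tasks.remove(task)
        worker.assign_task(task)
```
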
* eval now works without assignment; helper function is a bit hacky
* removed df.copy() from eval_helper
* one test still failing for query
* all eval tests passing now
* added check to eval arg verification
* added tests to travis
* added optimization and some comments
* added pd.eval and passes all tests
* added ray dataframe back to test file
* optimizations and code cleanup for eval
* changed position of pandas import in __init__
* fixed linting errors
* fixing eval in __init__.py
* fixed travis file - removed extra tests
* removed test directory from linting exclude for travis
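
For reference, the eval semantics these commits implement mirror pandas, both with and without an assignment target. Plain pandas is shown below since the ray dataframe exposed the same API (tests imported it as `import ray.dataframe as pd`):

```python
import pandas as pd  # the ray dataframe mirrors this API

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# With an assignment target: adds a new column (inplace=True mutates df).
df.eval("c = a + b", inplace=True)

# Without assignment: returns the computed expression instead.
result = df.eval("a * b")
print(result.tolist())  # [10, 40, 90]
```
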
* Allow numpy arrays and larger objects to be passed by value in task specifications.
* Fix bug.
* Fix bug. Inline all but numpy object arrays.
* Increase size limit for inlining args in task spec.
* Give numpy init different signatures in Python 2 and Python 3.
* Simplify code.
* Fix test.
* Use import_array1 instead of import_array.
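
A hedged sketch of the inlining rule described above; the threshold value and function name are assumptions, and the real check lives in the task-spec serialization code:

```python
import pickle

import numpy as np

# Illustrative threshold; the actual limit is set in the task-spec code.
INLINE_ARG_SIZE_LIMIT = 100 * 1024  # bytes

def should_pass_by_value(arg):
    """Pass small values by value in the task spec; keep numpy object
    arrays by reference, since they cannot be inlined."""
    if isinstance(arg, np.ndarray):
        return (arg.dtype != np.dtype("O")
                and arg.nbytes < INLINE_ARG_SIZE_LIMIT)
    try:
        return len(pickle.dumps(arg)) < INLINE_ARG_SIZE_LIMIT
    except (pickle.PicklingError, TypeError):
        return False
```
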
* working with dataframes with too many rows and columns
* repr works for jupyter notebooks now
* added comments and test file
* added repr test file to .travis.yml
* added back ray.dataframe as pd to test file
* fixed pandas importing issues in test file
* getting the front and back of df more efficiently
* only keeping dataframe tests in travis
* fixing numpy array issue for row and col lengths
* doesn't add dimensions if df is small enough
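
The repr strategy above can be sketched as follows: pull only the front and back of the frame rather than materializing everything, and skip truncation entirely when the frame is small enough. The helper name is hypothetical; Jupyter notebook support comes from wiring the same sample into `_repr_html_`:

```python
import pandas as pd

def _make_repr_frame(df, max_rows=60, max_cols=20):
    """Build a small frame from the front and back of a large one, so
    repr never materializes the full data."""
    if len(df) <= max_rows and len(df.columns) <= max_cols:
        return df  # small enough: no truncation dimensions added
    sample = df
    if len(df) > max_rows:
        half_r = max_rows // 2
        sample = pd.concat([df.iloc[:half_r], df.iloc[-half_r:]])
    if len(df.columns) > max_cols:
        half_c = max_cols // 2
        sample = pd.concat(
            [sample.iloc[:, :half_c], sample.iloc[:, -half_c:]], axis=1)
    return sample
```
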
* implemented memory_usage()
* completed memory_usage - still failing 2 tests
* only failing one test for memory_usage
* all repr and dataframes tests passing now
* fixing error related to python2 in info()
* fixing python2 errors
* fixed linting errors
* using _arithmetic_helper in memory_usage()
* fixed last lint error
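
A minimal sketch of the partitioned `memory_usage()` idea, assuming row partitions that are plain pandas frames. The real implementation reuses `_arithmetic_helper`; the parameter names here are illustrative:

```python
import pandas as pd

def memory_usage(row_partitions, df_index, index=True, deep=False):
    """Sum per-column memory usage across row partitions, mirroring the
    pandas signature."""
    usage = sum(part.memory_usage(index=False, deep=deep)
                for part in row_partitions)
    if index:
        # The index is held once, outside the partitions.
        idx = pd.Series({"Index": df_index.memory_usage(deep=deep)})
        usage = pd.concat([idx, usage])
    return usage
```
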
* removed testing-specific code
* adding back travis test
* removing extra tests from travis
* re-added concat test
* fixes with new indexing scheme
* code cleanup
* fully working with new indexing scheme
* added tests for info and memory_usage
* removed test file
* baseline impl for index_df.py
* added skeleton for index_df.py
* initial impl index_df
* separate out partition and non-partition impls
* add len function
* drop returns index_df slice of dropped indices
* housecleaning
* Integrate index overhaul
* Rename index df to index metadata
* Fix flake8 issues
* Addressing issues
* fix import issue
* Added metadata passing to constructor
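
A hedged sketch of the index-metadata structure these commits introduce: each label maps to the partition holding it and its offset within that partition, `__len__` delegates to the coordinate frame, and `drop` returns the metadata slice for the dropped labels. Class and attribute names are illustrative:

```python
import pandas as pd

class _IndexMetadata:
    """Maps each index label to (partition, index_within_partition)."""

    def __init__(self, lengths, index=None):
        coords = [(part, pos)
                  for part, length in enumerate(lengths)
                  for pos in range(length)]
        self._coord_df = pd.DataFrame(
            coords,
            columns=["partition", "index_within_partition"],
            index=index)

    def __len__(self):
        return len(self._coord_df)

    def drop(self, labels):
        """Remove labels and return the metadata slice for the dropped
        indices, so callers can update the affected partitions."""
        dropped = self._coord_df.loc[labels]
        self._coord_df = self._coord_df.drop(labels)
        return dropped
```
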
* adding tests
* fixing flake8
* adding init
* flake8 on test
* fixing tests, imports, and flake8
* handling for index
* adding tests for row, index
* added more robust error handling for axis
* fixing test failures
* cleaning up errors for 2.7
* updating travis
* resolving import
* fixing flake8
* moved import order
* fixes to refactor; delaying ray-pd inner concat implementation
* resolving ray-pd concat and from_pandas mutation
* Revert "resolving ray-pd concat and from_pandas mutation" (reverts commit 5db43e4e89e328286532f3ef98a4526575c5d08d)
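
The axis error handling mentioned above might look like the following sketch, which accepts both integer and string axes and raises a pandas-style error otherwise (function name assumed):

```python
def _normalize_axis(axis):
    """Normalize an axis argument, accepting 0/'index' and 1/'columns'."""
    if axis in (0, "index"):
        return 0
    if axis in (1, "columns"):
        return 1
    raise ValueError(
        "No axis named {} for object type DataFrame".format(axis))
```
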
* Add raylet monitor script to timeout Raylet heartbeats
* Unit test for removing a different client from the client table
* Set node manager heartbeat according to global config
* Doc and fixes
* Add regression test for client table disconnect, refactor client table
* Fix linting.
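
A minimal sketch of the monitor's timeout loop, assuming a hypothetical `client_table` interface (`clients`, `drain_heartbeats`, `mark_dead`); the real constants come from the global config referenced above:

```python
import time

NUM_HEARTBEATS_TIMEOUT = 100  # missed beats before a raylet times out
HEARTBEAT_PERIOD_MS = 100     # node manager heartbeat period

def run_monitor(client_table):
    remaining = {cid: NUM_HEARTBEATS_TIMEOUT
                 for cid in client_table.clients()}
    while True:
        for cid in client_table.drain_heartbeats():
            remaining[cid] = NUM_HEARTBEATS_TIMEOUT  # beat seen: reset
        for cid in list(remaining):
            remaining[cid] -= 1
            if remaining[cid] <= 0:
                # Timed out: remove the raylet from the client table.
                client_table.mark_dead(cid)
                del remaining[cid]
        time.sleep(HEARTBEAT_PERIOD_MS / 1000.0)
```
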
* Integrate worker with raylet.
* Begin allowing worker to attach to cluster.
* Fix linting and documentation.
* Fix linting.
* Comment tests back in.
* Fix type of worker command.
* Remove xray python files and tests.
* Fix from rebase.
* Add test.
* Copy over raylet executable.
* Small cleanup.
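
With the worker integrated, a driver of this era attached to a running cluster via the Redis address (later Ray versions renamed the parameter to `address`; shown here as an assumption about the contemporary API):

```python
import ray

ray.init(redis_address="127.0.0.1:6379")  # address of the running cluster

@ray.remote
def f():
    return 1

print(ray.get(f.remote()))  # executed by a raylet-managed worker
```
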
* [tune] Added pbt with keras on cifar10 dataset example
* ENH: add gpu resources
* CLN: requires 4 GPU resources
* CLN: use single quotes
* CLN: don't save model by default
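
The example's shape, as a hedged sketch: a function trainable standing in for the Keras CIFAR-10 loop, a PBT scheduler mutating the learning rate, and per-trial GPU resources (four trials at one GPU each matches the 4-GPU requirement noted above). Exact Tune APIs vary across versions, and `train_cifar10` is a hypothetical stand-in:

```python
import random

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

def train_cifar10(config):
    # Stand-in for the Keras CIFAR-10 training loop in the example.
    for _ in range(100):
        tune.report(mean_accuracy=random.random())  # replace with real eval

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="mean_accuracy",
    mode="max",
    perturbation_interval=10,
    hyperparam_mutations={"lr": lambda: random.uniform(1e-4, 1e-2)},
)

tune.run(
    train_cifar10,
    scheduler=pbt,
    num_samples=4,
    resources_per_trial={"cpu": 1, "gpu": 1},  # four trials -> 4 GPUs total
    config={"lr": 1e-3},
)
```
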