* added class median_stopping_result to schedulers and updated __init__
* Dicts flatten and combine schedulers.
MedianStoppingRule is now combined with MedianStoppingResult; I think
the functionality is essentially the same so there's no need to
duplicate.
Dict flattening was already taken care of in a separate PR, so I've
reverted that.
* lint
* revert
* remove time sharing and simplify state
* fix
* fix tests
* added class median_stopping_result to schedulers and updated __init__
* update property names and types to reflect suggestions by Ray developers
- merge get_median_result and get_best_result into a single method to eliminate duplicate steps
- add a resource check on the PAUSE condition
- modify the utility function to use the updated properties
* updated tests for median_stopping_result in separate file
* remove stray characters from previous merge conflict
* reformatted and cleaned up dependencies from running code format and linting
* update scheduler to coordinate eval interval
* modify median_stopping_result to synchronize result evaluation at regular intervals, driven by the least common interval
* add some logging info to median_result
* add new scheduler, SyncMedianStoppingResult, which evaluates and stops trials in a synchronous fashion
* Cleanup median_stopping_rule
- remove eval_interval
- pause trials with insufficient samples if there are other waiting trials
- compute score only for trials that have reached result_time
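The cleanup above describes the core of the median stopping rule. As an illustrative sketch (not Tune's actual API — the function name, arguments, and `grace_period` default here are hypothetical), the rule stops a trial whose running average is worse than the median of the running averages of the other trials at the same point in time, computing means only after the grace period and scoring only trials that have reached that time:

```python
from statistics import median

def should_stop(trial_results, other_trials_results, grace_period=5):
    """Illustrative median stopping check (not Tune's real interface).

    Stop a trial if its running average result is worse than the median
    of the other trials' running averages at the same step.
    """
    t = len(trial_results)
    if t < grace_period:
        # Only compute the mean after the grace period has elapsed.
        return False
    running_avg = sum(trial_results) / t
    # Compute scores only for trials that have reached this time step.
    other_avgs = [
        sum(results[:t]) / t
        for results in other_trials_results
        if len(results) >= t
    ]
    if not other_avgs:
        return False
    return running_avg < median(other_avgs)
```

In the synchronous variant described below, this check runs for all trials at a shared evaluation point rather than whenever an individual trial reports a result.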
* Remove extraneous classes
* Fix median stopping rule tests
* Added min_time_slice flag to reduce potential checkpointing cost
* Only compute mean after grace
* Relegate logging to debug mode
* Implement metric interface
* Address comment: made actor_handles a dict
* Fix iteration
* Lint
* Mark lightweight actors as num_cpus=0 to prevent resource starvation
* Be more explicit about the readiness condition
* Make task_runner non-blocking
* Lint
* Advertise that Python >= 3.6 is needed
ray/tune/examples/ax_example.py contains f-strings, which limits support for this package to Python 3.6 and up.
* Python 3.5 does not support f-strings
Rewrite by using format()
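The rewrite is the standard mechanical one — an f-string becomes an equivalent `str.format()` call, which Python 3.5 can parse (the variable names here are illustrative, not taken from ax_example.py):

```python
name = "tune"
version = "3.6"

# Python >= 3.6 only (SyntaxError on 3.5):
#     msg = f"{name} requires Python {version}+"

# Equivalent form that also parses on Python 3.5:
msg = "{} requires Python {}+".format(name, version)
```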
* Lower required version after 9f88fe9d
* Remove python_requires again by request
* Fix linter warning
* Implement flask_request and named python request
* Forgot to include missing files
* Address comment
* Add flask to requirements for doc (lint failed)
* Update doc requirement so lint will build
* Install flask in CI
* Fix typo in .travis.yml
* Add example file
* Move into train function
* Somewhat working example of MemNN, still has some failed trials
* Reorganize into a class
* Small fixes
* Iteration decrease and fix hyperparam_mutations
* Some style edits
* Address PR changes without modifying learning rate
* Add configs and hyperparameter mutations
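For context on what `hyperparam_mutations` does in population-based training: each key is either resampled from a list of candidates or perturbed by scaling the current value up or down. A minimal sketch of that behavior (the function name, the `"scale"` marker, and the 1.2 factor are assumptions for illustration, not Tune's implementation):

```python
import random

def perturb(config, mutations, perturb_factor=1.2):
    """Illustrative PBT-style mutation (not Tune's real implementation).

    For each key in `mutations`, either resample from a list of
    candidate values or scale the current numeric value up or down.
    """
    new_config = dict(config)
    for key, candidates in mutations.items():
        if isinstance(candidates, list):
            # Resample from the explicit candidate values.
            new_config[key] = random.choice(candidates)
        else:
            # Otherwise perturb the numeric value by the factor.
            factor = random.choice([perturb_factor, 1.0 / perturb_factor])
            new_config[key] = config[key] * factor
    return new_config
```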
* Add tune test
* Modify import locations
* Some parameter changes for testing
* Update memnn example
* Add tensorboard support and address PR comment
* Final changes
* lint
* generator
* object copy optimization
* see if we can reuse the Arrow parallel_memcopy
* remove unused function
* restore the original code, since later experiments show that it has little impact on performance.
* lint