Tune: Scalable Hyperparameter Tuning
====================================

Tune is a scalable framework for hyperparameter search with a focus on deep learning and deep reinforcement learning.

User documentation can be `found here <http://docs.ray.io/en/latest/tune.html>`__.

Tutorial
--------

To get started with Tune, try going through `our tutorial of using Tune with Keras <https://github.com/ray-project/tutorial/blob/master/tune_exercises/exercise_1_basics.ipynb>`__.

(Experimental): You can try out `the above tutorial on a free hosted server via Binder <https://mybinder.org/v2/gh/ray-project/tutorial/master?filepath=tune_exercises%2Fexercise_1_basics.ipynb>`__.

Citing Tune
-----------

If Tune helps you in your academic research, you are encouraged to cite `our paper <https://arxiv.org/abs/1807.05118>`__. Here is an example BibTeX entry:

.. code-block:: tex

    @article{liaw2018tune,
        title={Tune: A Research Platform for Distributed Model Selection and Training},
        author={Liaw, Richard and Liang, Eric and Nishihara, Robert and Moritz, Philipp and Gonzalez, Joseph E and Stoica, Ion},
        journal={arXiv preprint arXiv:1807.05118},
        year={2018}
    }
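Quick Example
-------------

As a rough sketch of what hyperparameter search with Tune looks like: the toy objective below is hypothetical (not from this repository), and it assumes ``ray`` is installed with a Tune version that exposes ``tune.run``, ``tune.report``, and ``tune.grid_search`` (the exact API varies across Ray releases).

.. code-block:: python

    # Minimal, illustrative Tune usage sketch.
    # The objective is a toy quadratic minimized at x == 3.
    from ray import tune


    def trainable(config):
        # Compute the objective for one sampled configuration
        # and report it back to Tune.
        score = (config["x"] - 3) ** 2
        tune.report(score=score)


    # Run one trial per value in the grid; Tune schedules the
    # trials on the Ray cluster (or locally).
    analysis = tune.run(
        trainable,
        config={"x": tune.grid_search([1, 2, 3, 4])},
    )

    # Retrieve the configuration that minimized the reported score.
    print(analysis.get_best_config(metric="score", mode="min"))

Here ``tune.grid_search`` evaluates the trainable once per listed value of ``x``, and the analysis object returned by ``tune.run`` lets you query the winning configuration.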