Tune: A Scalable Hyperparameter Tuning Library
==============================================

.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey `_!

.. image:: images/tune.png
    :scale: 30%
    :align: center

Tune is a Python library for hyperparameter tuning at any scale. Core features:

* Launch a multi-node `distributed hyperparameter sweep `_ in less than 10 lines of code.
* Supports any machine learning framework, including PyTorch, XGBoost, MXNet, and Keras. See `examples here `_.
* Natively `integrates with optimization libraries `_ such as `HyperOpt `_, `Bayesian Optimization `_, and `Facebook Ax `_.
* Choose among `scalable algorithms `_ such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, and `HyperBand/ASHA`_.
* Visualize results with `TensorBoard `__.

.. _`Population Based Training (PBT)`: tune-schedulers.html#population-based-training-pbt
.. _`Vizier's Median Stopping Rule`: tune-schedulers.html#median-stopping-rule
.. _`HyperBand/ASHA`: tune-schedulers.html#asynchronous-hyperband

For more information, check out:

* `Code `__: GitHub repository for Tune.
* `User Guide `__: A comprehensive overview of how to use Tune's features.
* `Tutorial Notebooks `__: Tutorial notebooks on using Tune with Keras or PyTorch.

**Try out a tutorial notebook on Colab**:

.. raw:: html

    <!-- "Tune Tutorial" Colab badge -->

Quick Start
-----------

To run this example, you will need to install the following:

.. code-block:: bash

    $ pip install ray[tune] torch torchvision filelock

This example runs a small grid search to train a CNN using PyTorch and Tune.

.. literalinclude:: ../../python/ray/tune/tests/example.py
    :language: python
    :start-after: __quick_start_begin__
    :end-before: __quick_start_end__

If TensorBoard is installed, automatically visualize all trial results:

.. code-block:: bash

    tensorboard --logdir ~/ray_results

.. image:: images/tune-start-tb.png
    :scale: 30%
    :align: center

If using TF2 and TensorBoard, Tune will also automatically generate TensorBoard HParams output:

.. image:: images/tune-hparams-coord.png
    :scale: 20%
    :align: center

Distributed Quick Start
-----------------------

1. Import and initialize Ray by adding the following to the top of your example script.

.. code-block:: python

    # Append to top of your script
    import ray
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--ray-address")
    args = parser.parse_args()
    ray.init(address=args.ray_address)

Alternatively, download a full example script here: :download:`mnist_pytorch.py <../../python/ray/tune/examples/mnist_pytorch.py>`

2. Download the following example Ray cluster configuration as ``tune-local-default.yaml`` and replace the appropriate fields:

.. literalinclude:: ../../python/ray/tune/examples/tune-local-default.yaml
    :language: yaml

Alternatively, download it here: :download:`tune-local-default.yaml <../../python/ray/tune/examples/tune-local-default.yaml>`. See `Ray cluster docs here `_.

3. Run ``ray submit`` as follows:

.. code-block:: bash

    ray submit tune-local-default.yaml mnist_pytorch.py --args="--ray-address=localhost:6379" --start

This will start Ray on all of your machines and run a distributed hyperparameter search across them.
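For orientation, below is a minimal, hypothetical sketch of what such a script can look like end to end. It is not the bundled ``mnist_pytorch.py``: the trainable, search space, and resource values are placeholders, and the ``tune.report`` call assumes a Ray version that provides the function-based reporting API. The point is that once ``ray.init(address=...)`` connects to the cluster, the ``tune.run`` call itself is unchanged from the single-node quick start.

.. code-block:: python

    # Minimal sketch only -- the trainable and search space are placeholders,
    # not the bundled mnist_pytorch.py example.
    import argparse

    import ray
    from ray import tune


    def trainable(config):
        # Placeholder objective: replace with your real training loop.
        score = config["lr"] * config["batch_size"]
        # Assumes a Ray version that provides tune.report for function trainables.
        tune.report(mean_accuracy=score)


    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--ray-address")
        args = parser.parse_args()

        # Connecting to an existing cluster is the only distributed-specific line.
        ray.init(address=args.ray_address)

        tune.run(
            trainable,
            config={
                "lr": tune.grid_search([0.001, 0.01, 0.1]),
                "batch_size": tune.grid_search([32, 64]),
            },
            # Illustrative per-trial resources; trials run in parallel across nodes.
            resources_per_trial={"cpu": 1},
        )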
To summarize, here is the full set of commands:

.. code-block:: bash

    wget https://raw.githubusercontent.com/ray-project/ray/master/python/ray/tune/examples/mnist_pytorch.py
    wget https://raw.githubusercontent.com/ray-project/ray/master/python/ray/tune/tune-local-default.yaml
    ray submit tune-local-default.yaml mnist_pytorch.py --args="--ray-address=localhost:6379" --start

Take a look at the `Distributed Experiments `_ documentation for more details, including:

1. Setting up distributed experiments on your local cluster
2. Using AWS and GCP
3. Spot instance usage/preemptible instances, and more.

Talks and Blogs
---------------

Below are some blog posts and talks about Tune:

- [blog] `Tune: a Python library for fast hyperparameter tuning at any scale `_
- [blog] `Cutting edge hyperparameter tuning with Ray Tune `_
- [blog] `Simple hyperparameter and architecture search in tensorflow with Ray Tune `_
- [slides] `Talk given at RISECamp 2019 `_
- [talk] `Talk given at RISECamp 2018 `_

Open Source Projects using Tune
-------------------------------

Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request adding (or requesting the removal of) a listed project.

- `Softlearning `_: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
- `Flambe `_: An ML framework to accelerate research and its path to production. See `flambe.ai `_.
- `Population Based Augmentation `_: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.
- `Fast AutoAugment by Kakao `_: Fast AutoAugment (accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.
- `Allentune `_: Hyperparameter Search for AllenNLP from AllenAI.

Citing Tune
-----------

If Tune helps you in your academic research, you are encouraged to cite `our paper `__. Here is an example bibtex:

.. code-block:: tex

    @article{liaw2018tune,
        title={Tune: A Research Platform for Distributed Model Selection and Training},
        author={Liaw, Richard and Liang, Eric and Nishihara, Robert and Moritz, Philipp and Gonzalez, Joseph E and Stoica, Ion},
        journal={arXiv preprint arXiv:1807.05118},
        year={2018}
    }