[docs] Why Tune? (#8790)

Co-authored-by: Sumanth Ratna <sumanthratna@gmail.com>
Richard Liaw 2020-06-11 11:23:36 -07:00 committed by GitHub
parent 8d1ccfd0f7
commit 91d55f52ab
4 changed files with 55 additions and 19 deletions


@ -42,10 +42,10 @@ Ray uses Tasks (functions) and Actors (Classes) to allow you to parallelize your
The Ray Community
-----------------
Ray is more than a framework for distributed applications; it is also an active community of developers,
researchers, and folks who love machine learning.
You can join (and Star!) us `on GitHub`_.
You can join (and Star!) us on `on GitHub`_.
You can also join the `community slack <https://forms.gle/9TSdDYUgxYs8SA9e8>`_ to discuss Ray! The community is extremely active in helping people succeed in building their Ray applications.
You can also get started by visiting our `Tutorials <https://github.com/ray-project/tutorial>`_. For the latest wheels (nightlies), see the `installation page <installation.html>`__.


@ -11,9 +11,8 @@ Tune is a Python library for experiment execution and hyperparameter tuning at a
* Launch a multi-node :ref:`distributed hyperparameter sweep <tune-distributed>` in less than 10 lines of code.
* Supports any machine learning framework, :ref:`including PyTorch, XGBoost, MXNet, and Keras <tune-guides-overview>`.
* Natively :ref:`integrates with optimization libraries <tune-search-alg>` such as `HyperOpt <https://github.com/hyperopt/hyperopt>`_, `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_, and `Facebook Ax <http://ax.dev>`_.
* Choose among :ref:`scalable algorithms <tune-schedulers>` such as :ref:`Population Based Training (PBT) <tune-scheduler-pbt>`, :ref:`Vizier's Median Stopping Rule <tune-scheduler-msr>`, :ref:`HyperBand/ASHA <tune-scheduler-hyperband>`.
* Visualize results with `TensorBoard <https://www.tensorflow.org/get_started/summaries_and_tensorboard>`__.
* Automatically manages :ref:`checkpoints <tune-checkpoint>` and :ref:`logging to TensorBoard <tune-logging>`.
* Choose among state-of-the-art algorithms such as :ref:`Population Based Training (PBT) <tune-scheduler-pbt>`, :ref:`BayesOptSearch <bayesopt>`, and :ref:`HyperBand/ASHA <tune-scheduler-hyperband>`.
* Move your models from training to serving on the same infrastructure with `Ray Serve`_.
.. _`Ray Serve`: rayserve/overview.html
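
To make the features above concrete, here is a minimal sketch of a Tune experiment, assuming a recent Ray version where ``tune.report`` is available; the objective function and search space are purely illustrative:

.. code-block:: python

    from ray import tune


    def trainable(config):
        # Toy objective standing in for real model training; report the
        # metric that Tune should optimize.
        loss = (config["lr"] - 0.01) ** 2
        tune.report(mean_loss=loss)


    analysis = tune.run(
        trainable,
        config={"lr": tune.grid_search([0.001, 0.01, 0.1])},
    )
    print(analysis.get_best_config(metric="mean_loss", mode="min"))

The same script can be pointed at a multi-node Ray cluster without code changes, which is where the "less than 10 lines" claim comes from.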
@ -52,9 +51,16 @@ If using TF2 and TensorBoard, Tune will also automatically generate TensorBoard
.. tip:: Join the `Ray community slack <https://forms.gle/9TSdDYUgxYs8SA9e8>`_ to discuss Ray Tune (and other Ray libraries)!
Guides/Materials
Why choose Tune?
----------------
There are many other hyperparameter optimization libraries out there. If you're new to Tune, you're probably wondering, "What makes Tune different?"
.. include:: tune/why_tune.rst
Reference Materials
-------------------
Here are some reference materials for Tune:
* :ref:`Tune Tutorials, Guides, and Examples <tune-guides-overview>`
@ -69,18 +75,6 @@ Below are some blog posts and talks about Tune:
- [video] `Talk given at RISECamp 2018 <https://www.youtube.com/watch?v=38Yd_dXW51Q>`_
- [video] `A Guide to Modern Hyperparameter Optimization (PyData LA 2019) <https://www.youtube.com/watch?v=10uz5U3Gy6E>`_ (`slides <https://speakerdeck.com/richardliaw/a-modern-guide-to-hyperparameter-optimization>`_)
Open Source Projects using Tune
-------------------------------
Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request to add a project (or to request the removal of a listed one!).
- `Softlearning <https://github.com/rail-berkeley/softlearning>`_: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
- `Flambe <https://github.com/asappresearch/flambe>`_: An ML framework to accelerate research and its path to production. See `flambe.ai <https://flambe.ai>`_.
- `Population Based Augmentation <https://github.com/arcelien/pba>`_: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.
- `Fast AutoAugment by Kakao <https://github.com/kakaobrain/fast-autoaugment>`_: Fast AutoAugment (Accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.
- `Allentune <https://github.com/allenai/allentune>`_: Hyperparameter Search for AllenNLP from AllenAI.
- `machinable <https://github.com/frthjf/machinable>`_: A modular configuration system for machine learning research. See `machinable.org <https://machinable.org>`_.
Citing Tune
-----------


@ -176,3 +176,15 @@ Contributed Examples
- `pbt_tune_cifar10_with_keras <https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/pbt_tune_cifar10_with_keras.py>`__: A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.
- `genetic_example <https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/genetic_example.py>`__: Optimizing the Michalewicz function using the contributed GeneticSearch algorithm with AsyncHyperBandScheduler.
- `tune_cifar10_gluon <https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/tune_cifar10_gluon.py>`__: MXNet Gluon example using Tune with the function-based API on the CIFAR-10 dataset.
Open Source Projects using Tune
-------------------------------
Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request to add a project (or to request the removal of a listed one!).
- `Softlearning <https://github.com/rail-berkeley/softlearning>`_: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
- `Flambe <https://github.com/asappresearch/flambe>`_: An ML framework to accelerate research and its path to production. See `flambe.ai <https://flambe.ai>`_.
- `Population Based Augmentation <https://github.com/arcelien/pba>`_: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.
- `Fast AutoAugment by Kakao <https://github.com/kakaobrain/fast-autoaugment>`_: Fast AutoAugment (Accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.
- `Allentune <https://github.com/allenai/allentune>`_: Hyperparameter Search for AllenNLP from AllenAI.
- `machinable <https://github.com/frthjf/machinable>`_: A modular configuration system for machine learning research. See `machinable.org <https://machinable.org>`_.


@ -0,0 +1,30 @@
Tune offers cutting-edge optimization algorithms.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a user, you're probably looking into hyperparameter optimization because you want to quickly increase your model performance.
Tune enables you to leverage a variety of cutting-edge optimization algorithms, reducing the cost of tuning by :ref:`aggressively terminating bad hyperparameter evaluations <tune-scheduler-hyperband>`, intelligently :ref:`choosing better parameters to evaluate <tune-search-alg>`, or even :ref:`changing the hyperparameters during training <tune-scheduler-pbt>` to optimize hyperparameter schedules.
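
For instance, in recent Ray versions, early termination of bad trials is just a scheduler argument to ``tune.run``. A rough sketch with an illustrative toy objective:

.. code-block:: python

    from ray import tune
    from ray.tune.schedulers import ASHAScheduler


    def trainable(config):
        for step in range(100):
            # Report an intermediate result each step so that ASHA can
            # terminate clearly underperforming trials early.
            loss = (config["lr"] - 0.01) ** 2 / (step + 1)
            tune.report(mean_loss=loss)


    tune.run(
        trainable,
        config={"lr": tune.loguniform(1e-4, 1e-1)},
        num_samples=50,
        scheduler=ASHAScheduler(metric="mean_loss", mode="min"),
    )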
Tune simplifies your workflow.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A key problem with machine learning frameworks is the need to restructure all of your code to fit the framework.
With Tune, you can optimize your model just by :ref:`adding a few code snippets <tune-tutorial>`.
Further, Tune actually removes boilerplate from your training workflow, automatically :ref:`managing checkpoints <tune-checkpoint>` and :ref:`logging results <tune-logging>` to tools such as MLflow and TensorBoard.
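
As a sketch of what "a few code snippets" means in practice (the training loop below is a stand-in for your own code; only the ``tune.report`` call and the ``tune.run`` invocation are Tune-specific):

.. code-block:: python

    from ray import tune


    def train_model(config):
        # Stand-in for an existing training loop.
        weight = 0.0
        for epoch in range(10):
            weight -= config["lr"] * (weight - 3.0)  # toy "gradient" step
            # The only Tune-specific line: report the metric each epoch.
            tune.report(mean_loss=(weight - 3.0) ** 2)


    tune.run(train_model, config={"lr": tune.grid_search([0.01, 0.1, 0.5])})

By default, results land under ``~/ray_results``, so ``tensorboard --logdir ~/ray_results`` is usually all that is needed to visualize a run.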
Tune provides first-class multi-GPU & distributed training support.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hyperparameter tuning is known to be highly time-consuming, so it is often necessary to parallelize this process. Most other tuning frameworks require you to implement your own multi-process framework or build your own distributed system to speed up hyperparameter tuning.
However, Tune allows you to transparently :ref:`parallelize across multiple GPUs and multiple nodes <tune-parallelism>`. Tune even has seamless :ref:`fault tolerance and cloud support <tune-distributed>`, allowing you to scale up your hyperparameter search by 100x while reducing costs by up to 10x by using cheap preemptible instances.
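
A sketch of what this looks like from the user's side (the resource numbers are illustrative and the objective is a placeholder):

.. code-block:: python

    from ray import tune


    def trainable(config):
        # Placeholder objective; a real trainable would train its model on
        # the GPU that Tune reserves for this trial.
        tune.report(mean_loss=(config["lr"] - 0.01) ** 2)


    tune.run(
        trainable,
        config={"lr": tune.uniform(0.001, 0.1)},
        num_samples=100,
        resources_per_trial={"cpu": 2, "gpu": 1},  # resources reserved per trial
    )

Running the same script against a multi-node cluster (for example after ``ray.init(address="auto")``) is what lets the identical code spread across machines.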
What if I'm already doing hyperparameter tuning?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You might already be using an existing hyperparameter tuning tool such as HyperOpt or Bayesian Optimization.
In this situation, Tune actually allows you to power up your existing workflow. Tune's :ref:`Search Algorithms <tune-search-alg>` integrate with a variety of popular hyperparameter tuning libraries (such as Nevergrad or HyperOpt) and allow you to seamlessly scale up your optimization process -- without sacrificing performance.
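
A sketch of such an integration, assuming ``hyperopt`` is installed (module paths and signatures have shifted across Ray versions, so treat this as illustrative):

.. code-block:: python

    from hyperopt import hp
    from ray import tune
    from ray.tune.suggest.hyperopt import HyperOptSearch


    def trainable(config):
        tune.report(mean_loss=(config["lr"] - 0.01) ** 2)


    # Define the space in HyperOpt's own format; Tune drives the optimization
    # loop and schedules the trials.
    space = {"lr": hp.loguniform("lr", -7, -1)}
    algo = HyperOptSearch(space, metric="mean_loss", mode="min")

    tune.run(trainable, search_alg=algo, num_samples=20)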