.. _tune-60-seconds:

============
Key Concepts
============

Let's quickly walk through the key concepts you need to know to use Tune. In this guide, we'll be covering the following:

.. contents::
    :local:
    :depth: 1

.. image:: /images/tune-workflow.png

Trainables
----------
Tune will optimize your training process using the :ref:`Trainable API <trainable-docs>`. To start, let's try to maximize this objective function:

.. code-block:: python

    def objective(x, a, b):
        return a * (x ** 0.5) + b

Here's an example of specifying the objective function using :ref:`the function-based Trainable API <tune-function-api>`:

.. code-block:: python

    from ray import tune


    def trainable(config):
        # config (dict): A dict of hyperparameters.
        for x in range(20):
            score = objective(x, config["a"], config["b"])

            tune.report(score=score)  # This sends the score to Tune.

There are two Trainable APIs. One is the :ref:`function-based API <tune-function-api>` demonstrated above.

The other is the :ref:`class-based API <tune-class-api>`. Here's an example of specifying the objective function using the :ref:`class-based API <tune-class-api>`:

.. code-block:: python

    from ray import tune


    class Trainable(tune.Trainable):
        def setup(self, config):
            # config (dict): A dict of hyperparameters
            self.x = 0
            self.a = config["a"]
            self.b = config["b"]

        def step(self):  # This is called iteratively.
            score = objective(self.x, self.a, self.b)
            self.x += 1
            return {"score": score}

.. tip:: Do not use ``tune.report`` within a ``Trainable`` class.

See the documentation: :ref:`trainable-docs` and :ref:`examples <tune-general-examples>`.

tune.run
--------
Use ``tune.run`` to execute hyperparameter tuning using the core Ray APIs. This function manages your experiment and provides many features such as :ref:`logging <tune-logging>`, :ref:`checkpointing <tune-checkpoint>`, and :ref:`early stopping <tune-stopping>`.

.. code-block:: python

    # Pass in a Trainable class or function to tune.run.
    tune.run(trainable)

This function will report status on the command line until all trials stop (each trial is one instance of a :ref:`Trainable <trainable-docs>`):

.. code-block:: bash

    == Status ==
    Memory usage on this node: 11.4/16.0 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 1/12 CPUs, 0/0 GPUs, 0.0/3.17 GiB heap, 0.0/1.07 GiB objects
    Result logdir: /Users/foo/ray_results/myexp
    Number of trials: 1 (1 RUNNING)
    +----------------------+----------+---------------------+-----------+--------+--------+----------------+-------+
    | Trial name           | status   | loc                 |         a |      b |  score | total time (s) |  iter |
    |----------------------+----------+---------------------+-----------+--------+--------+----------------+-------|
    | MyTrainable_a826033a | RUNNING  | 10.234.98.164:31115 |  0.303706 | 0.0761 | 0.1289 |        7.54952 |    15 |
    +----------------------+----------+---------------------+-----------+--------+--------+----------------+-------+

You can also easily run 10 trials. Tune automatically :ref:`determines how many trials will run in parallel <tune-parallelism>`.

.. code-block:: python

    tune.run(trainable, num_samples=10)

Finally, you can randomly sample or grid search hyperparameters via Tune's :ref:`search space API <tune-default-search-space>`:

.. code-block:: python

    space = {"x": tune.uniform(0, 1)}
    tune.run(trainable, config=space, num_samples=10)
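
``tune.run`` also accepts arguments that control how an experiment runs. The sketch below is illustrative rather than prescriptive: the experiment name, stopping criterion, resource request, and output directory are example values you would adapt to your own setup.

.. code-block:: python

    tune.run(
        trainable,
        name="my_experiment",              # Name used for the experiment's logdir.
        stop={"training_iteration": 10},   # Stop each trial after 10 iterations.
        resources_per_trial={"cpu": 1},    # Request one CPU per trial.
        local_dir="./my_results",          # Write results here instead of ~/ray_results.
        num_samples=10,
    )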

See more documentation: :ref:`tune-run-ref`.


Search spaces
-------------
To optimize your hyperparameters, you have to define a *search space*.
A search space defines valid values for your hyperparameters and can specify
how these values are sampled (e.g. from a uniform distribution or a normal
distribution).

Tune offers various functions to define search spaces and sampling methods.
:ref:`You can find the documentation of these search space definitions here <tune-sample-docs>`.

Usually you pass your search space definition in the ``config`` parameter of
``tune.run()``.

Here's an example covering all search space functions. Again,
:ref:`here is the full explanation of all these functions <tune-sample-docs>`.

.. code-block:: python

    config = {
        "uniform": tune.uniform(-5, -1),  # Uniform float between -5 and -1
        "quniform": tune.quniform(3.2, 5.4, 0.2),  # Round to increments of 0.2
        "loguniform": tune.loguniform(1e-4, 1e-2),  # Uniform float in log space
        "qloguniform": tune.qloguniform(1e-4, 1e-1, 5e-4),  # Round to increments of 0.0005
        "randn": tune.randn(10, 2),  # Normal distribution with mean 10 and sd 2
        "qrandn": tune.qrandn(10, 2, 0.2),  # Round to increments of 0.2
        "randint": tune.randint(-9, 15),  # Random integer between -9 and 15
        "qrandint": tune.qrandint(-21, 12, 3),  # Round to increments of 3 (includes 12)
        "choice": tune.choice(["a", "b", "c"]),  # Choose one of these options uniformly
        "func": tune.sample_from(lambda spec: spec.config.uniform * 0.01),  # Depends on other value
        "grid": tune.grid_search([32, 64, 128])  # Search over all these values
    }


Search Algorithms
-----------------
To optimize the hyperparameters of your training process, you will want to use a :ref:`Search Algorithm <tune-search-alg>` which will help suggest better hyperparameters.

.. code-block:: python

    # Be sure to first run `pip install hyperopt`

    from ray import tune
    from ray.tune.suggest.hyperopt import HyperOptSearch

    # Create a Tune search space (automatically converted for HyperOpt)
    config = {
        "a": tune.uniform(0, 1),
        "b": tune.uniform(0, 20)

        # Note: Arbitrary HyperOpt search spaces should be supported!
        # "foo": tune.randn(0, 1)
    }

    # Specify the search space and maximize score
    hyperopt = HyperOptSearch(metric="score", mode="max")

    # Execute 20 trials using HyperOpt and stop after 20 iterations
    tune.run(
        trainable,
        config=config,
        search_alg=hyperopt,
        num_samples=20,
        stop={"training_iteration": 20}
    )

Tune has Search Algorithms that integrate with many popular **optimization** libraries, such as :ref:`Nevergrad <nevergrad>` and :ref:`HyperOpt <tune-hyperopt>`. Tune automatically converts the provided search space into the search space format expected by the search algorithm and its underlying library.

.. note::

    We are currently in the process of implementing automatic search space
    conversions for all search algorithms. Currently this works for
    AxSearch, BayesOpt, HyperOpt, and Optuna. The other search algorithms
    will follow shortly, but have to be instantiated with their respective
    search spaces at the moment.
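
Instantiating a searcher with its library-native search space looks like the sketch below. This uses HyperOpt's own primitives (mirroring the Tune config above) and assumes the ``hyperopt`` package is installed; treat it as an illustration of the pattern, not the only way to do it.

.. code-block:: python

    from hyperopt import hp
    from ray import tune
    from ray.tune.suggest.hyperopt import HyperOptSearch

    # A search space defined with HyperOpt's own primitives.
    space = {
        "a": hp.uniform("a", 0, 1),
        "b": hp.uniform("b", 0, 20),
    }

    # Pass the library-native space to the searcher instead of using `config`.
    hyperopt_search = HyperOptSearch(space, metric="score", mode="max")

    tune.run(trainable, search_alg=hyperopt_search, num_samples=20)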

See the documentation: :ref:`tune-search-alg`.

Trial Schedulers
----------------
In addition, you can make your training process more efficient by using a :ref:`Trial Scheduler <tune-schedulers>`.

Trial Schedulers can stop, pause, or tweak the hyperparameters of running trials, making your hyperparameter tuning process much faster.

.. code-block:: python

    from ray import tune
    from ray.tune.schedulers import HyperBandScheduler

    # Create a HyperBand scheduler and maximize score
    hyperband = HyperBandScheduler(metric="score", mode="max")

    # Execute 20 trials using HyperBand with a search space
    configs = {"a": tune.uniform(0, 1), "b": tune.uniform(0, 1)}

    tune.run(
        Trainable,
        config=configs,
        num_samples=20,
        scheduler=hyperband
    )

:ref:`Population-based Training <tune-scheduler-pbt>` and :ref:`HyperBand <tune-scheduler-hyperband>` are examples of popular optimization algorithms implemented as Trial Schedulers.

Unlike **Search Algorithms**, :ref:`Trial Schedulers <tune-schedulers>` do not select which hyperparameter configurations to evaluate. However, you can use them together.

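
For example, the following sketch combines the two in a single ``tune.run`` call, reusing the ``trainable`` function, metric, and mode from the examples above; the specific searcher and scheduler are just one possible pairing.

.. code-block:: python

    from ray import tune
    from ray.tune.schedulers import HyperBandScheduler
    from ray.tune.suggest.hyperopt import HyperOptSearch

    config = {"a": tune.uniform(0, 1), "b": tune.uniform(0, 1)}

    # The search algorithm suggests new configurations to try; the scheduler
    # decides which of the running trials to stop early.
    tune.run(
        trainable,
        config=config,
        search_alg=HyperOptSearch(metric="score", mode="max"),
        scheduler=HyperBandScheduler(metric="score", mode="max"),
        num_samples=20,
    )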

See the documentation: :ref:`schedulers-ref`.

Analysis
--------
``tune.run`` returns an :ref:`Analysis <tune-analysis-docs>` object which has methods you can use for analyzing your training.

.. code-block:: python

    # `algo` is a Search Algorithm, e.g. the HyperOptSearch instance from above.
    analysis = tune.run(trainable, search_alg=algo, stop={"training_iteration": 20})

    # Get the best hyperparameters
    best_hyperparameters = analysis.get_best_config(metric="score", mode="max")

This object can also retrieve all training runs as dataframes, allowing you to do ad-hoc data analysis over your results.

.. code-block:: python

    # Get a dataframe for the max score seen for each trial
    df = analysis.dataframe(metric="score", mode="max")
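
You can also drill down into individual trials. As a sketch (assuming the trials reported a ``score`` metric, as in the examples above), the analysis object exposes one dataframe per trial, keyed by the trial's logdir:

.. code-block:: python

    # Dict mapping each trial's logdir to a dataframe of its reported results.
    trial_dfs = analysis.trial_dataframes
    for logdir, trial_df in trial_dfs.items():
        print(logdir, trial_df["score"].max())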

What's Next?
------------
Now that you have a working understanding of Tune, check out:

* :doc:`/tune/user-guide`: A comprehensive overview of Tune's features.
* :ref:`tune-guides`: Tutorials for using Tune with your preferred machine learning library.
* :doc:`/tune/examples/index`: End-to-end examples and templates for using Tune with your preferred machine learning library.
* :ref:`tune-tutorial`: A simple tutorial that walks you through the process of setting up a Tune experiment.

Further Questions or Issues?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Reach out to us with questions, issues, or feedback through the following channels:

1. `StackOverflow`_: For questions about how to use Ray.
2. `GitHub Issues`_: For bug reports and feature requests.

.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues