.. _tune-grid-random:

Grid/Random Search
==================

Overview
--------

Tune has a native interface for specifying grid search and random search. You specify the search space via ``tune.run(config=...)``.

You can either use the ``tune.grid_search`` primitive to specify an axis of a grid search...
.. code-block:: python

    tune.run(
        trainable,
        config={"bar": tune.grid_search([True, False])})

... or one of the random sampling primitives to specify distributions (:ref:`tune-sample-docs`):
.. code-block:: python

    tune.run(
        trainable,
        config={
            "param1": tune.choice([True, False]),
            "bar": tune.uniform(0, 10),
            "alpha": tune.sample_from(lambda _: np.random.uniform(100) ** 2),
            "const": "hello"  # It is also ok to specify constant values.
        })
.. caution:: If you use a Search Algorithm, you may not be able to specify lambdas or grid search with this
    interface, as the search algorithm may require a different search space declaration.

To sample multiple times (and thus run multiple trials), specify ``tune.run(num_samples=N)``. If ``grid_search`` is provided as an argument, the *same* grid will be repeated ``N`` times.
.. code-block:: python

    # 13 different configs.
    tune.run(trainable, num_samples=13, config={
        "x": tune.choice([0, 1, 2]),
        }
    )

    # 13 different configs.
    tune.run(trainable, num_samples=13, config={
        "x": tune.choice([0, 1, 2]),
        "y": tune.randn(),  # each trial also samples from a normal distribution
        }
    )

    # 4 different configs.
    tune.run(trainable, config={"x": tune.grid_search([1, 2, 3, 4])}, num_samples=1)

    # 3 different configs.
    tune.run(trainable, config={"x": tune.grid_search([1, 2, 3])}, num_samples=1)

    # 6 different configs.
    tune.run(trainable, config={"x": tune.grid_search([1, 2, 3])}, num_samples=2)

    # 9 different configs.
    tune.run(trainable, num_samples=1, config={
        "x": tune.grid_search([1, 2, 3]),
        "y": tune.grid_search(["a", "b", "c"])}
    )

    # 18 different configs.
    tune.run(trainable, num_samples=2, config={
        "x": tune.grid_search([1, 2, 3]),
        "y": tune.grid_search(["a", "b", "c"])}
    )

    # 45 different configs.
    tune.run(trainable, num_samples=5, config={
        "x": tune.grid_search([1, 2, 3]),
        "y": tune.grid_search(["a", "b", "c"])}
    )

Note that grid search and random search primitives are interoperable: each can be used independently or in combination with the other.
.. code-block:: python

    # 6 different configs.
    tune.run(trainable, num_samples=2, config={
        "x": tune.sample_from(...),
        "y": tune.grid_search(["a", "b", "c"])
        }
    )

In the example below, ``num_samples=10`` repeats the 3x3 grid search 10 times, for a total of 90 trials, each with randomly sampled values of ``alpha`` and ``beta``.
.. code-block:: python
    :emphasize-lines: 12

    tune.run(
        my_trainable,
        name="my_trainable",
        # num_samples will repeat the entire config 10 times.
        num_samples=10,
        config={
            # ``sample_from`` creates a generator to call the lambda once per trial.
            "alpha": tune.sample_from(lambda spec: np.random.uniform(100)),
            # ``sample_from`` also supports "conditional search spaces".
            "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal()),
            "nn_layers": [
                # tune.grid_search will make it so that all values are evaluated.
                tune.grid_search([16, 64, 256]),
                tune.grid_search([16, 64, 256]),
            ],
        },
    )
.. _tune_custom-search:

Custom/Conditional Search Spaces
--------------------------------

You'll often run into awkward search spaces (i.e., when one hyperparameter depends on another). Use ``tune.sample_from(func)`` to provide a **custom** callable function for generating a search space.

The parameter ``func`` should take in a ``spec`` object, which has a ``config`` namespace from which you can access other hyperparameters. This is useful for conditional distributions:
.. code-block:: python

    tune.run(
        ...,
        config={
            # A random function
            "alpha": tune.sample_from(lambda _: np.random.uniform(100)),
            # Use the `spec.config` namespace to access other hyperparameters.
            "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal())
        }
    )

Here's an example showing a grid search over two nested parameters combined with random sampling from two lambda functions, generating 9 different trials. Note that the value of ``beta`` depends on the value of ``alpha``, which is represented by referencing ``spec.config.alpha`` in the lambda function. This lets you specify conditional parameter distributions.
.. code-block:: python
    :emphasize-lines: 4-11

    tune.run(
        my_trainable,
        name="my_trainable",
        config={
            "alpha": tune.sample_from(lambda spec: np.random.uniform(100)),
            "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal()),
            "nn_layers": [
                tune.grid_search([16, 64, 256]),
                tune.grid_search([16, 64, 256]),
            ],
        }
    )
.. _tune-sample-docs:

Random Distributions API
------------------------
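
Before the individual function docs below, here is a small sketch of how a few of these primitives might be combined in a single config (``my_trainable`` is a placeholder for your own trainable):

.. code-block:: python

    config = {
        # Sample a float uniformly between 0.0001 and 0.1 in log space.
        "lr": tune.loguniform(1e-4, 1e-1),
        # Sample a float uniformly between 0 and 1,
        # rounding to multiples of 0.1.
        "momentum": tune.quniform(0, 1, 0.1),
        # Sample from a normal distribution with mean 0 and sd 1.
        "noise": tune.randn(0, 1),
        # Uniformly pick one of the listed options.
        "activation": tune.choice(["relu", "tanh"]),
    }

    tune.run(my_trainable, config=config)
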
tune.randn
~~~~~~~~~~

.. autofunction:: ray.tune.randn

tune.qrandn
~~~~~~~~~~~

.. autofunction:: ray.tune.qrandn

tune.loguniform
~~~~~~~~~~~~~~~

.. autofunction:: ray.tune.loguniform

tune.qloguniform
~~~~~~~~~~~~~~~~

.. autofunction:: ray.tune.qloguniform

tune.uniform
~~~~~~~~~~~~

.. autofunction:: ray.tune.uniform

tune.quniform
~~~~~~~~~~~~~

.. autofunction:: ray.tune.quniform

tune.randint
~~~~~~~~~~~~

.. autofunction:: ray.tune.randint

tune.qrandint
~~~~~~~~~~~~~

.. autofunction:: ray.tune.qrandint

tune.choice
~~~~~~~~~~~

.. autofunction:: ray.tune.choice

tune.sample_from
~~~~~~~~~~~~~~~~

.. autofunction:: ray.tune.sample_from
Grid Search API
---------------

.. autofunction:: ray.tune.grid_search
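
For example, assigning ``tune.grid_search`` to a config key marks every listed value for evaluation. A minimal sketch (``my_trainable`` is again a placeholder):

.. code-block:: python

    # Launches one trial per listed value, i.e. 2 trials per sample.
    tune.run(
        my_trainable,
        config={"activation": tune.grid_search(["relu", "tanh"])})
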
Internals
---------

BasicVariantGenerator
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: ray.tune.suggest.BasicVariantGenerator
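
``BasicVariantGenerator`` is the searcher Tune uses by default when no ``search_alg`` is passed to ``tune.run``, so you rarely need to create it yourself. As a minimal sketch, passing it explicitly should be equivalent to the default behavior (``my_trainable`` is a placeholder for your own trainable):

.. code-block:: python

    from ray import tune
    from ray.tune.suggest import BasicVariantGenerator

    # Explicitly passing the default variant generator; equivalent to
    # leaving ``search_alg`` unset.
    tune.run(
        my_trainable,
        search_alg=BasicVariantGenerator(),
        config={"x": tune.uniform(0, 1)})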