[docs] change MLFlow to MLflow in docs (#13739)

commit 28cf5f91e3 (parent 25fa391193)
authored by architkulkarni on 2021-01-27 16:53:15 -08:00, committed by GitHub
4 changed files with 7 additions and 7 deletions


@@ -71,9 +71,9 @@ Take a look at any of the below tutorials to get started with Tune.
:description: :doc:`Track your experiment process with the Weights & Biases tools <tune-wandb>`
.. customgalleryitem::
-:tooltip: Use MLFlow with Ray Tune.
+:tooltip: Use MLflow with Ray Tune.
:figure: /images/mlflow.png
-:description: :doc:`Log and track your hyperparameter sweep with MLFlow Tracking & AutoLogging <tune-mlflow>`
+:description: :doc:`Log and track your hyperparameter sweep with MLflow Tracking & AutoLogging <tune-mlflow>`
.. raw:: html


@@ -162,7 +162,7 @@ CSVLogger
MLFlowLogger
------------
-Tune also provides a default logger for `MLFlow <https://mlflow.org>`_. You can install MLFlow via ``pip install mlflow``.
+Tune also provides a default logger for `MLflow <https://mlflow.org>`_. You can install MLflow via ``pip install mlflow``.
You can see the :doc:`tutorial here </tune/tutorials/tune-mlflow>`.
WandbLogger
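As a companion to the MLFlowLogger note above, here is a minimal, hedged sketch of logging a Tune sweep to MLflow by calling the plain ``mlflow`` client inside a trainable. It is not part of this commit, and the names (``trainable``, ``lr``, ``score``) are illustrative rather than taken from the Ray docs; the supported integration is the one described in the linked tune-mlflow tutorial, and its exact class name varies by Ray version.

.. code-block:: python

    # Sketch: one MLflow run per Tune trial, logging params and a metric.
    # Only the generic mlflow client API is used; no Ray-version-specific
    # logger class is assumed here.
    import mlflow
    from ray import tune

    def trainable(config):
        with mlflow.start_run():
            mlflow.log_params(config)
            for step in range(10):
                score = config["lr"] * step  # stand-in for a real training metric
                mlflow.log_metric("score", score, step=step)
                tune.report(score=score)

    tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1])})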


@@ -82,13 +82,13 @@ Pytorch Lightning
- :doc:`/tune/examples/mnist_pytorch_lightning`: A comprehensive example using `Pytorch Lightning <https://github.com/PyTorchLightning/pytorch-lightning>`_ to train a MNIST model. This example showcases how to use various search optimization techniques. It utilizes the Ray Tune-provided :ref:`PyTorch Lightning callbacks <tune-integration-pytorch-lightning>`.
- :ref:`A walkthrough tutorial for using Ray Tune with Pytorch-Lightning <tune-pytorch-lightning>`.
-Wandb, MLFlow
+Wandb, MLflow
~~~~~~~~~~~~~
- :ref:`Tutorial <tune-wandb>` for using `wandb <https://www.wandb.ai/>`__ with Ray Tune
- :doc:`/tune/examples/wandb_example`: Example for using `Weights and Biases <https://www.wandb.ai/>`__ with Ray Tune.
-- :doc:`/tune/examples/mlflow_example`: Example for using `MLFlow <https://github.com/mlflow/mlflow/>`__ with Ray Tune.
-- :doc:`/tune/examples/mlflow_ptl_example`: Example for using `MLFlow <https://github.com/mlflow/mlflow/>`__ and `Pytorch Lightning <https://github.com/PyTorchLightning/pytorch-lightning>`_ with Ray Tune.
+- :doc:`/tune/examples/mlflow_example`: Example for using `MLflow <https://github.com/mlflow/mlflow/>`__ with Ray Tune.
+- :doc:`/tune/examples/mlflow_ptl_example`: Example for using `MLflow <https://github.com/mlflow/mlflow/>`__ and `Pytorch Lightning <https://github.com/PyTorchLightning/pytorch-lightning>`_ with Ray Tune.
Tensorflow/Keras
~~~~~~~~~~~~~~~~


@@ -73,7 +73,7 @@ A key problem with machine learning frameworks is the need to restructure all of
With Tune, you can optimize your model just by :ref:`adding a few code snippets <tune-tutorial>`.
-Further, Tune actually removes boilerplate from your code training workflow, automatically :ref:`managing checkpoints <tune-checkpoint>` and :ref:`logging results to tools <tune-logging>` such as MLFlow and TensorBoard.
+Further, Tune actually removes boilerplate from your code training workflow, automatically :ref:`managing checkpoints <tune-checkpoint>` and :ref:`logging results to tools <tune-logging>` such as MLflow and TensorBoard.
Multi-GPU & distributed training out of the box
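To make the "few code snippets" claim above concrete, here is a hedged sketch of a basic Tune run; it is not part of this commit, and the names (``train_fn``, ``lr``, ``loss``) are illustrative. Metrics passed to ``tune.report`` flow through Tune's loggers, which is how TensorBoard (and, with the MLflow logger enabled, MLflow) output is produced without extra bookkeeping in the training code.

.. code-block:: python

    # Sketch: wrap a training loop in a function, report metrics, and let
    # tune.run handle search, trial bookkeeping, and result logging.
    from ray import tune

    def train_fn(config):
        for step in range(5):
            # Stand-in loss; a real trainable would run a training step here.
            loss = (config["lr"] - 0.05) ** 2 + 1.0 / (step + 1)
            tune.report(loss=loss)

    analysis = tune.run(
        train_fn,
        config={"lr": tune.uniform(0.001, 0.1)},
        num_samples=4,
    )
    print(analysis.get_best_config(metric="loss", mode="min"))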