[docs] Edit survey links (#6777)

Richard Liaw, 2020-01-17 11:52:04 -08:00, committed by GitHub
parent 92380dd4e6
commit a3a268435f
6 changed files with 8 additions and 6 deletions


@@ -7,6 +7,8 @@ RaySGD: Distributed Deep Learning
RaySGD is a lightweight library for distributed deep learning, providing thin wrappers around framework-native modules for data parallel training.
.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!
The main features are:
- Ease of use: Scale PyTorch's native ``DistributedDataParallel`` and TensorFlow's ``tf.distribute.MirroredStrategy`` without needing to monitor individual nodes.


@@ -3,6 +3,8 @@ RaySGD Pytorch
.. warning:: This is still an experimental API and is subject to change in the near future.
.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!
Ray's ``PyTorchTrainer`` simplifies distributed model training for PyTorch. The ``PyTorchTrainer`` is a wrapper around ``torch.distributed.launch`` with a Python API to easily incorporate distributed training into a larger Python application, as opposed to needing to execute training outside of Python.
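For orientation, here is a minimal sketch of the creator-function pattern the trainer uses. The import path and argument names are assumptions for this release (RaySGD moved between ``ray.experimental.sgd`` and ``ray.util.sgd`` across versions):

.. code-block:: python

    import ray
    import torch
    import torch.nn as nn
    from torch.utils.data import TensorDataset
    from ray.experimental.sgd import PyTorchTrainer  # assumed import path

    def model_creator(config):
        return nn.Linear(1, 1)

    def data_creator(config):
        # Tiny synthetic y = 2x problem; returns (train, validation) datasets.
        x = torch.randn(1000, 1)
        dataset = TensorDataset(x, 2 * x)
        return dataset, dataset

    def optimizer_creator(model, config):
        return torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-2))

    ray.init()
    trainer = PyTorchTrainer(
        model_creator,
        data_creator,
        optimizer_creator,
        loss_creator=nn.MSELoss,
        num_replicas=2)
    print(trainer.train())  # one pass over the training data across replicas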
----------


@@ -1,6 +1,10 @@
RaySGD TensorFlow
=================
.. warning:: This is still an experimental API and is subject to change in the near future.
.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!
RaySGD's ``TFTrainer`` simplifies distributed model training for TensorFlow. The ``TFTrainer`` is a wrapper around ``MultiWorkerMirroredStrategy`` with a Python API to easily incorporate distributed training into a larger Python application, as opposed to writing custom logic for setting up environments and starting separate processes.
.. important:: This API has only been tested with TensorFlow 2.0rc and is still highly experimental. Please file bug reports if you run into any issues - thanks!
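As a quick orientation, here is a minimal sketch of the same creator-function pattern for TensorFlow; the import path and the ``num_replicas`` argument are assumptions for this release:

.. code-block:: python

    import ray
    import tensorflow as tf
    from ray.experimental.sgd.tf import TFTrainer  # assumed import path

    def model_creator(config):
        # A compiled Keras model; compiling inside the creator lets it run
        # under the distribution strategy on each worker.
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
        model.compile(optimizer="sgd", loss="mse")
        return model

    def data_creator(config):
        # Tiny synthetic y = 2x problem; returns (train, test) tf.data datasets.
        x = tf.random.normal((1000, 1))
        dataset = tf.data.Dataset.from_tensor_slices((x, 2 * x)).batch(32)
        return dataset, dataset

    ray.init()
    trainer = TFTrainer(model_creator, data_creator, num_replicas=2)
    print(trainer.train())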


@@ -1,8 +1,6 @@
Tune Walkthrough
================
.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!
This tutorial will walk you through the process of setting up a Tune experiment. Specifically, we'll leverage ASHA and Bayesian Optimization (via HyperOpt) through the following steps (a minimal sketch appears after the list):
1. Integrating Tune into your workflow
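Here is the promised minimal sketch combining ASHA (for early stopping) with HyperOpt (for Bayesian Optimization). The module paths and the ``tune.track.log`` reporting call are assumptions tied to this Ray release and have changed in later versions:

.. code-block:: python

    from hyperopt import hp
    from ray import tune
    from ray.tune.schedulers import ASHAScheduler
    from ray.tune.suggest.hyperopt import HyperOptSearch

    def train_fn(config):
        # Toy objective: loss decays over steps; the sampled lr shifts the optimum.
        for step in range(1, 101):
            loss = (0.1 - config["lr"]) ** 2 + 1.0 / step
            tune.track.log(mean_loss=loss)

    space = {"lr": hp.uniform("lr", 0.001, 0.1)}
    algo = HyperOptSearch(space, max_concurrent=2, metric="mean_loss", mode="min")
    sched = ASHAScheduler(metric="mean_loss", mode="min")  # stops weak trials early
    tune.run(train_fn, num_samples=10, search_alg=algo, scheduler=sched)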


@@ -1,8 +1,6 @@
Tune User Guide
===============
.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!
Tune Overview
-------------


@@ -1,8 +1,6 @@
Tune: A Scalable Hyperparameter Tuning Library
==============================================
.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!
.. image:: images/tune.png
:scale: 30%
:align: center