[docs] Edit survey links (#6777)
This commit is contained in:
parent 92380dd4e6
commit a3a268435f
6 changed files with 8 additions and 6 deletions
@@ -7,6 +7,8 @@ RaySGD: Distributed Deep Learning

RaySGD is a lightweight library for distributed deep learning, providing thin wrappers around framework-native modules for data-parallel training.

.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!

The main features are:

- Ease of use: Scale PyTorch's native ``DistributedDataParallel`` and TensorFlow's ``tf.distribute.MirroredStrategy`` without needing to monitor individual nodes.
@@ -3,6 +3,8 @@ RaySGD PyTorch

.. warning:: This is still an experimental API and is subject to change in the near future.

.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!

Ray's ``PyTorchTrainer`` simplifies distributed model training for PyTorch. The ``PyTorchTrainer`` is a wrapper around ``torch.distributed.launch`` with a Python API, making it easy to incorporate distributed training into a larger Python application rather than having to execute training outside of Python.

----------
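For orientation, here is a minimal sketch of the creator-function pattern this trainer used at the time. The import path, constructor arguments, and the toy model and data are assumptions for illustration based on this era of RaySGD, not taken from the diff itself:

.. code-block:: python

    import torch
    import torch.nn as nn
    import ray
    # Import path assumed for this era of RaySGD; it moved in later releases.
    from ray.experimental.sgd.pytorch import PyTorchTrainer

    def model_creator(config):
        # Build the model on each worker process.
        return nn.Linear(1, 1)

    def optimizer_creator(model, config):
        # Build the optimizer for a given model replica.
        return torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-2))

    def data_creator(config):
        # Return (train_dataset, validation_dataset); toy data for illustration.
        x = torch.randn(1000, 1)
        y = 2 * x + 1
        dataset = torch.utils.data.TensorDataset(x, y)
        return dataset, dataset

    ray.init()
    trainer = PyTorchTrainer(
        model_creator,
        data_creator,
        optimizer_creator,
        loss_creator=nn.MSELoss,
        num_replicas=2)  # one data-parallel replica per worker (argument name assumed)
    stats = trainer.train()  # runs one epoch across all replicas
    trainer.shutdown()

The appeal of this pattern is that the same script scales from a laptop to a cluster by changing the replica count, with no per-node setup to monitor.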
@@ -1,6 +1,10 @@

RaySGD TensorFlow
=================

.. warning:: This is still an experimental API and is subject to change in the near future.

.. tip:: Help us make RaySGD better; take this 1-minute `User Survey <https://forms.gle/26EMwdahdgm7Lscy9>`_!

RaySGD's ``TFTrainer`` simplifies distributed model training for TensorFlow. The ``TFTrainer`` is a wrapper around ``MultiWorkerMirroredStrategy`` with a Python API, making it easy to incorporate distributed training into a larger Python application rather than writing custom logic to set up environments and start separate processes.

.. important:: This API has only been tested with TensorFlow 2.0rc and is still highly experimental. Please file bug reports if you run into any issues - thanks!
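As a rough sketch of the same pattern for TensorFlow, under the assumption that ``TFTrainer`` takes ``model_creator``/``data_creator`` functions like its PyTorch counterpart (the import path and argument names here are guesses for this era, not taken from the diff):

.. code-block:: python

    import tensorflow as tf
    import ray
    # Import path assumed for this era of RaySGD.
    from ray.experimental.sgd.tf import TFTrainer

    def model_creator(config):
        # Return a compiled tf.keras model; the compilation settings are
        # replicated on every worker under MultiWorkerMirroredStrategy.
        model = tf.keras.Sequential(
            [tf.keras.layers.Dense(1, input_shape=(1,))])
        model.compile(optimizer="sgd", loss="mse")
        return model

    def data_creator(config):
        # Return (train_dataset, test_dataset) as tf.data.Dataset objects.
        x = tf.random.normal((1000, 1))
        y = 2 * x + 1
        dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
        return dataset, dataset

    ray.init()
    trainer = TFTrainer(model_creator, data_creator, num_replicas=2)
    stats = trainer.train()  # one round of fit() across the workers
    trainer.shutdown()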
@@ -1,8 +1,6 @@

Tune Walkthrough
================

.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!

This tutorial walks you through the process of setting up a Tune experiment. Specifically, we'll leverage ASHA and Bayesian Optimization (via HyperOpt) through the following steps:

1. Integrating Tune into your workflow
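The walkthrough pairs ASHA (aggressive early stopping of weak trials) with HyperOpt (Bayesian optimization over the search space). A minimal sketch of how the two plug into ``tune.run`` follows; the module paths and reporting call reflect this era's Tune API and are assumptions that have since changed:

.. code-block:: python

    from hyperopt import hp
    from ray import tune
    from ray.tune.schedulers import ASHAScheduler
    # Module path assumed for this era of Tune.
    from ray.tune.suggest.hyperopt import HyperOptSearch

    def trainable(config):
        # Toy objective: the loss improves with a good learning rate and over time.
        for step in range(100):
            loss = (config["lr"] - 0.1) ** 2 + 0.01 / (step + 1)
            tune.track.log(mean_loss=loss)  # era-specific reporting API (assumed)

    space = {"lr": hp.uniform("lr", 0.001, 1.0)}
    search = HyperOptSearch(space, metric="mean_loss", mode="min")  # Bayesian optimization
    scheduler = ASHAScheduler(metric="mean_loss", mode="min")  # early-stops weak trials

    tune.run(
        trainable,
        num_samples=20,
        search_alg=search,
        scheduler=scheduler)

The search algorithm proposes each trial's hyperparameters, while the scheduler independently decides which running trials to stop early, so the two compose without extra glue code.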
@@ -1,8 +1,6 @@

Tune User Guide
===============

.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!

Tune Overview
-------------
@@ -1,8 +1,6 @@

Tune: A Scalable Hyperparameter Tuning Library
==============================================

.. tip:: Help make Tune better by taking our 3-minute `Ray Tune User Survey <https://forms.gle/7u5eH1avbTfpZ3dE6>`_!

.. image:: images/tune.png
    :scale: 30%
    :align: center