Mirror of https://github.com/vale981/ray
Latest commit: Making sure that tuning multiple trials in parallel is not significantly slower than training each individual trial. Some overhead is expected. (A sketch of this overhead check follows the file list below.)

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
Files:

- _tensorflow_prepare.py
- _torch_prepare.py
- benchmark_util.py
- data_benchmark.py
- gpu_batch_prediction.py
- pytorch_training_e2e.py
- tensorflow_benchmark.py
- torch_benchmark.py
- tune_torch_benchmark.py
- xgboost_benchmark.py
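The commit above describes comparing the wall time of many parallel Ray Tune trials against a single training run, with only modest overhead expected. The following is a minimal sketch of that idea, not the repository's actual benchmark code: the `train_fn` workload, the sleep-based stand-in training, and the choice of four samples are all assumptions for illustration, and it assumes a Ray version that provides the `tune.Tuner` API.

```python
# Minimal sketch: measure the overhead of running identical trials in parallel
# with Ray Tune versus running the training function once on its own.
import time

import ray
from ray import tune


def train_fn(config):
    # Stand-in workload; the real benchmark trains a Torch model instead.
    time.sleep(config.get("train_time_s", 5))
    return {"loss": 0.0}


if __name__ == "__main__":
    ray.init()

    # Baseline: one training run invoked directly.
    start = time.monotonic()
    train_fn({"train_time_s": 5})
    single_s = time.monotonic() - start

    # Several identical trials scheduled in parallel by Ray Tune.
    start = time.monotonic()
    tune.Tuner(
        train_fn,
        param_space={"train_time_s": 5},
        tune_config=tune.TuneConfig(num_samples=4),
    ).fit()
    parallel_s = time.monotonic() - start

    # With enough CPUs for all trials, parallel_s should stay close to
    # single_s; the gap is the Tune scheduling overhead being checked.
    print(f"single run: {single_s:.1f}s, 4 parallel trials: {parallel_s:.1f}s")
```

On a machine with at least four free CPUs, the parallel run should finish in roughly the same wall time as the single run plus a small, bounded scheduling overhead, which is the property the benchmark verifies.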