Mirror of https://github.com/vale981/ray, synced 2025-03-14 15:16:38 -04:00
Making sure that tuning multiple trials in parallel is not significantly slower than training each trial individually. Some overhead is expected.

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
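The actual test lives in the `workloads` scripts below; as a rough illustration only, the sketch here shows the kind of comparison the commit message describes, using Ray Tune's classic `tune.run` API. The trainable, config values, sample count, and the 1.5x overhead threshold are assumptions for illustration, not the repository's real workload.

```python
# Minimal sketch (assumed, not the repo's workload): time a single trial
# versus many trials run in parallel and check the overhead stays modest.
import time

import ray
from ray import tune


def train_fn(config):
    # Stand-in for a real training loop; sleeps to simulate fixed work.
    for step in range(10):
        time.sleep(0.5)
        tune.report(step=step, lr=config["lr"])


ray.init()

# Time a single trial.
start = time.time()
tune.run(train_fn, config={"lr": 0.01}, num_samples=1)
single_trial_time = time.time() - start

# Time several trials launched in parallel, one CPU each.
start = time.time()
tune.run(
    train_fn,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=8,
    resources_per_trial={"cpu": 1},
)
parallel_time = time.time() - start

# With enough CPUs free, the parallel run should take only slightly longer
# than the single trial; the 1.5x factor is an illustrative threshold.
print(f"single: {single_trial_time:.1f}s, parallel: {parallel_time:.1f}s")
assert parallel_time < single_trial_time * 1.5, "parallel tuning overhead too high"
```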
Directory contents:

- workloads
- app_config.yaml
- compute_cpu_1.yaml
- compute_cpu_4.yaml
- compute_cpu_8.yaml
- compute_gpu_1.yaml
- compute_gpu_2x2.yaml
- compute_gpu_4_g4_12xl.yaml
- compute_gpu_4x4.yaml
- compute_gpu_16.yaml
- data_20_nodes.yaml
- xgboost_app_config.yaml
- xgboost_compute_tpl.yaml