ray/release/air_tests/air_benchmarks/workloads
Latest commit 75027eb479 by xwjiang2010 (2022-07-19 18:24:39 +01:00)
[air/benchmarks] train/tune benchmark (#26564)
Makes sure that tuning multiple trials in parallel is not significantly slower than training each trial individually.
Some overhead is expected.

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>

Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
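
As a rough sketch of what this benchmark checks (not the actual script; that lives in tune_torch_benchmark.py below), one can time a single distributed training run against several parallel Tune trials of the same trainer. The sketch assumes the Ray AIR APIs of this era (TorchTrainer, ScalingConfig, Tuner); the trivial train_loop is a placeholder.

    import time

    from ray import tune
    from ray.air.config import ScalingConfig
    from ray.train.torch import TorchTrainer
    from ray.tune.tuner import Tuner

    # Placeholder training loop; the real benchmarks train Torch models
    # (see torch_benchmark.py / tune_torch_benchmark.py).
    def train_loop(config):
        for _ in range(100):
            pass  # model forward/backward steps would go here

    trainer = TorchTrainer(
        train_loop_per_worker=train_loop,
        scaling_config=ScalingConfig(num_workers=4),
    )

    # Baseline: a single training run on its own.
    start = time.monotonic()
    trainer.fit()
    train_time = time.monotonic() - start

    # Tune the same trainer as several parallel trials.
    start = time.monotonic()
    Tuner(trainer, tune_config=tune.TuneConfig(num_samples=8)).fit()
    tune_time = time.monotonic() - start

    # The benchmark's core assertion, roughly: parallel tuning should add
    # only bounded overhead on top of a single training run.
    print(f"train: {train_time:.1f}s, tune: {tune_time:.1f}s")
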
File                     Last commit                                                                    Date
_tensorflow_prepare.py   [air/benchmarks] Add distributed Tensorflow benchmarks (CPU only) (#26519)    2022-07-14 22:08:43 +01:00
_torch_prepare.py        [air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436)  2022-07-13 10:53:24 +01:00
benchmark_util.py        [air/benchmarks] train/tune benchmark (#26564)                                 2022-07-19 18:24:39 +01:00
data_benchmark.py        [air] Allow users to use instances of ScalingConfig (#25712)                   2022-07-18 15:46:58 -07:00
gpu_batch_prediction.py  [AIR] Update Torch benchmarks with documentation (#26631)                     2022-07-16 17:58:21 -07:00
pytorch_training_e2e.py  [air] Allow users to use instances of ScalingConfig (#25712)                  2022-07-18 15:46:58 -07:00
tensorflow_benchmark.py  [air/benchmark] Torch benchmarks for 4x4 (#26692)                             2022-07-19 17:06:37 +01:00
torch_benchmark.py       [air/benchmark] Torch benchmarks for 4x4 (#26692)                             2022-07-19 17:06:37 +01:00
tune_torch_benchmark.py  [air/benchmarks] train/tune benchmark (#26564)                                 2022-07-19 18:24:39 +01:00
xgboost_benchmark.py     [air] Allow users to use instances of ScalingConfig (#25712)                   2022-07-18 15:46:58 -07:00
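
The #25712 change noted on several rows above let these workloads pass ScalingConfig instances to their trainers instead of plain dicts. A minimal sketch of that pattern, assuming the Ray AIR XGBoostTrainer API of this period and a toy dataset invented here for illustration:

    import ray
    from ray.air.config import ScalingConfig
    from ray.train.xgboost import XGBoostTrainer

    # Toy dataset invented for illustration; the real xgboost_benchmark.py
    # runs against much larger data.
    train_ds = ray.data.from_items(
        [{"x": float(i), "y": i % 2} for i in range(1000)]
    )

    trainer = XGBoostTrainer(
        # A ScalingConfig instance rather than an equivalent dict (#25712).
        scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
        label_column="y",
        params={"objective": "binary:logistic"},
        datasets={"train": train_ds},
    )
    result = trainer.fit()
    print(result.metrics)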