ray/release/air_tests/air_benchmarks
Latest commit: d527c7b335 by Kai Fricke
[air/benchmarks] Drop OMP_NUM_THREADS in vanilla torch/tf training (#27256)
Ray automatically sets OMP_NUM_THREADS=1, which can limit multithreading in native PyTorch/TensorFlow. If this leads to performance differences, we should address it either in Ray Train or in Ray Core.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-08-02 13:38:01 +01:00
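
For context, a minimal sketch of what dropping the variable can look like when a benchmark spawns a vanilla (non-Ray) training process; the script name below is hypothetical and stands in for the benchmark's actual entry point:

```python
import os
import subprocess
import sys

# Ray sets OMP_NUM_THREADS=1 in the processes it launches, so a vanilla
# torch/tf baseline started from such an environment inherits that limit.
# Drop the variable so native PyTorch/TensorFlow can size its own thread pool.
env = os.environ.copy()
env.pop("OMP_NUM_THREADS", None)

# "train_vanilla_torch.py" is a hypothetical script name for illustration.
subprocess.check_call([sys.executable, "train_vanilla_torch.py"], env=env)
```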
Name                        Last commit                                                                   Date
workloads                   [air/benchmarks] Drop OMP_NUM_THREADS in vanilla torch/tf training (#27256)  2022-08-02 13:38:01 +01:00
app_config.yaml             [air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436) 2022-07-13 10:53:24 +01:00
compute_cpu_1.yaml          [air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436) 2022-07-13 10:53:24 +01:00
compute_cpu_4.yaml          [air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436) 2022-07-13 10:53:24 +01:00
compute_cpu_8.yaml          [air/benchmarks] train/tune benchmark (#26564)                                2022-07-19 18:24:39 +01:00
compute_gpu_1.yaml          [AIR][CUJ] Make distributing training benchmark at silver tier (#26640)      2022-07-17 22:07:09 -07:00
compute_gpu_2x2.yaml        [air/benchmark] Torch benchmarks for 4x4 (#26692)                             2022-07-19 17:06:37 +01:00
compute_gpu_4_g4_12xl.yaml  [air/benchmarks] train/tune benchmark (#26564)                                2022-07-19 18:24:39 +01:00
compute_gpu_4x4.yaml        [air/benchmark] Torch benchmarks for 4x4 (#26692)                             2022-07-19 17:06:37 +01:00
compute_gpu_8_g4_12xl.yaml  [air] large tune/torch benchmark (#26763)                                     2022-07-23 01:17:25 -07:00
compute_gpu_16.yaml         [AIR][CUJ] Make distributing training benchmark at silver tier (#26640)      2022-07-17 22:07:09 -07:00
data_20_nodes.yaml          [air] add bulk ingest benchmarks (#26618)                                     2022-07-15 22:01:23 -07:00
xgboost_app_config.yaml     [air] Add xgboost release test for silver tier(10-node case). (#26460)        2022-07-15 13:21:10 -07:00
xgboost_compute_tpl.yaml    [air] Add xgboost release test for silver tier(10-node case). (#26460)        2022-07-15 13:21:10 -07:00