ray/release/air_tests/air_benchmarks/compute_cpu_8.yaml
xwjiang2010 75027eb479
[air/benchmarks] train/tune benchmark (#26564)
Making sure that tuning multiple trials in parallel is not significantly slower than training each trial individually.
Some overhead is expected.

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>

Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-07-19 18:24:39 +01:00
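
The benchmark's intent, as described in the commit message above, can be sketched roughly as follows. This is a minimal illustration and not the actual release-test code: the sleep-based workload, the names train_once and trainable, and the trial count of 4 are all assumptions made for the example.

    import time

    import ray
    from ray import tune


    def train_once(train_time: float) -> None:
        # Stand-in for a real training loop.
        time.sleep(train_time)


    def trainable(config):
        # The same workload, wrapped as a Ray Tune trainable.
        train_once(config["train_time"])
        tune.report(done=1)


    ray.init()

    # Baseline: time a single plain training run.
    start = time.monotonic()
    train_once(5.0)
    plain_seconds = time.monotonic() - start

    # Time the same workload launched as 4 parallel Tune trials.
    start = time.monotonic()
    tune.run(trainable, config={"train_time": 5.0}, num_samples=4)
    tune_seconds = time.monotonic() - start

    # Some scheduling overhead is expected; the check is that the parallel
    # Tune run is not significantly slower than a single training run.
    print(f"plain: {plain_seconds:.1f}s, tune x4: {tune_seconds:.1f}s")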

# Anyscale cluster compute config: 1 head node plus 7 worker nodes,
# all m5.2xlarge (8 vCPUs each), for a fixed 8-node CPU cluster.
cloud_id: {{env["ANYSCALE_CLOUD_ID"]}}
region: us-west-2
max_workers: 7

head_node_type:
    name: head_node
    instance_type: m5.2xlarge

worker_node_types:
    - name: worker_node
      instance_type: m5.2xlarge
      max_workers: 7
      min_workers: 7   # min == max pins the worker count at exactly 7
      use_spot: false  # on-demand instances, for stable benchmark timings
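
Once a cluster has been launched from this config, a quick sanity check of the resulting shape could look like the hypothetical snippet below (assuming it runs on the head node and all 7 workers have joined):

    import ray

    ray.init(address="auto")
    # 1 head + 7 workers, each m5.2xlarge with 8 vCPUs -> 8 nodes, 64 CPUs.
    print(len(ray.nodes()))                    # expected: 8
    print(ray.cluster_resources().get("CPU"))  # expected: 64.0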