
Making sure that tuning multiple trials in parallel is not significantly slower than training each individual trial. Some overhead is expected.

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
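For context, a minimal sketch of the behavior the commit message describes (not the actual release-test code): launch several trials with Ray Tune and check that the total wall time stays close to the time of a single trial plus some scheduling overhead. The trainable, sleep duration, sample count, and 1.5x threshold below are illustrative assumptions.

import time

import ray
from ray import tune


def trainable(config):
    # Stand-in workload; the real test trains an actual model per trial.
    time.sleep(config["sleep_s"])
    tune.report(done=1)


if __name__ == "__main__":
    ray.init()

    start = time.monotonic()
    # With at least 8 CPUs available, all 8 trials (1 CPU each by default)
    # should run concurrently.
    tune.run(trainable, config={"sleep_s": 30}, num_samples=8)
    parallel_duration = time.monotonic() - start

    # A single trial takes ~30 s; parallel tuning should not be significantly
    # slower than that, allowing for some expected overhead.
    assert parallel_duration < 30 * 1.5, (
        f"Parallel tuning took {parallel_duration:.1f}s, "
        "significantly slower than a single trial."
    )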
cloud_id: {{env["ANYSCALE_CLOUD_ID"]}}
region: us-west-2

max_workers: 7

head_node_type:
    name: head_node
    instance_type: m5.2xlarge

worker_node_types:
    - name: worker_node
      instance_type: m5.2xlarge
      max_workers: 7
      min_workers: 7
      use_spot: false
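As a quick sanity check (not part of the repository), a cluster built from the template above should have 1 head node and 7 workers, all m5.2xlarge instances with 8 vCPUs each, so Ray should report roughly 8 * 8 = 64 CPUs once all workers are up:

import ray

# Connect to the already-running cluster started from the compute template.
ray.init(address="auto")

resources = ray.cluster_resources()
print(resources)
assert resources.get("CPU", 0) >= 64, (
    "Expected 8 m5.2xlarge nodes (64 vCPUs total) to be available."
)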