ray/release/air_tests/air_benchmarks/app_config.yaml
Kai Fricke cf75cf7232
[air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436)
This PR adds a distributed benchmark test for PyTorch FashionMNIST training. It compares training with Ray AIR against training with vanilla PyTorch.

In both cases, the same training loop is used. For Ray AIR, we use a TorchTrainer with 4 CPU workers. For vanilla PyTorch, we upload a training script and kick it off (using Ray tasks) in subprocesses on each node. In both cases, we collect the end-to-end runtime.
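A minimal sketch of how the two paths can be wired up, not the benchmark code itself: it assumes the Ray AIR TorchTrainer/ScalingConfig API, and `train_loop_per_worker`, `train_fashion_mnist_vanilla.py`, and the task count of 4 are illustrative placeholders.

# Sketch only; names below are placeholders, not the actual benchmark script.
import subprocess

import ray
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config: dict):
    # The shared PyTorch FashionMNIST training loop would live here.
    ...


# Ray AIR path: run the shared loop on 4 CPU workers via TorchTrainer.
trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=False),
)
air_result = trainer.fit()


# Vanilla PyTorch path: launch the uploaded training script in a subprocess,
# using one Ray task per node to kick it off.
@ray.remote(num_cpus=1)
def run_vanilla_training(script_path: str) -> int:
    return subprocess.check_call(["python", script_path])


futures = [
    run_vanilla_training.remote("train_fashion_mnist_vanilla.py") for _ in range(4)
]
ray.get(futures)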

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-13 10:53:24 +01:00

YAML

base_image: {{ env["RAY_IMAGE_ML_NIGHTLY_GPU"] | default("anyscale/ray-ml:nightly-py37-gpu") }}
env_vars: {}
debian_packages:
  - curl

python:
  pip_packages:
    - pytest
  conda_packages: []

post_build_cmds:
  - pip3 uninstall ray -y || true && pip3 install -U {{ env["RAY_WHEELS"] | default("ray") }}
  - {{ env["RAY_WHEELS_SANITY_CHECK"] | default("echo No Ray wheels sanity check") }}