Commit graph

35 commits

Kai Fricke
b91246a093
[air/benchmarks] Measure local training time in torch/tf benchmarks (#27902)
We currently measure end-to-end training time in our benchmarks, which includes setup overhead. This makes for an unequal comparison, as the setup overhead for vanilla training cannot be measured accurately and was instead simply disregarded.
By comparing the raw training times in the actual training loop, we get a more accurate picture of any potential overhead or benefit of using Ray vs. vanilla TensorFlow/Torch.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-08-16 19:16:08 +02:00
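To illustrate what the change measures, here is a minimal sketch (generic PyTorch; the loop and names are hypothetical, not the benchmark's actual code):

```python
import time

import torch
import torch.nn as nn


def train_and_time(model: nn.Module, loader, epochs: int = 3) -> float:
    """Return the *local* training time: only the training loop is timed."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    start = time.monotonic()  # start *after* setup (workers, data, model)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return time.monotonic() - start  # setup/teardown overhead excluded
```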
Kai Fricke
d527c7b335
[air/benchmarks] Drop OMP_NUM_THREADS in vanilla torch/tf training (#27256)
Ray automatically sets OMP_NUM_THREADS=1, potentially limiting multithreading in native PyTorch/TensorFlow. If this leads to performance differences, we should address them either in Ray Train or in Ray Core.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-08-02 13:38:01 +01:00
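For context, "dropping" the variable for the vanilla comparison looks roughly like this (a hedged sketch; the timing of the `pop` relative to the import is the important part):

```python
import os

# OMP_NUM_THREADS is read when the OpenMP runtime initializes, so it must
# be removed *before* torch/tensorflow are imported.
os.environ.pop("OMP_NUM_THREADS", None)

import torch  # noqa: E402

# Without the limit, torch defaults to the machine's available cores.
print(torch.get_num_threads())
```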
xwjiang2010
c9579fea1c
[air] update pytorch_training_e2e.py to use iter_torch_batches. (#27241)

Signed-off-by: xwjiang2010 <xwjiang2010@gmail.com>
2022-08-01 19:23:01 +01:00
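For reference, the API the script switches to looks roughly like this (a sketch assuming dict-of-tensors batches, not the actual benchmark code):

```python
import ray
import torch

ds = ray.data.from_items([{"x": float(i), "y": 2.0 * i} for i in range(8)])

# iter_torch_batches yields batches as column-name -> torch.Tensor dicts.
for batch in ds.iter_torch_batches(batch_size=4):
    assert isinstance(batch["x"], torch.Tensor)
    prediction = batch["x"] * 2  # stand-in for a model forward pass
```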
Dmitri Gekhtman
8bdeb30510
[docs][ml][kuberay] Add a --disable-check flag to the XGBoost benchmark. (#27277)
This PR adds a `--disable-check` flag to the XGBoost benchmark script, which disables the RuntimeError raised when training or prediction takes too long. This is meant for non-CI, exploratory use cases.

Specifically, the reason is this:
We will include the XGBoost benchmark as an example workload for the KubeRay documentation.
The actual performance of the workload is highly sensitive to the infrastructure environment, so we don't want to raise an alarming RuntimeError if the workload takes too long on the user's infrastructure.
(When I tried the 100 GB benchmark on KubeRay, training ran just a couple of minutes longer than the 1000-second cutoff.)
2022-07-29 14:31:10 -07:00
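The pattern the flag enables is roughly the following (a sketch, not the actual benchmark script; `run_training` is a hypothetical helper):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--disable-check",
    action="store_true",
    help="Don't raise if training/prediction exceed the time cutoff.",
)
args = parser.parse_args()

training_time = run_training()  # hypothetical helper returning seconds
if not args.disable_check and training_time > 1000:
    raise RuntimeError(f"Training took too long: {training_time:.0f}s > 1000s")
```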
Clark Zinzow
3730ec8cc9
[AIR - Datasets] Fix AIR release tests dealing with tensor columns. (#27221)
This PR fixes some AIR release tests that deal with tensor columns.
2022-07-28 14:34:11 -07:00
matthewdeng
0319dcd889
[air] fix xgboost_benchmark script by passing in args (#27146) 2022-07-27 19:08:15 -07:00
Amog Kamsetty
862d10c162
[AIR] Remove ML code from ray.util (#27005)
Removes all ML-related code from `ray.util`

Removes:
- `ray.util.xgboost`
- `ray.util.lightgbm`
- `ray.util.horovod`
- `ray.util.ray_lightning`

Moves `ray.util.ml_utils` to other locations

Closes #23900

Signed-off-by: Amog Kamsetty <amogkamsetty@yahoo.com>
Signed-off-by: Kai Fricke <kai@anyscale.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-07-27 14:24:19 +01:00
xwjiang2010
4c30325172
[air] update xgboost test (catch test failures properly). (#27023)
- Update xgboost test (catch test failures properly)
- Remove `path` from `from_model` for XGBoostCheckpoint and LightGBMCheckpoint.

Signed-off-by: xwjiang2010 <xwjiang2010@gmail.com>
2022-07-27 12:18:51 +01:00
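For context, `from_model` after this change builds a checkpoint directly from the in-memory model, with no `path` argument (a sketch; the module path is assumed from the Ray 2.0-era layout):

```python
import xgboost

from ray.train.xgboost import XGBoostCheckpoint  # location assumed

booster = xgboost.Booster()  # stand-in for a trained model
# Previously from_model also required a `path`; now none is needed.
checkpoint = XGBoostCheckpoint.from_model(booster)
```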
Balaji Veeramani
89f7f2a567
[Datasets] Add size parameter to ImageFolderDatasource (#26975)
If you read a folder with differently sized images, `ImageFolderDatasource` raises an error. This PR fixes the issue by resizing images to a user-specified size.
2022-07-26 14:57:38 -07:00
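A hedged sketch of the new parameter (the exact call shape and import path are assumed from the Ray 2.0-era Datasets API):

```python
import ray
from ray.data.datasource import ImageFolderDatasource  # location assumed

# All images are resized to the given (height, width), so differently
# sized source images no longer cause an error.
ds = ray.data.read_datasource(
    ImageFolderDatasource(), root="/data/images", size=(64, 64)
)
```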
matthewdeng
1bb7651e95
[air] add smoke-test flag to tensorflow_benchmark (#26999)
Increase ratio from 1.15 to 1.2

Signed-off-by: Matthew Deng <matt@anyscale.com>
2022-07-26 15:47:37 +01:00
Richard Liaw
96e8027c7e
[air] large tune/torch benchmark (#26763)
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-07-23 01:17:25 -07:00
Balaji Veeramani
ac1d21027d
[AIR] Add framework-specific checkpoints (#26777) 2022-07-20 19:33:27 -07:00
Kai Fricke
2e35d47bd2
[air/train/benchmark] Add TF GPU 4x4 benchmark (#26776) 2022-07-20 14:07:51 -07:00
matthewdeng
2a425b195c
[air] change default strategy to PACK (#26757) 2022-07-19 23:01:24 -07:00
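For reference, the strategy can still be set explicitly on a `ScalingConfig` (a sketch; `placement_strategy` per the Ray AIR config of that era):

```python
from ray.air.config import ScalingConfig

# PACK is now the default; override it if workers should be spread
# across nodes rather than packed onto as few nodes as possible.
scaling_config = ScalingConfig(num_workers=4, placement_strategy="SPREAD")
```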
xwjiang2010
75027eb479
[air/benchmarks] train/tune benchmark (#26564)
Making sure that tuning multiple trials in parallel is not significantly slower than training each trial individually.
Some overhead is expected.

Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
Signed-off-by: Kai Fricke <kai@anyscale.com>

Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-07-19 18:24:39 +01:00
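A minimal sketch of the shape of such a benchmark (assuming the Ray AIR `Tuner` API of that era; the no-op training loop is a placeholder):

```python
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer
from ray.tune import TuneConfig, Tuner

trainer = TorchTrainer(
    train_loop_per_worker=lambda: None,  # placeholder training loop
    scaling_config=ScalingConfig(num_workers=2),
)
# Run several copies of the same training as parallel trials and compare
# the wall-clock time against a single, standalone training run.
tuner = Tuner(trainer, tune_config=TuneConfig(num_samples=4))
results = tuner.fit()
```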
Richard Liaw
7e62e1187c
[air/benchmark] Torch benchmarks for 4x4 (#26692)
Add benchmark data for a 4x4 GPU setup.

Signed-off-by: Richard Liaw <rliaw@berkeley.edu>

Co-authored-by: Jimmy Yao <jiahaoyao.math@gmail.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-07-19 17:06:37 +01:00
Sumanth Ratna
759966781f
[air] Allow users to use instances of ScalingConfig (#25712)
Co-authored-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-07-18 15:46:58 -07:00
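The change in usage, sketched (previously a plain dict was passed; `my_train_loop` is hypothetical):

```python
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def my_train_loop():  # hypothetical training loop
    ...


# Before: scaling_config={"num_workers": 4, "use_gpu": True}
trainer = TorchTrainer(
    train_loop_per_worker=my_train_loop,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
```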
Kai Fricke
00947fd949
[air/benchmarks] Add 4x1 GPU benchmark for Torch (#26562) 2022-07-18 12:14:10 -07:00
matthewdeng
6670708010
[air] add placement group max CPU to data benchmark (#26649)
Set experimental `_max_cpu_fraction_per_node` to prevent deadlock.

This should technically be a no-op with the SPREAD strategy.
2022-07-18 10:34:40 -07:00
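A hedged sketch of the setting (assuming it is passed through `ScalingConfig`, as in the Ray AIR config of that era):

```python
from ray.air.config import ScalingConfig

# Leave 20% of each node's CPUs outside the trainer's placement group so
# that Dataset tasks can still be scheduled, preventing the deadlock.
scaling_config = ScalingConfig(num_workers=4, _max_cpu_fraction_per_node=0.8)
```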
Jiao
98a07920d3
[AIR][CUJ] Make distributed training benchmark at silver tier (#26640) 2022-07-17 22:07:09 -07:00
Jiao
77e2ef2eb6
[AIR] Update Torch benchmarks with documentation (#26631)
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-16 17:58:21 -07:00
Eric Liang
0855bcb77e
[air] Use SPREAD strategy by default and don't special case it in benchmarks (#26633) 2022-07-16 17:37:06 -07:00
Jiao
196e52ad7c
[AIR][CUJ] E2E Pytorch training (#26621) 2022-07-16 08:23:19 -07:00
Jiao
988ffd494b
[AIR][CUJ] Add GPU bench prediction benchmark (#26614) 2022-07-16 08:22:37 -07:00
matthewdeng
e3a096f412
[air] add bulk ingest benchmarks (#26618) 2022-07-15 22:01:23 -07:00
Richard Liaw
5ad4e75831
[air] Add initial benchmark section (#26608) 2022-07-15 15:33:48 -07:00
xwjiang2010
a241e6a0f5
[air] Add xgboost release test for silver tier (10-node case). (#26460)
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2022-07-15 13:21:10 -07:00
Kai Fricke
213a96e239
[air/benchmarks] Add distributed Tensorflow benchmarks (CPU only) (#26519)
Following up from #26436, this PR adds a distributed benchmark test for TensorFlow FashionMNIST training. It compares training with Ray AIR against training with vanilla TensorFlow.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-14 22:08:43 +01:00
Kai Fricke
cf75cf7232
[air] Add AIR distributed training benchmark for Torch FashionMNIST (#26436)
This PR adds a distributed benchmark test for PyTorch FashionMNIST training. It compares training with Ray AIR against training with vanilla PyTorch.

In both cases, the same training loop is used. For Ray AIR, we use a TorchTrainer with 4 CPU workers. For vanilla PyTorch, we upload a training script and kick it off (using Ray tasks) in subprocesses on each node. In both cases, we collect the end-to-end runtime.

Signed-off-by: Kai Fricke <kai@anyscale.com>
2022-07-13 10:53:24 +01:00
Antoni Baum
ea94cda1f3
[AIR] Replace train. with session. (#26303)
This PR replaces legacy `train.` API calls with AIR `session.` calls in Train code, examples, and docs.

Depends on https://github.com/ray-project/ray/pull/25735
2022-07-07 16:29:04 -07:00
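The replacement, in a nutshell (a sketch of the two APIs):

```python
from ray.air import session


def train_loop_per_worker():
    # Legacy Ray Train API:
    #   from ray import train
    #   train.report(loss=0.1)
    # AIR session API:
    session.report({"loss": 0.1})
```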
Amog Kamsetty
1316a2d05e
[AIR/Train] Move ray.air.train to ray.train (#25570) 2022-06-08 21:34:18 -07:00
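The import change, sketched (the old path is assumed from the pre-move layout):

```python
# Before the move (path assumed from the old ray.air layout):
#   from ray.air.train.integrations.torch import TorchTrainer
# After the move:
from ray.train.torch import TorchTrainer
```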
Kai Fricke
4b9a89ad90
[air] Move python/ray/ml to python/ray/air (#25449)
This renames the package `ml` to `air`.

Main question: keep an `ml.py` with `from ray.air import *` for some level of backwards compatibility?
I'd go for no, to force people to use the new structure.
2022-06-03 21:53:44 +01:00
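What the rename means for imports (a sketch; `Checkpoint` is used as a representative symbol):

```python
# Old package layout:
#   from ray.ml.checkpoint import Checkpoint
# New layout (no ml.py alias was kept):
from ray.air.checkpoint import Checkpoint
```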
SangBin Cho
ec653e3196
[Nightly test] Move two line downloads to one line. (#25061)
This fixes the mysterious error where all cluster env builds fail when `pip uninstall` / `pip install` are written on two lines. The root cause will be fixed later.
2022-05-22 00:07:03 -07:00
Kai Fricke
6c5229295e
[ci/release] Support running tests with different python versions (#24843)
OSS release tests currently run on a hardcoded Python 3.7 base. In the future we will want to run tests on different Python versions.
This PR adds support for a new `python` field in the test configuration. The `python` field determines both the base image used in the Buildkite runner Docker container (for Ray Client compatibility) and the base image for the Anyscale cluster environments.

Note that in Buildkite, we will still only wait for the Python 3.7 base image before kicking off tests. That is acceptable, as we can assume that most wheels finish in a similar time, so even if we wait for the 3.7 image and kick off a 3.8 test, that runner will wait maybe 5-10 more minutes.
2022-05-17 17:03:12 +01:00
Amog Kamsetty
a36e2a8f51
[Tune] Deprecate DistributedTrainableCreator (#24453)
Fully deprecate DistributedTrainableCreator for Ray 2.0

Closes #24453
2022-05-10 11:06:43 -07:00
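For reference, the migration path (a sketch; the old import per the pre-2.0 Tune integration):

```python
# Deprecated and removed for Ray 2.0:
#   from ray.tune.integration.torch import DistributedTrainableCreator
# The replacement is Ray Train's trainer API, which Tune can run directly:
from ray.train.torch import TorchTrainer
```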