In the `test_many_tasks.py` case we often saw the test fail, and we found the reason. We sleep for `sleep_time` seconds to wait for all tasks to finish, but the actual sleep time is computed as `0.1 * num_rounds`, where 0.1 is the sleep time per round. This looks correct, but it misses one factor: the computation time spent in each round. Here that is the time consumed by `cur_cpus = ray.available_resources().get("CPU", 0)` and `min_cpus_available = min(min_cpus_available, cur_cpus)`; in particular, `ray.available_resources()` takes quite a while when the cluster is large (in our case it took more than 1s with 1500 nodes). We assumed the loop behaved like `for _ in range(sleep_time / 0.1): sleep(0.1)`, but what actually happens is `for _ in range(sleep_time / 0.1): do_something(); sleep(0.1)`, where `do_something()` costs time, sometimes quite a lot. We don't know why `ray.available_resources()` is slow, or whether that is expected, but we can add a time checker to make the total sleep time precise.
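The time-checker fix described above, comparing against a wall-clock deadline instead of counting fixed-length rounds, could look roughly like the following sketch. The helper name `poll_min_available_cpus`, its signature, and the polling body are illustrative assumptions rather than the actual `test_many_tasks.py` code.

```python
import time

import ray


def poll_min_available_cpus(sleep_time, poll_interval=0.1):
    """Track the minimum available CPUs until ``sleep_time`` seconds have
    truly elapsed, regardless of how long each polling round takes.

    Illustrative sketch; assumes ray.init() has already connected to a cluster.
    """
    min_cpus_available = float("inf")
    deadline = time.monotonic() + sleep_time
    while time.monotonic() < deadline:
        # On large clusters (e.g. ~1500 nodes) this call alone can take over
        # a second, which is the per-round overhead that a plain
        # "for _ in range(int(sleep_time / 0.1)): sleep(0.1)" loop ignores.
        cur_cpus = ray.available_resources().get("CPU", 0)
        min_cpus_available = min(min_cpus_available, cur_cpus)
        time.sleep(poll_interval)
    return min_cpus_available
```

With a deadline, the total wait stays close to `sleep_time` even when `ray.available_resources()` is slow.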
Contents of the `release/` directory:

- `benchmarks`
- `golden_notebook_tests`
- `horovod_tests`
- `jobs_tests`
- `kubernetes_manual_tests`
- `lightgbm_tests`
- `long_running_distributed_tests`
- `long_running_tests`
- `microbenchmark`
- `ml_user_tests`
- `nightly_tests`
- `ray_release`
- `release_logs`
- `rllib_tests`
- `runtime_env_tests`
- `serve_tests`
- `sgd_tests/sgd_gpu`
- `train_tests/horovod`
- `tune_tests`
- `util`
- `xgboost_tests`
- `__init__.py`
- `BUILD`
- `README.md`
- `release_tests.yaml`
- `requirements.txt`
- `requirements_buildkite.txt`
- `run_release_test.sh`
- `setup.py`
# Release Tests

While the exact release process relies on Anyscale internal tooling, the tests we run during a release are located at https://github.com/ray-project/ray/blob/master/release/.buildkite/build_pipeline.py