This PR ensures that the new trial resources set by `ResourceChangingScheduler` are respected by the train loop logic by modifying the scaling config to match. Previously, even though trials had their resources updated, the scaling config was not modified, which led to issues such as new workers not being spawned in the `DataParallelTrainer` even though resources were available.

To accomplish this, `ScalingConfigDataClass` is updated to allow equality comparisons with other `ScalingConfigDataClass`es (using the underlying PGF) and to be created from a PGF.

Please note that this is an internal-only change intended to actually make `ResourceChangingScheduler` work. In the future, `ResourceChangingScheduler` should be updated to operate on `ScalingConfigDataClass`es instead of PGFs as it does now. That will require a deprecation cycle.
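To illustrate the mechanics described above, here is a minimal, self-contained sketch of the idea, not Ray's actual implementation: the names `PGF` and `ScalingConfigSketch`, the bundle layout (trainer bundle first, then one bundle per worker), and the exact method signatures are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class PGF:
    """Toy stand-in for a placement group factory: resource bundles + strategy."""
    bundles: List[Dict[str, float]]
    strategy: str = "PACK"


@dataclass
class ScalingConfigSketch:
    """Hypothetical counterpart of ScalingConfigDataClass for this illustration."""
    trainer_resources: Optional[Dict[str, float]] = None
    num_workers: int = 0
    resources_per_worker: Optional[Dict[str, float]] = None

    def as_placement_group_factory(self) -> PGF:
        # Bundle 0 holds the trainer's resources; each worker gets its own bundle.
        trainer = dict(self.trainer_resources or {"CPU": 1})
        worker = dict(self.resources_per_worker or {"CPU": 1})
        return PGF(bundles=[trainer] + [dict(worker) for _ in range(self.num_workers)])

    def __eq__(self, other: object) -> bool:
        # Equality is delegated to the underlying PGF, so two configs that
        # request the same bundles compare equal regardless of how they were built.
        if not isinstance(other, ScalingConfigSketch):
            return NotImplemented
        return self.as_placement_group_factory() == other.as_placement_group_factory()

    @classmethod
    def from_placement_group_factory(cls, pgf: PGF) -> "ScalingConfigSketch":
        # Invert the mapping above: bundle 0 -> trainer, remaining bundles -> workers.
        trainer, *workers = pgf.bundles
        return cls(
            trainer_resources=trainer,
            num_workers=len(workers),
            resources_per_worker=workers[0] if workers else None,
        )


if __name__ == "__main__":
    original = ScalingConfigSketch(num_workers=2, resources_per_worker={"CPU": 2})
    # A scheduler that only knows about PGFs hands back a new PGF ...
    new_pgf = PGF(bundles=[{"CPU": 1}, {"CPU": 4}, {"CPU": 4}])
    # ... and the train loop rebuilds a scaling config from it and compares.
    rebuilt = ScalingConfigSketch.from_placement_group_factory(new_pgf)
    print(rebuilt == original)  # False -> the scaling config should be updated
```

Delegating equality to the PGF is what lets the train loop detect that the scheduler has changed a trial's resources and update the scaling config (and hence the worker group) accordingly.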
Repository root at this commit:

- ray
- requirements
- asv.conf.json
- build-wheel-macos-arm64.sh
- build-wheel-macos.sh
- build-wheel-manylinux2014.sh
- build-wheel-windows.sh
- MANIFEST.in
- README-building-wheels.md
- requirements.txt
- requirements_linters.txt
- requirements_ml_docker.txt
- setup.py