The concept of a Serve Application, a data structure containing all information needed to deploy Serve on a Ray cluster, has surfaced during recent design discussions. This change introduces a formal Application data structure and refactors existing code to use it.
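For illustration, a minimal sketch of what such an Application data structure could look like; the class and field names here are assumptions for illustration, not the final Serve API:

```python
# A minimal sketch of an Application data structure; names and fields are
# assumptions for illustration, not the final Serve API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DeploymentConfig:
    name: str                     # unique deployment name
    import_path: str              # where to import the deployment callable from
    num_replicas: int = 1
    ray_actor_options: Dict = field(default_factory=dict)

@dataclass
class Application:
    """All information needed to deploy Serve on a Ray cluster."""
    name: str
    deployments: List[DeploymentConfig] = field(default_factory=list)
```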
This PR reduces the concurrency limit. Based on a back-of-the-envelope calculation, the current concurrency limit can easily exceed the service quota.
Given large == 2048 vCPUs, the current limit would use about 20K vCPUs, which is slightly above the quota.
This PR exposes the new checkpoint interface, implemented in #22691, to end users. It does this by replacing the old external-facing TrialCheckpoint class with a merged class that supports both the old TrialCheckpoint API (upload, download, save) and the new Checkpoint API.
With this PR, users can use the new Checkpoint interface for downstream processing of their Ray Tune results, e.g. as in the sketch below. In a follow-up PR, the new Checkpoint interface will be used internally within Ray Tune and Train for bookkeeping; however, that is not required to unblock the Ray ML use case.
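A minimal sketch of what such downstream processing might look like, assuming the merged class exposes both the legacy TrialCheckpoint methods and the generic Checkpoint conversions (the trainable and paths below are placeholders):

```python
# Sketch only: assumes the merged checkpoint class supports both the legacy
# TrialCheckpoint methods (upload/download/save) and the new Checkpoint conversions.
from ray import tune

def my_trainable(config):
    # Trivial placeholder trainable: write a checkpoint and report a metric.
    with tune.checkpoint_dir(step=1) as checkpoint_dir:
        with open(f"{checkpoint_dir}/state.txt", "w") as f:
            f.write("done")
    tune.report(loss=0.0)

analysis = tune.run(my_trainable, metric="loss", mode="min")
best_checkpoint = analysis.best_checkpoint

# The old TrialCheckpoint methods remain available on the merged class; the new
# Checkpoint conversions can be used for downstream processing (assumed API):
local_dir = best_checkpoint.to_directory("/tmp/best_ckpt")
state = best_checkpoint.to_dict()
```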
Horovod updated the attributes of DistributedTrainableCreator and the arguments used to create the Horovod RayExecutor.
horovod/horovod@a729ba7
The major issue is that Horovod deprecated the "slot" concept in favor of "worker", which is more consistent with the generic Ray worker. This issue is currently blocking Uber's DL trainers from using Ray Tune.
This commit updates the Horovod RayExecutor init args accordingly.
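A minimal sketch of the worker-based construction; the parameter names depend on the Horovod version and should be treated as assumptions:

```python
# Sketch: worker-based RayExecutor construction after the slot -> worker change.
# Parameter names follow the current horovod.ray API and are assumptions here.
from horovod.ray import RayExecutor

settings = RayExecutor.create_settings(timeout_s=30)
executor = RayExecutor(
    settings,
    num_workers=4,       # replaces the deprecated slot-based arguments
    cpus_per_worker=1,
    use_gpu=False,
)
```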
Co-authored-by: Kai Fricke <kai@anyscale.com>
This PR consists of the following clean-up items for KubeRay autoscaler integration:
* Remove the docker/kuberay directory.
* Move the Python files formerly in docker/kuberay to the autoscaler directory.
* Use a rayproject/ray image for the autoscaler.
* Add an entry point for the KubeRay autoscaler to scripts.py, and use that entry point in the example config (a sketch of such an entry point follows this list).
* Slightly simplify the code that starts the autoscaler.
* Update Ray versions to Ray 1.11.0, which will be officially released within the next couple of days.
* Remove references to Redis from the example config; by default, Ray >= 1.11.0 runs without Redis.
* Add the autoscaler configuration test to the CI.
* Update the development documentation to reflect the changes in this PR.
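A minimal sketch of what the scripts.py entry point mentioned above could look like; the command name, options, and the imported helper are assumptions for illustration, not the exact implementation:

```python
# Hypothetical sketch of a kuberay-autoscaler entry point in scripts.py.
# The command name, options, and the imported helper are assumptions.
import click

@click.command(name="kuberay-autoscaler", hidden=True)
@click.option("--cluster-name", required=True, help="Name of the RayCluster custom resource.")
@click.option("--cluster-namespace", required=True, help="Kubernetes namespace of the RayCluster.")
def kuberay_autoscaler(cluster_name: str, cluster_namespace: str) -> None:
    """Run the autoscaler against a KubeRay-managed cluster."""
    # Assumed helper that starts the autoscaler loop for the given cluster.
    from ray.autoscaler._private.kuberay.run_autoscaler import run_kuberay_autoscaler
    run_kuberay_autoscaler(cluster_name, cluster_namespace)
```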
`test_deploy` has become [flaky](https://flakey-tests.ray.io/#) due to timeouts. Since `test_deploy` is already a "large" test, this change splits it into two test files instead of simply increasing the timeout.
This PR splits up the changes in #22393 and introduces an implementation of the ML Checkpoint interface used by Ray Tune.
Specifically, the TuneCheckpoint class implements the to_/from_[bytes|dict|directory|object_ref|uri] conversion functions, as well as higher-level functions to transition between the different TuneCheckpoint classes. The PR also includes test cases for Tune's main conversion modes, i.e. dict -> intermediate -> dict and fs -> intermediate -> fs (see the sketch below).
These changes will be the basis for refactoring the Tune interface to use TuneCheckpoint objects instead of TrialCheckpoints (externally) and instead of raw paths/objects (internally).
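A minimal sketch of the two conversion modes, assuming the interface mirrors the generic Checkpoint conversions; the import path and file contents are assumptions for illustration:

```python
# Sketch of the dict -> intermediate -> dict and fs -> intermediate -> fs modes.
# The import path is an assumption; adjust to wherever the checkpoint class lives.
import os
import tempfile

from ray.ml.checkpoint import Checkpoint

# dict -> intermediate -> dict
ckpt = Checkpoint.from_dict({"step": 10, "weights": [0.1, 0.2]})
restored = ckpt.to_dict()
assert restored["step"] == 10

# fs -> intermediate -> fs
src_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "model.txt"), "w") as f:
    f.write("weights")
ckpt = Checkpoint.from_directory(src_dir)
dst_dir = ckpt.to_directory()  # materializes the checkpoint into a new directory
assert os.path.exists(os.path.join(dst_dir, "model.txt"))
```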
The new Buildkite pipeline prints out faulty results due to a confusion of -ge/-gt and -le/-lt in the retry script. This is a cosmetic error (the retry behavior itself was still correct) that is resolved with this PR.
* refactor resource data structure in gcs
* fix comment
* fix lint error
* fix
* Disable TestRejectedRequestWorkerLeaseReply (renamed to DISABLED_TestRejectedRequestWorkerLeaseReply) as it depends on the update of normal task handling
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>