Update links. (#28269)

Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>

This PR updates the quickstart configuration in the Ray docs to reflect the fixes from
ray-project/kuberay#529

To provide access to the fixed version, we update the links to point to KubeRay master rather than the release-0.3 branch.
After the next KubeRay release (0.4.0), we can update these links to point to a fixed release version again.
Dmitri Gekhtman committed 2022-09-02 12:18:04 -07:00 (via GitHub)
commit 59be31d558 (parent 9cf5df2c81)
2 changed files with 3 additions and 3 deletions


@@ -74,7 +74,7 @@
"metadata": {},
"source": [
"To run the example in this guide, make sure your Kubernetes cluster (or local Kind cluster) can accomodate\n",
"additional resource requests of 3 CPU and 2Gi memory. \n",
"additional resource requests of 3 CPU and 3Gi memory. \n",
"\n",
"(kuberay-operator-deploy)=\n",
"## Deploying the KubeRay operator\n",
@@ -157,7 +157,7 @@
"outputs": [],
"source": [
"# Deploy a sample Ray Cluster CR from the KubeRay repo:\n",
"! kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/release-0.3/ray-operator/config/samples/ray-cluster.autoscaler.yaml\n",
"! kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/master/ray-operator/config/samples/ray-cluster.autoscaler.yaml\n",
"\n",
"# This Ray cluster is named `raycluster-autoscaler` because it has optional Ray Autoscaler support enabled."
]
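
After applying the CR, a quick sanity check is useful. This is a sketch; the `ray.io/cluster` label selector is an assumption about how KubeRay labels pods, not something stated in this diff:

```
# List the pods of the example cluster (label selector assumed).
# Expect a head pod plus one worker pod once the cluster is up.
kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler
```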


@@ -49,7 +49,7 @@ First, follow the [quickstart guide](kuberay-quickstart) to create an autoscaling
# Create the KubeRay operator.
$ kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=v0.3.0&timeout=90s"
# Create an autoscaling Ray cluster.
-$ kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/release-0.3/ray-operator/config/samples/ray-cluster.autoscaler.yaml
+$ kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/master/ray-operator/config/samples/ray-cluster.autoscaler.yaml
```
Now, we can run a Ray program on the head pod that uses [``request_resources``](ref-autoscaler-sdk) to scale the cluster to a total of 3 CPUs. The head and worker pods in our [example cluster config](https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-cluster.autoscaler.yaml) each have a capacity of 1 CPU, and we specified a minimum of 1 worker pod. Thus, the request should trigger upscaling of one additional worker pod.
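
As a sketch of that step, the commands below locate the head pod and call the autoscaler SDK from it. The `ray.io/node-type=head` label selector is an assumption about KubeRay's pod labels; the Python one-liner uses the documented `ray.autoscaler.sdk.request_resources` call:

```
# Find the head pod (label selector assumed, not taken from this commit).
HEAD_POD=$(kubectl get pods --selector=ray.io/node-type=head \
  -o custom-columns=POD:metadata.name --no-headers)

# Request 3 CPUs in total. With 1 CPU each on the head and the single
# initial worker, this should trigger upscaling of one more worker pod.
kubectl exec "$HEAD_POD" -- python -c \
  "import ray; ray.init(address='auto'); from ray.autoscaler.sdk import request_resources; request_resources(num_cpus=3)"
```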