# Deploying with KubeRay (experimental)
```{admonition} What is KubeRay?
[KubeRay](https://github.com/ray-project/kuberay) is a set of tools for running Ray on Kubernetes.
It has been used by some larger corporations to deploy Ray on their infrastructure, and going
forward we would like to make this way of deployment accessible and seamless for all Ray users,
standardizing Ray deployment on Kubernetes around KubeRay's operator.

For now, you should consider this integration a minimal viable product that is not polished
enough for general use, and prefer the [Kubernetes integration](kubernetes.rst) for running
Ray on Kubernetes. If you are brave enough to try the KubeRay integration out, this documentation
is for you! We would love your feedback as a [GitHub issue](https://github.com/ray-project/ray/issues)
including `[KubeRay]` in the title.
```
Here we describe how you can deploy a Ray cluster with KubeRay. The following instructions are for
Minikube, but the deployment works the same way on a real Kubernetes cluster. You need at
least 4 CPUs to run this example, so make sure Minikube is started with enough resources:
```shell
minikube start --cpus=4
```
Now you can deploy the KubeRay operator using
```shell
./ray/python/ray/autoscaler/kuberay/init-config.sh
kubectl apply -k "ray/python/ray/autoscaler/kuberay/config/default"
kubectl apply -f "ray/python/ray/autoscaler/kuberay/kuberay-autoscaler-rbac.yaml"
```
You can verify that the operator has been deployed using
```shell
kubectl -n ray-system get pods
```
Now let's deploy a new Ray cluster:
```shell
kubectl create -f ray/python/ray/autoscaler/kuberay/ray-cluster.complete.yaml
```
## Using the autoscaler
Let's now try out the autoscaler. We can run the following command to get a
Python interpreter in the head pod:
```shell
kubectl exec `kubectl get pods -o custom-columns=POD:metadata.name | grep raycluster-complete-head` -it -c ray-head -- python
```
In the Python interpreter, run the following snippet to scale up the cluster:
```python
import ray
import ray.autoscaler.sdk

# Connect to the running Ray cluster.
ray.init("auto")
# Ask the autoscaler to scale the cluster up to at least 4 CPUs.
ray.autoscaler.sdk.request_resources(num_cpus=4)
```
## Uninstalling the KubeRay operator

You can uninstall the KubeRay operator using
```shell
kubectl delete -f "ray/python/ray/autoscaler/kuberay/kuberay-autoscaler-rbac.yaml"
kubectl delete -k "ray/python/ray/autoscaler/kuberay/config/default"
```
Note that all running Ray clusters will automatically be terminated.

## Developing the KubeRay integration (advanced)

If you also want to change the underlying KubeRay operator, please refer to the instructions
in [the KubeRay development documentation](https://github.com/ray-project/kuberay/blob/master/ray-operator/DEVELOPMENT.md).
In that case, push the modified operator to your own Docker account or registry and
follow the instructions in `ray/python/ray/autoscaler/kuberay/init-config.sh`.
The remainder of these instructions covers how to change the autoscaler code.
To maximize development iteration speed, we recommend using a Linux machine with Python 3.7
for development, since that simplifies building wheels incrementally.
Make the desired modification to Ray and/or the autoscaler and build the Ray wheels by running
the following command in the `ray/python` directory:
```shell
python setup.py bdist_wheel
```
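Note that the wheel built above carries a `linux_x86_64` platform tag, while the Docker build
in the next step is handed a `manylinux2014_x86_64` filename via `WHEEL_PATH`; this is why the
`cp` below also renames the file. As a sketch, the rename is just a substitution in the
platform tag:

```shell
# Sketch: derive the manylinux2014 wheel name from the locally built wheel name.
# The version and Python tags here match the wheel produced by the build above.
WHEEL="ray-2.0.0.dev0-cp37-cp37m-linux_x86_64.whl"
echo "$WHEEL" | sed 's/linux_x86_64/manylinux2014_x86_64/'
# ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl
```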
Then in the `ray/docker/kuberay-autoscaler` directory run:
```shell
cp ../../python/dist/ray-2.0.0.dev0-cp37-cp37m-linux_x86_64.whl ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl
docker build --build-arg WHEEL_PATH="ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl" -t rayproject/kuberay-autoscaler -f Dockerfile.dev --no-cache .
docker push rayproject/kuberay-autoscaler
```
where you replace `rayproject/kuberay-autoscaler` with the desired image path in your own
Docker account (normally `<username>/kuberay-autoscaler`). Please also make sure to update
the image in `ray-cluster.complete.yaml`.
If you don't make any changes to the Ray autoscaler itself but only touch files under
`docker/kuberay-autoscaler`, or you just want to pick up the latest Ray, you can skip
building the wheel and build the autoscaler image directly:
```shell
docker build -t rayproject/kuberay-autoscaler --no-cache .
docker push rayproject/kuberay-autoscaler
```