You can also use [Ray Cluster Managers](cluster/deploy) to run Ray on your existing
[Kubernetes](cluster/kubernetes),
[YARN](cluster/yarn),
or [Slurm](cluster/slurm) clusters.
+++
```{link-button} cluster/quickstart
:type: ref
:text: Get Started
:classes: btn-outline-info btn-block
```
````
## What is Ray?
Ray is an open-source project developed at the UC Berkeley RISELab.
As a general-purpose, universal distributed compute framework, Ray lets you flexibly run any compute-intensive Python workload — from distributed training and hyperparameter tuning to deep reinforcement learning and production model serving.
- Ray Core provides a simple, universal API for building distributed applications.
- Ray's native libraries and tools enable you to run complex ML applications with Ray.
- You can deploy these applications on any of the major cloud providers, including AWS, GCP, and Azure, or run them on your own servers.
- Ray also has a growing [ecosystem of community integrations](ray-overview/ray-libraries), including [Dask](https://docs.ray.io/en/latest/data/dask-on-ray.html), [MARS](https://docs.ray.io/en/latest/data/mars-on-ray.html), [Modin](https://github.com/modin-project/modin), [Horovod](https://horovod.readthedocs.io/en/stable/ray_include.html), [Hugging Face](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.hyperparameter_search), [Scikit-learn](ray-more-libs/joblib), [and others](ray-more-libs/index).
The following figure gives you an overview of the Ray ecosystem.
Ray is not only a framework for distributed applications but also an active community of developers, researchers, and practitioners who love machine learning.
Here's a list of tips for getting involved with the Ray community:
```{include} _includes/_contribute.md
```
If you're interested in contributing to Ray, check out our [contributing guide](ray-contribute/getting-involved)
to read about the contribution process and see what you can work on.