Enables better service account usage with GCP.
By default, the head node runs with the ray-autoscaler-sa-v1 service account, but workers do not. Workers can run with the same service account by copying and uncommenting L114->L117 from example-full.
Signed-off-by: Ian <ian.rodney@gmail.com>
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
This PR restores notes for migration from the legacy Ray operator to the new KubeRay operator.
To avoid disrupting the flow of the Ray documentation, these notes are placed in a README accompanying the old operator's code.
These notes are linked from the new docs.
Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
Page structure changes:
- Deploying a Ray Cluster on Kubernetes
  - Getting Started -> links to jobs
- Deploying a Ray Cluster on VMs
  - Getting Started -> links to jobs
- User Guides
  - Autoscaling (moved more content here in favor of the Getting Started page)
- Running Applications on Ray Clusters
  - Ray Jobs
    - Quickstart Using the Ray Jobs CLI
    - Python SDK
    - REST API
    - Ray Job Submission API Reference
  - Ray Client
Content changes:
- Modified the "Deploying a Ray Cluster ..." quickstart pages to briefly summarize ad-hoc command execution, then link to jobs.
- Modified the Ray Jobs example to be more incremental: start with a simple example, then show a long-running script, then show an example with a runtime env, instead of all of them at once (see the sketch after this list).
- Centered the Ray Jobs quickstart on the CLI; made some minor changes to the Python SDK page to match it.
- Removed "Ray Jobs Architecture".
- Moved the "Autoscaling" content from the Kubernetes "Getting Started" page into its own user guide; I think it's too complicated for "Getting Started". No content cuts.
- Cut "Viewing the dashboard" and "Ray Client" from the Kubernetes "Getting Started" page.
Signed-off-by: Stephanie Wang <swang@cs.berkeley.edu>
Various cleanups around the docs on Ray cluster "Monitoring and observability". After #27723, we will move these to a common page outside the VMs/K8s subsections:
- Add links to the more comprehensive observability section.
- Move and clean up cluster-specific content from Prometheus metrics to the new Ray Cluster page. I also reworked much of the text here, because it was previously unclear about the recommended approach.
- Include more specific instructions about setting up observability tools for VMs vs. K8s.
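As background for the Prometheus metrics content, a minimal sketch of exporting Ray's metrics from Python; the port is illustrative:
```python
import ray

# Expose Ray's Prometheus-format metrics on a fixed port so that a
# Prometheus server can scrape <node_ip>:8080 (8080 is illustrative).
ray.init(_metrics_export_port=8080)
```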
This adds the structure described here: a new section under Ray Clusters focused on running applications on Ray clusters.
Signed-off-by: Cade Daniel <cade@anyscale.com>
Co-authored-by: Stephanie Wang <swang@cs.berkeley.edu>
This PR:
Copies the existing clusters API reference to the new structure. The reference docs are split into Ray Clusters (common between VMs and K8s) and Ray Clusters on VMs (VM-specific). Notably, a reference section for K8s is planned too, but it is not in this PR.
Moves the three job submission user guides back into a single one. Jules had suggested breaking them out into REST/SDK/CLI, but that's not P0 right now.
Fixes some bugs in the left navigation bar; there should now be less duplication of TOC entries. I'll keep working on related fixes in a different PR.
Signed-off-by: Cade Daniel <cade@anyscale.com>
This PR:
- Adds notes and an example on logging for Ray/K8s.
- Implements an API reference page pointing to the configuration guide and the RayCluster CR definition.
- Takes managed K8s services out of the tabbed structure, to make that page look less sad.
- Adds a comparison of the KubeRay operator and the legacy K8s operator.
- Adds an architecture diagram for the autoscaling sections.
- Fixes some other minor items.
- Adds some info about networking to the configuration guide and removes the previously planned networking page.
Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
Update the autoscaler configuration docs for the VM stack.
Removed the video; on review, it fits better in the overview and is possibly outdated.
Co-authored-by: Eric Liang <ekhliang@gmail.com>
This PR adds a guide on RayCluster configuration and a page of discussion about autoscaling.
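As a hedged aside, autoscaling discussions of this kind often touch on the programmatic scaling hook below; the CPU count is illustrative:
```python
import ray
from ray.autoscaler.sdk import request_resources

ray.init(address="auto")

# Ask the autoscaler to scale the cluster so that 16 CPUs are available,
# independent of the currently queued workload (16 is illustrative).
request_resources(num_cpus=16)
```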
Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
* Save work; iterate with updates, fixes, simplifications, and wording tweaks
Signed-off-by: Philipp Moritz <pcmoritz@gmail.com>
This PR migrates the old Community Supported Cluster Launcher docs to the new Ray Clusters doc structure.
Signed-off-by: Cade Daniel <cade@anyscale.com>
Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
This PR:
- adds a page of guidance on GPU deployment with Ray/K8s. This page is a modified and slightly expanded version of the existing page https://docs.ray.io/en/latest/cluster/kubernetes-gpu.html
- moves the managed K8s service intro links to their own page
This PR puts the under-construction Ray Clusters docs section (see #26754) under Ray Clusters as a subpage. This keeps the master branch docs clean and presentable for users, and lets Ray Clusters doc writers use the existing CI to iterate on the docs without a massive PR once we're done.
Signed-off-by: Cade Daniel <cade@anyscale.com>
# Why are these changes needed?
The dashboard can display the message "<actor> cannot be created because the Ray cluster cannot satisfy its resource requirements" in the case where the runtime env setup has stalled. This PR updates the message to also mention the possibility that the runtime env setup failed.
This PR also adds a tip to the Job Submission doc: if a job is stalled in PENDING, the runtime env setup may have stalled. It adds a pointer to the log files, which should have more information.
The runtime env setup can no longer stall forever; it fails after 10 minutes. This timeout is a new feature added after the Ray 1.13 branch cut, so in Ray <= 1.13 the runtime env setup can still stall forever.
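A minimal sketch of acting on that tip with the Jobs SDK; the address and job ID are placeholders:
```python
from ray.job_submission import JobSubmissionClient, JobStatus

client = JobSubmissionClient("http://127.0.0.1:8265")  # placeholder address
job_id = "raysubmit_example"  # placeholder job ID

# A job stuck in PENDING may indicate a stalled runtime env setup;
# the job logs should contain more detail.
if client.get_job_status(job_id) == JobStatus.PENDING:
    print(client.get_job_logs(job_id))
```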
# Related issue number
Closes #26332
ray.init() currently starts a new Ray instance even if one already exists, which is very confusing if you are a new user trying to go from local development to a cluster. This PR changes it so that, when no address is specified, we first try to find an existing Ray cluster that was created through `ray start`. If none is found, we start a new one.
This makes two changes to the ray.init() resolution order:
1. When `ray start` is called, the started cluster's address is written to a file called `/tmp/ray/ray_current_cluster`; the file is deleted on `ray stop`. For ray.init() and ray.init(address="auto"), we first check this local file for an existing cluster address. If the file is empty, we autodetect any running cluster (legacy behavior) when address="auto", or start a new local Ray instance when address=None.
2. When ray.init(address="local") is called, we create a new local Ray instance, even if one already exists. This behavior seems to be necessary mainly for `ray.client` use cases.
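A short sketch of the resulting behavior, following the description above:
```python
import ray

# No address: reuse a cluster previously started via `ray start` if its
# address is recorded in /tmp/ray/ray_current_cluster; otherwise start a
# new local instance.
ray.init()
ray.shutdown()

# address="auto": same file check first, then the legacy autodetection
# of a running cluster.
ray.init(address="auto")
ray.shutdown()

# address="local": always start a fresh local instance, even if a
# cluster is already running.
ray.init(address="local")
```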
This also surfaces the logs about which Ray instance we are connecting to. Previously these were hidden because we didn't set up logging until after connecting to Ray. Now Ray will log one of the following messages during ray.init():
```
(Connecting to existing Ray cluster at address: <IP>...)
...connection...
(Started a local Ray cluster.| Connected to Ray Cluster.)( View the dashboard at <URL>)
```
Note that the dashboard URL is now printed by `ray.init()` rather than when the dashboard is first started.
Co-authored-by: Eric Liang <ekhliang@gmail.com>
This PR:
- Adds a warning about a known issue to the KubeRay section of the Ray docs.
- Updates the description of the feature state of the KubeRay integration.
- Adds some links to the KubeRay docs.
- Adds notes explaining that Ray's support for Azure, Aliyun, and SLURM is community-maintained.
- Rephrases the mention of K8s support in the intro.
This PR replaces https://github.com/ray-project/ray/pull/25504.
- Closes #23874 by fixing a typo ("num_gpus" -> "num-gpus").
- Adds end-to-end test logic confirming the fix.
- Adds end-to-end test logic confirming that autoscaling with custom resources works (see the sketch after this list).
- Slightly refines the developer instructions.
- Deflakes the test logic a bit by allowing for the possibility that the head pod changes its identity as the Ray cluster starts up.
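For orientation, a minimal sketch of the workload shape the custom-resource autoscaling test exercises; the resource name is a placeholder that would have to match a resource advertised in the cluster config:
```python
import ray

ray.init(address="auto")

# Requesting a custom resource should make the autoscaler bring up a
# node type that advertises it ("CustomResource" is a placeholder name).
@ray.remote(resources={"CustomResource": 1})
def use_custom_resource() -> str:
    return "ran on a node providing CustomResource"

print(ray.get(use_custom_resource.remote()))
```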
- Adds links to Job Submission from the existing library tutorials where `ray submit` is used. When Jobs becomes GA, we should fully replace the uses of `ray submit` with Ray Job Submission and ensure this is tested.
- Adds docstrings for the Jobs SDK, which automatically show up in the API reference.
- Improves the Job Submission main page.
- Adds a "Deployment Guide" landing page explaining when to use Ray Client vs. Ray Jobs (see the sketch below).
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>