[docs/ci] Fix (some) broken linkchecks (#28087)

Signed-off-by: Kai Fricke <kai@anyscale.com>
Author: Kai Fricke, 2022-08-25 04:41:35 -07:00, committed by GitHub
parent ec3c7f855e
commit e0725d1f1d
4 changed files with 7 additions and 4 deletions


@@ -362,7 +362,7 @@
"# -> throughput: 8.56GiB/s\n",
"```\n",
"\n",
-"Note: The pipeline can also be submitted using [Ray Job Submission](https://docs.ray.io/en/latest/cluster/job-submission.html) ,\n",
+"Note: The pipeline can also be submitted using [Ray Job Submission](https://docs.ray.io/en/latest/cluster/running-applications/job-submission/) ,\n",
"which is in beta starting with Ray 1.12. Try it out!"
]
}


@@ -61,7 +61,7 @@
"source": [
"We will use `ray.init()` to initialize a local cluster. By default, this cluster will be comprised of only the machine you are running this notebook on. You can also run this notebook on an Anyscale cluster.\n",
"\n",
-"This notebook *will not* run in [Ray Client](https://docs.ray.io/en/latest/cluster/ray-client.html) mode."
+"This notebook *will not* run in [Ray Client](https://docs.ray.io/en/latest/cluster/running-applications/job-submission/ray-client.html) mode."
]
},
{


@@ -784,7 +784,7 @@
"id": "OlzjlW8QR_q6"
},
"source": [
-"We will use Ray Serve to serve the trained model. A core concept of Ray Serve is [Deployment](https://docs.ray.io/en/latest/serve/core-apis.html). It allows you to define and update your business logic or models that will handle incoming requests as well as how this is exposed over HTTP or in Python.\n",
+"We will use Ray Serve to serve the trained model. A core concept of Ray Serve is [Deployment](https://docs.ray.io/en/latest/serve/getting_started.html#converting-to-a-ray-serve-deployment). It allows you to define and update your business logic or models that will handle incoming requests as well as how this is exposed over HTTP or in Python.\n",
"\n",
"In the case of serving a model, `ray.serve.air_integrations.Predictor` and `ray.serve.air_integrations.PredictorDeployment` wrap a `ray.air.checkpoint.Checkpoint` into a Ray Serve deployment that can readily serve HTTP requests.\n",
"Note, ``Checkpoint`` captures both model and preprocessing steps in a way compatible with Ray Serve and ensures that ML workloads can transition seamlessly between training and\n",


@@ -22,7 +22,10 @@ from ray.widgets import Template
logger = logging.getLogger(__name__)
-CLIENT_DOCS_URL = "https://docs.ray.io/en/latest/cluster/ray-client.html"
+CLIENT_DOCS_URL = (
+    "https://docs.ray.io/en/latest/cluster/running-applications/"
+    "job-submission/ray-client.html"
+)
@dataclass
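
The last hunk splits the long URL across two adjacent string literals inside parentheses; Python concatenates adjacent literals at parse time, producing a single string while keeping each source line within line-length limits. A minimal standalone sketch of the same pattern (outside the Ray codebase):

```python
# Adjacent string literals inside parentheses are concatenated at
# parse time, so the result is one string -- a common way to keep a
# long URL under a linter's line-length limit.
CLIENT_DOCS_URL = (
    "https://docs.ray.io/en/latest/cluster/running-applications/"
    "job-submission/ray-client.html"
)

print(CLIENT_DOCS_URL)
# -> https://docs.ray.io/en/latest/cluster/running-applications/job-submission/ray-client.html
```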