Mirror of https://github.com/vale981/ray, synced 2025-03-05 10:01:43 -05:00
[docs/ci] Fix (some) broken linkchecks (#28087)
Signed-off-by: Kai Fricke <kai@anyscale.com>
Parent: ec3c7f855e
Commit: e0725d1f1d
4 changed files with 7 additions and 4 deletions
```diff
@@ -362,7 +362,7 @@
 "# -> throughput: 8.56GiB/s\n",
 "```\n",
 "\n",
-"Note: The pipeline can also be submitted using [Ray Job Submission](https://docs.ray.io/en/latest/cluster/job-submission.html) ,\n",
+"Note: The pipeline can also be submitted using [Ray Job Submission](https://docs.ray.io/en/latest/cluster/running-applications/job-submission/) ,\n",
 "which is in beta starting with Ray 1.12. Try it out!"
 ]
 }
```
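The hunk above updates a Markdown-style link inside a notebook cell. As a small illustrative sketch (a hypothetical helper, not Ray's actual linkcheck tooling), link targets like these can be pulled out of `[text](url)` syntax for checking:

```python
import re

# Matches Markdown inline links: [link text](url), capturing the URL.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def extract_links(markdown: str) -> list:
    """Return all inline link targets found in a Markdown string."""
    return LINK_RE.findall(markdown)

cell = (
    "Note: The pipeline can also be submitted using "
    "[Ray Job Submission](https://docs.ray.io/en/latest/"
    "cluster/running-applications/job-submission/) ,"
)
print(extract_links(cell))
```

Each extracted URL could then be compared against a list of known-moved pages, which is essentially what a linkcheck pass automates.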
```diff
@@ -61,7 +61,7 @@
 "source": [
 "We will use `ray.init()` to initialize a local cluster. By default, this cluster will be compromised of only the machine you are running this notebook on. You can also run this notebook on an Anyscale cluster.\n",
 "\n",
-"This notebook *will not* run in [Ray Client](https://docs.ray.io/en/latest/cluster/ray-client.html) mode."
+"This notebook *will not* run in [Ray Client](https://docs.ray.io/en/latest/cluster/running-applications/job-submission/ray-client.html) mode."
 ]
 },
 {
```
```diff
@@ -784,7 +784,7 @@
 "id": "OlzjlW8QR_q6"
 },
 "source": [
-"We will use Ray Serve to serve the trained model. A core concept of Ray Serve is [Deployment](https://docs.ray.io/en/latest/serve/core-apis.html). It allows you to define and update your business logic or models that will handle incoming requests as well as how this is exposed over HTTP or in Python.\n",
+"We will use Ray Serve to serve the trained model. A core concept of Ray Serve is [Deployment](https://docs.ray.io/en/latest/serve/getting_started.html#converting-to-a-ray-serve-deployment). It allows you to define and update your business logic or models that will handle incoming requests as well as how this is exposed over HTTP or in Python.\n",
 "\n",
 "In the case of serving model, `ray.serve.air_integrations.Predictor` and `ray.serve.air_integrations.PredictorDeployment` wrap a `ray.air.checkpoint.Checkpoint` into a Ray Serve deployment that can readily serve HTTP requests.\n",
 "Note, ``Checkpoint`` captures both model and preprocessing steps in a way compatible with Ray Serve and ensures that ml workload can transition seamlessly between training and\n",
```
```diff
@@ -22,7 +22,10 @@ from ray.widgets import Template
 
 logger = logging.getLogger(__name__)
 
-CLIENT_DOCS_URL = "https://docs.ray.io/en/latest/cluster/ray-client.html"
+CLIENT_DOCS_URL = (
+    "https://docs.ray.io/en/latest/cluster/running-applications/"
+    "job-submission/ray-client.html"
+)
 
 
 @dataclass
```
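The new multi-line form of `CLIENT_DOCS_URL` in the last hunk relies on Python's implicit concatenation of adjacent string literals, which lets a long URL be split across lines without `+` or a backslash. A minimal sketch of the mechanism:

```python
# Adjacent string literals inside parentheses are joined at compile
# time into a single string, so no runtime concatenation occurs.
CLIENT_DOCS_URL = (
    "https://docs.ray.io/en/latest/cluster/running-applications/"
    "job-submission/ray-client.html"
)

# The result is one contiguous URL string.
print(CLIENT_DOCS_URL)
```

This is a common way to keep long constants within a line-length limit while avoiding an accidental newline or whitespace inside the value.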