
This change adds introductory deployment graph documentation. Links to updated documentation:

* [Model Composition](https://ray--26860.org.readthedocs.build/en/26860/serve/model_composition.html)
* [Examples Overview](https://ray--26860.org.readthedocs.build/en/26860/serve/tutorials/index.html)
* [Deployment Graph Pattern Overview](https://ray--26860.org.readthedocs.build/en/26860/serve/tutorials/deployment-graph-patterns.html)
* [Pattern: Linear Pipeline](https://ray--26860.org.readthedocs.build/en/26860/serve/tutorials/deployment-graph-patterns/linear_pipeline.html)
* [Pattern: Branching Input](https://ray--26860.org.readthedocs.build/en/26860/serve/tutorials/deployment-graph-patterns/branching_input.html)
* [Pattern: Conditional](https://ray--26860.org.readthedocs.build/en/26860/serve/tutorials/deployment-graph-patterns/conditional.html)

Co-authored-by: Archit Kulkarni <architkulkarni@users.noreply.github.com>
(deployment-graph-pattern-branching-input)=
# Pattern: Branching Input
This deployment graph pattern lets you pass the same input to multiple deployments in parallel. You can then aggregate these deployments' intermediate outputs in another deployment.
## Code
```{literalinclude}
:language: python
:start-after: __graph_start__
:end-before: __graph_end__
```
## Execution
This graph includes two `Model` nodes, with weights of 0 and 1. It passes the input into both `Model` nodes, and each adds its own weight to it. Then, it uses the `combine` deployment to add the two `Model` deployments' outputs together.
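The dataflow above can be sketched in plain Python (no Ray required). The `Model` and `combine` names mirror the ones this pattern uses, but this standalone version only illustrates the branching arithmetic; it is not the Serve deployment graph itself:

```python
# Plain-Python sketch of the branching-input dataflow: two models
# receive the same input in parallel branches, and a combine step
# sums their intermediate outputs.

class Model:
    def __init__(self, weight: int):
        self.weight = weight

    def forward(self, inp: int) -> int:
        # Each model adds its own weight to the shared input.
        return inp + self.weight

def combine(output1: int, output2: int) -> int:
    # Aggregate the two branches' intermediate outputs.
    return output1 + output2

model1 = Model(0)
model2 = Model(1)

user_input = 1
output1 = model1.forward(user_input)  # 1 + 0 = 1
output2 = model2.forward(user_input)  # 1 + 1 = 2
print(combine(output1, output2))      # 1 + 2 = 3
```

In the real graph, the two `forward` calls run in parallel because each branch is an independent node; here they run sequentially, but the computed values are the same.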
The resulting calculation is:
```
input = 1
output1 = input + weight_1 = 0 + 1 = 1
output2 = input + weight_2 = 1 + 1 = 2
combine_output = output1 + output2 = 1 + 2 = 3
```
The final output is 3:
```console
$ python branching_input.py
3
```