---
title: Quickstart
version: EN
---
This document provides a quickstart guide for VESSL Serve: managing revisions and the gateway using YAML manifests.
## 1. Prepare a Model to Serve
Prepare the model and service for deployment. In this document, we will use the [MNIST example](https://github.com/vessl-ai/examples/blob/main/mnist/README.md) where you can train a model and register it to the VESSL Model Registry.
Use the following commands in the CLI to proceed:
```sh
# Clone the example repository
git clone git@github.com:vessl-ai/examples.git
cd examples/mnist/pytorch
# Train the model and register it to the repository
pip install -r requirements.txt
python main.py --output-path ./output --save-model
# Register the model
python model.py --checkpoint ./output/model.pt --model-repository mnist-example
```
<Note>For more detailed information about the VESSL Model Registry, please refer to the [Model Registry](../model-registry/README.md) section.</Note>
## 2. Create a Serving Instance
Create a serving instance for deployment. Navigate to the 'Serving' section in the VESSL Web Console and click the 'New Serving' button. This will allow you to create a serving instance named `mnist-example`.
<img style={{ borderRadius: '0.5rem' }}
src="/images/serve/quickstart/1_new.png"
/>
<img style={{ borderRadius: '0.5rem' }}
src="/images/serve/quickstart/2_create.png"
/>
## 3. Write a Manifest File for the Serving Revision
Create a new serving revision. Save the following content as a file named `serve-revision.yaml`:
```yaml
message: VESSL Serve example
image: quay.io/vessl-ai/kernels:py38-202308150329
resources:
name: v1.cpu-2.mem-6
run: vessl model serve mnist-example 1 --install-reqs
autoscaling:
min: 1
max: 3
metric: cpu
target: 60
ports:
- port: 8000
name: fastapi
type: http
```
You can easily deploy the revision defined in YAML using the VESSL CLI as shown below:
```sh
vessl serve revision create --serving mnist-example -f serve-revision.yaml
```
<img style={{ borderRadius: '0.5rem' }}
src="/images/serve/quickstart/3_revision.png"
/>
Refer to the [YAML schema reference](./serve-yaml-workflow/yaml-schema-reference.md) for detailed information on the YAML manifest schema.
<Warning>Ensure that you specify a container image with the same Python version as used during model creation. For instance, if you trained the model with Python 3.8, it's recommended to use an image containing Python 3.8, such as `quay.io/vessl-ai/kernels:py38-202308150329`.</Warning>
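If you are not sure which Python version your training environment used, a quick check such as the following (plain Python, nothing VESSL-specific) can help you choose a matching image tag:
```python
# Print the interpreter version used for training; choose an image tag that
# matches it (e.g. a py38 image for Python 3.8).
import sys

print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
```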
## 4. Create an Endpoint
To perform inference with the created revision, you must expose it to the external network. In VESSL Serve, the Gateway (Endpoint) determines how traffic is routed and to which port it is distributed.
First, create a YAML file that defines the Gateway. Save the following content as a file named `serve-gateway.yaml`:
```yaml
enabled: true
targets:
- number: 1 # Use the revision number from the previous step
port: 8000
weight: 100
```
The Gateway can be easily deployed using the VESSL CLI, as shown below:
```sh
vessl serve gateway update --serving mnist-example -f serve-gateway.yaml
```
To check the status of the deployed Gateway, use the `vessl serve gateway show` command.
```sh
vessl serve gateway show --serving mnist-example
```
You can check the status of the deployed Gateway as shown below:
```
Enabled True
Status success
Endpoint model-service-gateway-xyzyxyxx.managed-cluster-apne2.vessl.ai
Ingress Class nginx
Annotations (empty)
Traffic Targets
  - ########## 100%: 1 (port 8000)
```
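With the Gateway up, you can send inference requests to the endpoint shown above. The exact route and payload depend on how the served FastAPI app is implemented; the sketch below assumes a hypothetical `/predict` route accepting a multipart file upload, so adapt it to the MNIST example's actual interface:
```python
import requests

# Endpoint host taken from the `vessl serve gateway show` output above
ENDPOINT = "https://model-service-gateway-xyzyxyxx.managed-cluster-apne2.vessl.ai"

# NOTE: the `/predict` route and the multipart upload are illustrative
# assumptions; the real interface is defined by the served FastAPI app.
with open("sample_digit.png", "rb") as f:
    response = requests.post(f"{ENDPOINT}/predict", files={"file": f})

print(response.status_code, response.json())
```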
## 5. Dividing Traffic Among Multiple Revisions
To deploy a new version of the model without interrupting the service, deploy the new version first, then gradually shift traffic to it.
In VESSL Serve, the Gateway (Endpoint) provides the capability to distribute traffic across multiple Revisions.
Begin by defining and deploying the new Revision. Save the following content as `serve-revision.yaml`, replacing the previous manifest:
```yaml
message: Revision v2
image: quay.io/vessl-ai/kernels:py38-202308150329
resources:
name: v1.cpu-2.mem-6
run: vessl model serve mnist-example 2 --install-reqs # New model version
autoscaling:
min: 1
max: 3
metric: cpu
target: 60
ports:
- port: 8000
name: fastapi
type: http
```
```sh
vessl serve revision create --serving mnist-example -f serve-revision.yaml
```
```
Successfully created revision in serving mnist-example.
Number 2
Status pending
Message Revision v2
```
Subsequently, modify `serve-gateway.yaml` to split traffic between the two Revisions.
```yaml
enabled: true
targets:
- number: 1
port: 8000
weight: 90
- number: 2
port: 8000
weight: 10
```
Update the Gateway configuration with the provided settings:
```sh
vessl serve gateway update --serving mnist-example -f serve-gateway.yaml
```
Executing this command will display the Gateway's status, revealing the distribution of traffic across the specified Revisions.
```
Successfully updated gateway of serving mnist-example.
Enabled True
Status success
Endpoint model-service-gateway-xyzyxyxx.managed-cluster-apne2.vessl.ai
Ingress Class nginx
Annotations (empty)
Traffic Targets
  - ######### 90 %: 1 (port 8000)
  - # 10 %: 2 (port 8000)
```
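To build intuition for what a 90/10 split means in practice, here is a small standalone Python sketch (not VESSL code) that simulates routing 1,000 requests according to the weights in `serve-gateway.yaml` and reports how much traffic each revision receives:
```python
import random
from collections import Counter

# Weighted targets mirroring serve-gateway.yaml: revision 1 -> 90, revision 2 -> 10
targets = {1: 90, 2: 10}

# Route 1,000 simulated requests using weighted random choice
counts = Counter(random.choices(list(targets), weights=list(targets.values()), k=1000))

for revision, hits in sorted(counts.items()):
    print(f"revision {revision}: {hits / 10:.1f}% of traffic")
```
Over many requests, the observed share converges on the configured weights, which is why a small initial weight (e.g. 10%) is a safe way to canary a new revision.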
## 6. Helpful Tips for Using VESSL Serve
### Simultaneously Update Revisions and Endpoint Configurations
After defining a Revision using YAML, you can create the revision and launch the gateway simultaneously by providing parameters directly in the CLI. Here's an example of the CLI command:
```bash
vessl serve revision create --serving serve-example -f serve-example.yaml \
--update-gateway --enable-gateway-if-off --update-gateway-port 8000 --update-gateway-weight 100
```
By using the `--update-gateway` option, you can update the gateway (endpoint) simultaneously while creating a revision.
The following options can be used in conjunction:
* `--enable-gateway-if-off`: This option changes the gateway's status to "enabled" if it's currently disabled.
* `--update-gateway-port`: Specify the port to be used by the newly created revision. This should be used in conjunction with `--update-gateway-weight` below.
* `--update-gateway-weight`: Define how traffic should be distributed to the newly created revision. This should be used alongside the `--update-gateway-port` option mentioned above.
### Troubleshooting
* `NotFound (404): Requested entity not found` error while creating Revisions or Gateways via the CLI:
* Use the `vessl whoami` command to confirm if the default organization matches the one where Serving exists.
* You can use the `vessl configure --reset` command to change the default organization.
* Ensure that Serving is properly created within the selected default organization.
* What's the difference between Gateway and Endpoint?
* There is no difference between the two terms; they refer to the same concept.
* To prevent confusion, these terms will be unified under "Endpoint" in the future.
* HPA Scale-in/Scale-out Approach:
* Currently, VESSL Serve operates based on Kubernetes' Horizontal Pod Autoscaler (HPA) and uses its algorithms as is. For detailed information, refer to the [Kubernetes documentation](https://kubernetes.io/ko/docs/tasks/run-application/horizontal-pod-autoscale/).
* As an example of how it works based on CPU metrics:
    * Desired replicas = `ceil[current replicas * (current CPU metric value / desired CPU metric value)]`
* The HPA constantly monitors this metric and adjusts the current replicas within the [min, max] range.
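As a concrete illustration of the formula above (plain Python, not VESSL-specific), suppose 2 replicas are running at 90% CPU against a 60% target: the HPA would scale out to ceil(2 * 90 / 60) = 3 replicas, clamped to the configured [min, max] range:
```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float,
                     min_replicas: int, max_replicas: int) -> int:
    """Evaluate the Kubernetes HPA formula and clamp to the [min, max] range."""
    desired = math.ceil(current * (current_metric / target_metric))
    return max(min_replicas, min(desired, max_replicas))

# 2 replicas at 90% CPU with a 60% target -> 3 replicas (within min=1, max=3)
print(desired_replicas(2, current_metric=90, target_metric=60,
                       min_replicas=1, max_replicas=3))  # -> 3
```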