philschmid (HF staff) committed
Commit a854397
1 Parent(s): 1aa9cb5

add custom handler

README.md ADDED
@@ -0,0 +1,219 @@
+ ---
+ license: mit
+ tags:
+ - sentence-embeddings
+ - endpoints-template
+ - optimum
+ library_name: generic
+ ---
+
+ # Optimized and Quantized [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) with a custom pipeline.py
+
+
+ This repository implements a `custom` task for `sentence-embeddings` for 🤗 Inference Endpoints, providing accelerated inference via [🤗 Optimum](https://huggingface.co/docs/optimum/index). The code for the customized pipeline is in [pipeline.py](https://huggingface.co/philschmid/all-MiniLM-L6-v2-optimum-embeddings/blob/main/pipeline.py).
+
+ Below we also describe how the model was converted & optimized, based on the [Accelerate Sentence Transformers with Hugging Face Optimum](https://www.philschmid.de/optimize-sentence-transformers) blog post. You can also check out the [notebook](https://huggingface.co/philschmid/all-MiniLM-L6-v2-optimum-embeddings/blob/main/convert.ipynb).
+
+ To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _double check that it is selected_
+
+ ### Expected request payload
+
+ ```json
+ {
+ "inputs": "The sky is a blue today and not gray"
+ }
+ ```
+
+ Below is an example of how to run a request using Python and `requests`.
+
+ ## Run Request
+
+ ```python
+ import requests as r
+
+ ENDPOINT_URL = ""
+ HF_TOKEN = ""
+
+
+ def predict(document_string: str = None):
+     payload = {"inputs": document_string}
+     response = r.post(
+         ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
+     )
+     return response.json()
+
+
+ prediction = predict(
+     document_string="The sky is a blue today and not gray"
+ )
+ ```
+
+ Expected output:
+
+ ```python
+ {'embeddings': [[-0.021580450236797333,
+ 0.021715054288506508,
+ 0.00979710929095745,
+ -0.0005379787762649357,
+ 0.04682469740509987,
+ -0.013600599952042103,
+ ...
+ }
+ ```
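+
+ The returned embeddings are already L2-normalized (see the `pipeline.py` shown further below), so the cosine similarity of two sentences is simply the dot product of their vectors. A minimal sketch of how you might use the endpoint output, reusing the `predict` helper from above (hypothetical sentences, endpoint assumed to be running):
+
+ ```python
+ import numpy as np
+
+ # embed two sentences via the deployed endpoint
+ emb_a = np.array(predict(document_string="The sky is a blue today and not gray")["embeddings"][0])
+ emb_b = np.array(predict(document_string="Today the sky is blue, not gray")["embeddings"][0])
+
+ # vectors are normalized, so the dot product equals the cosine similarity
+ print(f"cosine similarity: {float(np.dot(emb_a, emb_b)):.4f}")
+ ```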
+
+
+
+ ## How to create your own optimized and quantized model
+
+ Steps:
+ * [1. Convert model to ONNX](#1-convert-model-to-onnx)
+ * [2. Optimize & quantize model with Optimum](#2-optimize--quantize-model-with-optimum)
+ * [3. Create Custom Handler for Inference Endpoints](#3-create-custom-handler-for-inference-endpoints)
+
+ Helpful links:
+ * [Accelerate Sentence Transformers with Hugging Face Optimum](https://www.philschmid.de/optimize-sentence-transformers)
+ * [Create Custom Handler Endpoints](https://link-to-docs)
+
+ ## Setup & Installation
+
+ ```python
+ %%writefile requirements.txt
+ optimum[onnxruntime]==1.3.0
+ mkl-include
+ mkl
+ ```
+
+ Install the requirements:
+
+ ```python
+ !pip install -r requirements.txt
+ ```
+
+ ## 1. Convert model to ONNX
+
+
+ ```python
+ from optimum.onnxruntime import ORTModelForFeatureExtraction
+ from transformers import AutoTokenizer
+ from pathlib import Path
+
+
+ model_id = "sentence-transformers/all-MiniLM-L6-v2"
+ onnx_path = Path(".")
+
+ # load vanilla transformers and convert to onnx
+ model = ORTModelForFeatureExtraction.from_pretrained(model_id, from_transformers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # save onnx checkpoint and tokenizer
+ model.save_pretrained(onnx_path)
+ tokenizer.save_pretrained(onnx_path)
+ ```
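+
+ Optional sanity check (not part of the original walkthrough): you can compare the exported ONNX model against the vanilla `transformers` model on the same input; the token embeddings should match closely. This is only a sketch and assumes the variables from the cell above.
+
+ ```python
+ import torch
+ from transformers import AutoModel
+
+ # plain PyTorch reference model
+ reference = AutoModel.from_pretrained(model_id)
+
+ encoded = tokenizer("The sky is a blue today and not gray", return_tensors="pt")
+ with torch.no_grad():
+     ref_out = reference(**encoded).last_hidden_state
+ onnx_out = model(**encoded).last_hidden_state
+
+ # the maximum absolute difference should be tiny
+ print(torch.max(torch.abs(ref_out - onnx_out)))
+ ```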
+
+
+ ## 2. Optimize & quantize model with Optimum
+
+
+ ```python
+ from optimum.onnxruntime import ORTOptimizer, ORTQuantizer
+ from optimum.onnxruntime.configuration import OptimizationConfig, AutoQuantizationConfig
+
+ # create ORTOptimizer and define optimization configuration
+ optimizer = ORTOptimizer.from_pretrained(model_id, feature=model.pipeline_task)
+ optimization_config = OptimizationConfig(optimization_level=99)  # enable all optimizations
+
+ # apply the optimization configuration to the model
+ optimizer.export(
+     onnx_model_path=onnx_path / "model.onnx",
+     onnx_optimized_model_output_path=onnx_path / "model-optimized.onnx",
+     optimization_config=optimization_config,
+ )
+
+
+ # create ORTQuantizer and define quantization configuration
+ dynamic_quantizer = ORTQuantizer.from_pretrained(model_id, feature=model.pipeline_task)
+ dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
+
+ # apply the quantization configuration to the model
+ model_quantized_path = dynamic_quantizer.export(
+     onnx_model_path=onnx_path / "model-optimized.onnx",
+     onnx_quantized_model_output_path=onnx_path / "model-quantized.onnx",
+     quantization_config=dqconfig,
+ )
+ ```
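+
+ To see what optimization and quantization buy you on disk, you can compare the sizes of the three ONNX files created above (a small helper sketch, file names taken from the code above):
+
+ ```python
+ import os
+
+ for file_name in ["model.onnx", "model-optimized.onnx", "model-quantized.onnx"]:
+     size_mb = os.path.getsize(onnx_path / file_name) / (1024 * 1024)
+     print(f"{file_name}: {size_mb:.2f} MB")
+ ```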
+
+ ## 3. Create Custom Handler for Inference Endpoints
+
+
+ ```python
+ %%writefile pipeline.py
+ from typing import Dict, List, Any
+ from optimum.onnxruntime import ORTModelForFeatureExtraction
+ from transformers import AutoTokenizer
+ import torch.nn.functional as F
+ import torch
+
+ # copied from the model card
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ class PreTrainedPipeline():
+     def __init__(self, path=""):
+         # load the optimized model
+         self.model = ORTModelForFeatureExtraction.from_pretrained(path, file_name="model-quantized.onnx")
+         self.tokenizer = AutoTokenizer.from_pretrained(path)
+
+     def __call__(self, data: Any) -> List[List[Dict[str, float]]]:
+         """
+         Args:
+             data (:obj:):
+                 includes the input data and the parameters for the inference.
+         Return:
+             A :obj:`list`:. The list contains the embeddings of the inference inputs
+         """
+         inputs = data.get("inputs", data)
+
+         # tokenize the input
+         encoded_inputs = self.tokenizer(inputs, padding=True, truncation=True, return_tensors='pt')
+         # run the model
+         outputs = self.model(**encoded_inputs)
+         # perform pooling
+         sentence_embeddings = mean_pooling(outputs, encoded_inputs['attention_mask'])
+         # normalize embeddings
+         sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
+         # postprocess the prediction
+         return {"embeddings": sentence_embeddings.tolist()}
+ ```
+
+ Test the custom pipeline:
+
+
+ ```python
+ from pipeline import PreTrainedPipeline
+
+ # init handler
+ my_handler = PreTrainedPipeline(path=".")
+
+ # prepare sample payload
+ request = {"inputs": "I am quite excited how this will turn out"}
+
+ # test the handler
+ %timeit my_handler(request)
+ ```
+
+ Results:
+
+ ```
+ 1.55 ms ± 2.04 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
+ ```
+
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "_name_or_path": "deepset/roberta-base-squad2",
+ "architectures": [
+ "RobertaForQuestionAnswering"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "eos_token_id": 2,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "language": "english",
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 514,
+ "name": "Roberta",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "transformers_version": "4.21.3",
+ "type_vocab_size": 1,
+ "use_cache": false,
+ "vocab_size": 50265
+ }
handler.py ADDED
@@ -0,0 +1,26 @@
+ from typing import Dict, List, Any
+ from optimum.onnxruntime import ORTModelForQuestionAnswering
+ from transformers import AutoTokenizer, pipeline
+
+
+ class EndpointHandler():
+     def __init__(self, path=""):
+         # load the optimized model
+         self.model = ORTModelForQuestionAnswering.from_pretrained(path, file_name="model_optimized_quantized.onnx")
+         self.tokenizer = AutoTokenizer.from_pretrained(path)
+         # create pipeline
+         self.pipeline = pipeline("question-answering", model=self.model, tokenizer=self.tokenizer)
+
+     def __call__(self, data: Any) -> List[List[Dict[str, float]]]:
+         """
+         Args:
+             data (:obj:):
+                 includes the input data and the parameters for the inference.
+         Return:
+             A :obj:`list`:. The list contains the answer and scores of the inference inputs
+         """
+         inputs = data.get("inputs", data)
+         # run the model
+         prediction = self.pipeline(**inputs)
+         # return prediction
+         return prediction
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:570afefbc8642150310e46c10a252dd091c8f44449e8a3a65a425f77991dc2ab
+ size 496337664
model_optimized.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11c0577c4bb3afdb2a88e21807d5722511b6aa678d6d8275a7ba73c5cd8f88b1
+ size 496254364
model_optimized_quantized.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a27adda924cc0cd34fde41f606da51673ebefb7132a5518e41f1196ebc362f1
+ size 305175132
optimize_model.ipynb ADDED
@@ -0,0 +1,438 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Convert & Optimize model with Optimum \n",
+ "\n",
+ "\n",
+ "Steps:\n",
+ "1. Convert model to ONNX\n",
+ "2. Optimize & quantize model with Optimum\n",
+ "3. Create Custom Handler for Inference Endpoints\n",
+ "4. Test Custom Handler Locally\n",
+ "5. Push to repository and create Inference Endpoint\n",
+ "\n",
+ "Helpful links:\n",
+ "* [Accelerate Transformers with Hugging Face Optimum](https://huggingface.co/blog/optimum-inference)\n",
+ "* [Optimizing Transformers for GPUs with Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu)\n",
+ "* [Optimum Documentation](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort)\n",
+ "* [Create Custom Handler Endpoints](https://link-to-docs)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup & Installation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Writing requirements.txt\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%writefile requirements.txt\n",
+ "optimum[onnxruntime]==1.4.0\n",
+ "mkl-include\n",
+ "mkl"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install -r requirements.txt"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 0. Baseline Performance\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from transformers import pipeline\n",
+ "\n",
+ "qa = pipeline(\"question-answering\",model=\"deepset/roberta-base-squad2\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Okay, let's test the performance (latency) with a sequence length of 128."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "context=\"Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value.\" \n",
+ "question=\"As what is Philipp working?\" \n",
+ "\n",
+ "payload = {\"inputs\": {\"question\": question, \"context\": context}}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Vanilla model Average latency (ms) - 64.15 +\\- 2.44\n"
+ ]
+ }
+ ],
+ "source": [
+ "from time import perf_counter\n",
+ "import numpy as np \n",
+ "\n",
+ "def measure_latency(pipe,payload):\n",
+ "    latencies = []\n",
+ "    # warm up\n",
+ "    for _ in range(10):\n",
+ "        _ = pipe(question=payload[\"inputs\"][\"question\"], context=payload[\"inputs\"][\"context\"])\n",
+ "    # Timed run\n",
+ "    for _ in range(50):\n",
+ "        start_time = perf_counter()\n",
+ "        _ = pipe(question=payload[\"inputs\"][\"question\"], context=payload[\"inputs\"][\"context\"])\n",
+ "        latency = perf_counter() - start_time\n",
+ "        latencies.append(latency)\n",
+ "    # Compute run statistics\n",
+ "    time_avg_ms = 1000 * np.mean(latencies)\n",
+ "    time_std_ms = 1000 * np.std(latencies)\n",
+ "    return f\"Average latency (ms) - {time_avg_ms:.2f} +\\- {time_std_ms:.2f}\"\n",
+ "\n",
+ "print(f\"Vanilla model {measure_latency(qa,payload)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1. Convert model to ONNX"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "df00c03d67b546bf8a3d1a327b9380f5",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/571 [00:00<?, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "('./tokenizer_config.json',\n",
+ " './special_tokens_map.json',\n",
+ " './vocab.json',\n",
+ " './merges.txt',\n",
+ " './added_tokens.json',\n",
+ " './tokenizer.json')"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from optimum.onnxruntime import ORTModelForQuestionAnswering\n",
+ "from transformers import AutoTokenizer\n",
+ "from pathlib import Path\n",
+ "\n",
+ "\n",
+ "model_id=\"deepset/roberta-base-squad2\"\n",
+ "onnx_path = Path(\".\")\n",
+ "\n",
+ "# load vanilla transformers and convert to onnx\n",
+ "model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)\n",
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
+ "\n",
+ "# save onnx checkpoint and tokenizer\n",
+ "model.save_pretrained(onnx_path)\n",
+ "tokenizer.save_pretrained(onnx_path)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2. Optimize & quantize model with Optimum"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2022-09-12 18:47:03.240390005 [W:onnxruntime:, inference_session.cc:1488 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in.\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "PosixPath('.')"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from optimum.onnxruntime import ORTOptimizer, ORTQuantizer\n",
+ "from optimum.onnxruntime.configuration import OptimizationConfig, AutoQuantizationConfig\n",
+ "\n",
+ "# Create the optimizer\n",
+ "optimizer = ORTOptimizer.from_pretrained(model)\n",
+ "\n",
+ "# Define the optimization strategy by creating the appropriate configuration\n",
+ "optimization_config = OptimizationConfig(optimization_level=99) # enable all optimizations\n",
+ "\n",
+ "# Optimize the model\n",
+ "optimizer.optimize(save_dir=onnx_path, optimization_config=optimization_config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# create ORTQuantizer and define quantization configuration\n",
+ "dynamic_quantizer = ORTQuantizer.from_pretrained(onnx_path, file_name=\"model_optimized.onnx\")\n",
+ "dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)\n",
+ "\n",
+ "# apply the quantization configuration to the model\n",
+ "model_quantized_path = dynamic_quantizer.quantize(\n",
+ "    save_dir=onnx_path,\n",
+ "    quantization_config=dqconfig,\n",
+ ")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3. Create Custom Handler for Inference Endpoints\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Overwriting handler.py\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%writefile handler.py\n",
+ "from typing import Dict, List, Any\n",
+ "from optimum.onnxruntime import ORTModelForQuestionAnswering\n",
+ "from transformers import AutoTokenizer, pipeline\n",
+ "\n",
+ "\n",
+ "class EndpointHandler():\n",
+ "    def __init__(self, path=\"\"):\n",
+ "        # load the optimized model\n",
+ "        self.model = ORTModelForQuestionAnswering.from_pretrained(path, file_name=\"model_optimized_quantized.onnx\")\n",
+ "        self.tokenizer = AutoTokenizer.from_pretrained(path)\n",
+ "        # create pipeline\n",
+ "        self.pipeline = pipeline(\"question-answering\", model=self.model, tokenizer=self.tokenizer)\n",
+ "\n",
+ "    def __call__(self, data: Any) -> List[List[Dict[str, float]]]:\n",
+ "        \"\"\"\n",
+ "        Args:\n",
+ "            data (:obj:):\n",
+ "                includes the input data and the parameters for the inference.\n",
+ "        Return:\n",
+ "            A :obj:`list`:. The list contains the answer and scores of the inference inputs\n",
+ "        \"\"\"\n",
+ "        inputs = data.get(\"inputs\", data)\n",
+ "        # run the model\n",
+ "        prediction = self.pipeline(**inputs)\n",
+ "        # return prediction\n",
+ "        return prediction"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4. Test Custom Handler Locally\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'score': 0.4749588668346405,\n",
+ " 'start': 88,\n",
+ " 'end': 102,\n",
+ " 'answer': 'Technical Lead'}"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from handler import EndpointHandler\n",
+ "\n",
+ "# init handler\n",
+ "my_handler = EndpointHandler(path=\".\")\n",
+ "\n",
+ "# prepare sample payload\n",
+ "context=\"Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value.\" \n",
+ "question=\"As what is Philipp working?\" \n",
+ "\n",
+ "payload = {\"inputs\": {\"question\": question, \"context\": context}}\n",
+ "\n",
+ "# test the handler\n",
+ "my_handler(payload)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Optimized & Quantized model Average latency (ms) - 29.90 +\\- 0.53\n"
+ ]
+ }
+ ],
+ "source": [
+ "from time import perf_counter\n",
+ "import numpy as np \n",
+ "\n",
+ "def measure_latency(handler,payload):\n",
+ "    latencies = []\n",
+ "    # warm up\n",
+ "    for _ in range(10):\n",
+ "        _ = handler(payload)\n",
+ "    # Timed run\n",
+ "    for _ in range(50):\n",
+ "        start_time = perf_counter()\n",
+ "        _ = handler(payload)\n",
+ "        latency = perf_counter() - start_time\n",
+ "        latencies.append(latency)\n",
+ "    # Compute run statistics\n",
+ "    time_avg_ms = 1000 * np.mean(latencies)\n",
+ "    time_std_ms = 1000 * np.std(latencies)\n",
+ "    return f\"Average latency (ms) - {time_avg_ms:.2f} +\\- {time_std_ms:.2f}\"\n",
+ "\n",
+ "print(f\"Optimized & Quantized model {measure_latency(my_handler,payload)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`Vanilla model Average latency (ms) - 64.15 +\\- 2.44`"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5. Push to repository and create Inference Endpoint\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# add all our new files\n",
+ "!git add * \n",
+ "# commit our files\n",
+ "!git commit -m \"add custom handler\"\n",
+ "# push the files to the hub\n",
+ "!git push"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.12 ('az': conda)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ },
+ "orig_nbformat": 4,
+ "vscode": {
+ "interpreter": {
+ "hash": "bddb99ecda5b40a820d97bf37f3ff3a89fb9dbcf726ae84d28624ac628a665b4"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }
ort_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "opset": null,
+ "optimization": {},
+ "optimum_version": "1.4.0",
+ "quantization": {
+ "activations_dtype": "QUInt8",
+ "activations_symmetric": false,
+ "format": "QOperator",
+ "is_static": false,
+ "mode": "IntegerOps",
+ "nodes_to_exclude": [],
+ "nodes_to_quantize": [],
+ "operators_to_quantize": [
+ "MatMul",
+ "Add"
+ ],
+ "per_channel": false,
+ "qdq_add_pair_to_weight": false,
+ "qdq_dedicated_pair": false,
+ "qdq_op_type_per_channel_support_to_axis": {
+ "MatMul": 1
+ },
+ "reduce_range": false,
+ "weights_dtype": "QInt8",
+ "weights_symmetric": true
+ },
+ "transformers_version": "4.21.3",
+ "use_external_data_format": false
+ }
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ optimum[onnxruntime]==1.4.0
+ mkl-include
+ mkl
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,67 @@
+ {
+ "add_prefix_space": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "do_lower_case": false,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "errors": "replace",
+ "full_tokenizer_file": null,
+ "mask_token": {
+ "__type": "AddedToken",
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "model_max_length": 512,
+ "name_or_path": "deepset/roberta-base-squad2",
+ "pad_token": {
+ "__type": "AddedToken",
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "special_tokens_map_file": "/home/ubuntu/.cache/huggingface/transformers/c9d2c178fac8d40234baa1833a3b1903d393729bf93ea34da247c07db24900d0.cb2244924ab24d706b02fd7fcedaea4531566537687a539ebb94db511fd122a0",
+ "tokenizer_class": "RobertaTokenizer",
+ "trim_offsets": true,
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff