---
tags:
- generated_from_trainer
- endpoints-template
library_name: generic
datasets:
- funsd
model-index:
- name: layoutlm-funsd
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0045
- Answer: {'precision': 0.7348314606741573, 'recall': 0.8084054388133498, 'f1': 0.7698646262507357, 'number': 809}
- Header: {'precision': 0.44285714285714284, 'recall': 0.5210084033613446, 'f1': 0.47876447876447875, 'number': 119}
- Question: {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065}
- Overall Precision: 0.7599
- Overall Recall: 0.8083
- Overall F1: 0.7866
- Overall Accuracy: 0.8106

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
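
For orientation, these values roughly correspond to the following `transformers.TrainingArguments` configuration. This is a reconstruction, not the exact training script: only the values listed above come from the model card, `output_dir` is an illustrative assumption, and Adam's betas/epsilon match the library defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="layoutlm-funsd",      # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                        # mixed_precision_training: Native AMP
)
```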

## Deploy Model with Inference Endpoints

Before we can get started, make sure you meet all of the following requirements:

1. An Organization/User with an active plan and *WRITE* access to the model repository.
2. Access to the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)

### 1. Deploy LayoutLM and Send requests

In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) model to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how to integrate it via an API into your products.

This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can either check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).

We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd), which implements the following `handler.py`:

```python
from typing import Dict, List, Any
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
import torch
from subprocess import run

# install tesseract-ocr and pytesseract
run("apt install -y tesseract-ocr", shell=True, check=True)
run("pip install pytesseract", shell=True, check=True)

# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]

# set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class EndpointHandler:
    def __init__(self, path=""):
        # load model and processor from path
        self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
        self.processor = LayoutLMv2Processor.from_pretrained(path)

    def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
        """
        Args:
            data (:obj:):
                includes the deserialized image file as PIL.Image
        """
        # process input
        image = data.pop("inputs", data)

        # process image
        encoding = self.processor(image, return_tensors="pt")

        # run prediction
        with torch.inference_mode():
            outputs = self.model(
                input_ids=encoding.input_ids.to(device),
                bbox=encoding.bbox.to(device),
                attention_mask=encoding.attention_mask.to(device),
                token_type_ids=encoding.token_type_ids.to(device),
            )
            predictions = outputs.logits.softmax(-1)

        # post process output
        result = []
        for item, inp_ids, bbox in zip(
            predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
        ):
            label = self.model.config.id2label[int(item.argmax().cpu())]
            if label == "O":
                continue
            score = item.max().item()
            text = self.processor.tokenizer.decode(inp_ids)
            bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
            result.append({"label": label, "score": score, "text": text, "bbox": bbox})
        return {"predictions": result}
```
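
Before deploying, you can sanity-check the handler locally. A minimal sketch, assuming the model repository has been cloned to the current working directory with its requirements installed, and that a sample document image is available (the file name `sample.png` is an illustrative assumption):

```python
from PIL import Image
from handler import EndpointHandler  # assumes handler.py sits in the current directory

# instantiate the handler from the cloned model repository
my_handler = EndpointHandler(path=".")

# simulate the payload the endpoint receives: a deserialized PIL image under "inputs"
image = Image.open("sample.png")  # hypothetical sample document image
payload = {"inputs": image}

# run inference locally and inspect the predictions
print(my_handler(payload))
```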

### 2. Send HTTP request using Python

Hugging Face Inference Endpoints can work directly with binary data, which means we can send the image of our document straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).

```python
import json
import requests as r
import mimetypes

ENDPOINT_URL = ""  # url of your endpoint
HF_TOKEN = ""  # organization token where you deployed your endpoint

def predict(path_to_image: str = None):
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": mimetypes.guess_type(path_to_image)[0],
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()

prediction = predict(path_to_image="path_to_your_image.png")

print(prediction)
# {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
```
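
The `predict` helper assumes the request succeeds; while the endpoint is still initializing, or if the token is wrong, `response.json()` can fail with an unhelpful error. A more defensive variant is sketched below, reusing the same `ENDPOINT_URL` and `HF_TOKEN` from above (the name `predict_safe` is an illustrative assumption):

```python
import mimetypes
import requests as r

def predict_safe(path_to_image: str):
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": mimetypes.guess_type(path_to_image)[0],
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    # surface 4xx/5xx errors (e.g. endpoint still starting, bad token) before parsing JSON
    response.raise_for_status()
    return response.json()
```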

### 3. Draw result on image

To get a better understanding of what the model predicted, you can also draw the predictions on the provided image.

```python
from PIL import Image, ImageDraw, ImageFont

# draw results on image
def draw_result(path_to_image, result):
    image = Image.open(path_to_image)
    label2color = {
        "B-HEADER": "blue",
        "B-QUESTION": "red",
        "B-ANSWER": "green",
        "I-HEADER": "blue",
        "I-QUESTION": "red",
        "I-ANSWER": "green",
    }

    # draw predictions over the image
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    for res in result:
        draw.rectangle(res["bbox"], outline="black")
        draw.rectangle(res["bbox"], outline=label2color[res["label"]])
        draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
    return image

draw_result("path_to_your_image.png", prediction["predictions"])
```
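
Since `draw_result` returns a regular `PIL.Image`, you can also persist the annotated document, for example (the output file name is an illustrative assumption):

```python
annotated = draw_result("path_to_your_image.png", prediction["predictions"])
annotated.save("annotated_document.png")  # hypothetical output path
```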