---
language: 
  - es
  - en
pipeline_tag: image-classification
widget:
- src: https://upserve.com/media/sites/2/Bill-from-Mezcalero-in-Washington-D.C.-photo-by-Alfredo-Solis-1-e1507226752437.jpg
  example_title: receipt
- src: https://templates.invoicehome.com/invoice-template-us-neat-750px.png
  example_title: invoice
---
**InvoiceReceiptClassifier** is a fine-tuned LayoutLMv2 model that classifies a document image as either an invoice or a receipt.

## Quick start: using the raw model

```python
from transformers import (
    AutoModelForSequenceClassification,
    LayoutLMv2FeatureExtractor,
    LayoutLMv2Tokenizer,
    LayoutLMv2Processor,
)

# Load the fine-tuned classifier and build the LayoutLMv2 processor.
# The default feature extractor runs OCR on the page image, so Tesseract
# and pytesseract need to be installed.
model = AutoModelForSequenceClassification.from_pretrained("fedihch/InvoiceReceiptClassifier")
feature_extractor = LayoutLMv2FeatureExtractor()
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(feature_extractor, tokenizer)
```
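If a GPU is available, inference is faster with the model on that device. The snippet below is an optional, minimal sketch; the next block already copies each input tensor to `model.device`, so it works either way.

```python
import torch

# Optional: move the model to the GPU (if one is available) and switch to
# evaluation mode before running inference.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
```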
```python
from PIL import Image

# Open the document image and scale it to a fixed height of 600 px
# while preserving the aspect ratio.
input_img = Image.open("*****.jpg")
w, h = input_img.size
input_img = input_img.convert("RGB").resize((int(w * 600 / h), 600))

# Run OCR + tokenization, then move the tensors to the model's device.
encoded_inputs = processor(input_img, return_tensors="pt")
for k, v in encoded_inputs.items():
    encoded_inputs[k] = v.to(model.device)

# Forward pass: the highest logit gives the predicted class.
outputs = model(**encoded_inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
id2label = {0: "invoice", 1: "receipt"}
print(id2label[predicted_class_idx])
```
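
For repeated use, the steps above can be wrapped in a small helper. This is a minimal sketch that reuses the `model`, `processor`, and `id2label` objects defined above; `classify_document` is a hypothetical name introduced here for illustration, not part of the model card.

```python
import torch
from PIL import Image

def classify_document(image_path: str) -> str:
    """Return "invoice" or "receipt" for the document image at image_path."""
    # Same preprocessing as above: RGB conversion and resize to 600 px height.
    image = Image.open(image_path).convert("RGB")
    w, h = image.size
    image = image.resize((int(w * 600 / h), 600))

    # OCR + tokenization, then move tensors to the model's device.
    encoded = processor(image, return_tensors="pt")
    encoded = {k: v.to(model.device) for k, v in encoded.items()}

    with torch.no_grad():  # inference only, no gradients needed
        logits = model(**encoded).logits

    return id2label[logits.argmax(-1).item()]

print(classify_document("*****.jpg"))
```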