---
library_name: transformers
metrics:
- cer
widget:
- src: "https://i.ibb.co/QXZFSNx/test7.png"
  output:
    text: รมว.ธรรมนัส ลงพื้นที่
language:
- th
pipeline_tag: image-to-text
---

# thai_trocr_thaigov_v2

A Vision Encoder-Decoder model for Thai OCR:
- Uses microsoft/trocr-base-handwritten as the encoder.
- Uses airesearch/wangchanberta-base-att-spm-uncased as the decoder (a sketch of how the two checkpoints can be assembled follows this list).
- Fine-tuned on a dataset of 250k synthetic text images generated from the [ThaiGov V2 Corpus](https://github.com/PyThaiNLP/thaigov-v2-corpus).
- Synthetic text images were rendered with [SynthTIGER](https://github.com/clovaai/synthtiger).
- A useful starting point for fine-tuning on other Thai OCR tasks (see the fine-tuning sketch after the usage example below).
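For reference, pairing a TrOCR image encoder with a WangchanBERTa text decoder can be reproduced with the standard `VisionEncoderDecoderModel.from_encoder_decoder_pretrained` API. The snippet below is a minimal sketch of that assembly, not the exact training code for this checkpoint; the processor construction and token-id wiring are assumptions based on common TrOCR fine-tuning practice.

```python
from transformers import AutoTokenizer, TrOCRProcessor, VisionEncoderDecoderModel

encoder_id = "microsoft/trocr-base-handwritten"
decoder_id = "airesearch/wangchanberta-base-att-spm-uncased"

# Pair the TrOCR image processor with the decoder's Thai tokenizer
image_processor = TrOCRProcessor.from_pretrained(encoder_id).image_processor
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
processor = TrOCRProcessor(image_processor=image_processor, tokenizer=tokenizer)

# Glue the pretrained encoder and decoder together; the cross-attention
# weights are randomly initialized and learned during fine-tuning
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

# Generation-related config (assumed setup, mirroring typical TrOCR fine-tuning recipes)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```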

# Usage

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("kkatiz/thai-trocr-thaigov-v2")
model = VisionEncoderDecoderModel.from_pretrained("kkatiz/thai-trocr-thaigov-v2")

# Load your image and convert it to RGB
image = Image.open("... your image path").convert("RGB")

# Preprocess the image and generate the transcription
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)

generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
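# Fine-tuning

Since the checkpoint is intended as a starting point for further Thai OCR fine-tuning, the following is a hedged sketch of one way to continue training with `Seq2SeqTrainer`. The dataset class, `my_train_samples`, and the hyperparameters are illustrative placeholders, not the recipe used for this model.

```python
import torch
from PIL import Image
from torch.utils.data import Dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    default_data_collator,
)

processor = TrOCRProcessor.from_pretrained("kkatiz/thai-trocr-thaigov-v2")
model = VisionEncoderDecoderModel.from_pretrained("kkatiz/thai-trocr-thaigov-v2")

class OCRDataset(Dataset):
    """Turns (image path, transcription) pairs into model inputs."""

    def __init__(self, samples, processor, max_target_length=128):
        self.samples = samples            # list of (image_path, text) tuples
        self.processor = processor
        self.max_target_length = max_target_length

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image_path, text = self.samples[idx]
        image = Image.open(image_path).convert("RGB")
        pixel_values = self.processor(image, return_tensors="pt").pixel_values.squeeze(0)
        labels = self.processor.tokenizer(
            text,
            padding="max_length",
            max_length=self.max_target_length,
            truncation=True,
        ).input_ids
        # Replace padding token ids with -100 so they are ignored by the loss
        labels = [l if l != self.processor.tokenizer.pad_token_id else -100 for l in labels]
        return {"pixel_values": pixel_values, "labels": torch.tensor(labels)}

# my_train_samples is a placeholder for your own (image_path, text) pairs
train_dataset = OCRDataset(my_train_samples, processor)

training_args = Seq2SeqTrainingArguments(
    output_dir="./thai-trocr-finetuned",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,
    fp16=torch.cuda.is_available(),
    logging_steps=100,
    save_steps=1000,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
)
trainer.train()
```

For evaluation, character error rate (CER) is the natural metric for this model, as declared in the card metadata.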