--- |
|
tags:
- trocr
- image-to-text
widget:
- src: https://raw.githubusercontent.com/ku21fan/STR-Fewer-Labels/main/demo_image/1.png
  example_title: Example 1
- src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/LSVT1.jpg
  example_title: Example 2
- src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/ArT2.jpg
  example_title: Example 3
|
--- |
|
|
|
# TrOCR (base-sized model, fine-tuned on STR benchmarks) |
|
|
|
TrOCR model fine-tuned on the training sets of the IC13, IC15, IIIT5K and SVT scene text recognition (STR) benchmarks. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
|
|
|
## Model description |
|
|
|
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
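
To see the two halves for yourself, here is a minimal inspection sketch (not part of the original card; the exact submodule class names can vary slightly across `transformers` versions):

```python
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-str")

# the image encoder and the text decoder are separate submodules of the wrapper
print(type(model.encoder).__name__)  # e.g. ViTModel (the BEiT-initialized image encoder)
print(type(model.decoder).__name__)  # e.g. TrOCRForCausalLM (the RoBERTa-initialized decoder)
```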
|
|
|
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder. The Transformer text decoder then autoregressively generates tokens.
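
As a back-of-the-envelope sketch of the patch geometry (the 384x384 input resolution is taken from the TrOCR paper, not stated on this card):

```python
# each image is resized to 384x384 and cut into 16x16 patches, so the encoder
# receives a sequence of (384 // 16) ** 2 = 576 linearly embedded patches
image_size, patch_size = 384, 16
num_patches = (image_size // patch_size) ** 2
print(num_patches)  # 576
```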
|
|
|
## Intended uses & limitations |
|
|
|
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you. |
|
|
|
### How to use |
|
|
|
Here is how to use this model in PyTorch: |
|
|
|
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests

# load an example text-line image from the IIIT-5K dataset
url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-str')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-str')

# preprocess: resize and normalize the image into a tensor of pixel values
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# autoregressively generate token ids, then decode them into a string
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
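
The same processor and model also handle batches of text-line crops. A minimal sketch reusing the objects from the snippet above (the file names are hypothetical placeholders):

```python
# hypothetical local crops, each containing a single line of text
paths = ["line1.png", "line2.png"]
images = [Image.open(p).convert("RGB") for p in paths]

# the processor resizes every image to the same fixed resolution, so the batch stacks cleanly
pixel_values = processor(images=images, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```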
|
|
|
### BibTeX entry and citation info |
|
|
|
```bibtex
@misc{li2021trocr,
      title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
      author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
      year={2021},
      eprint={2109.10282},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```