---
license: mit
tags:
- vision
inference: false
pipeline_tag: image-text-to-text
---
# UDOP model
The UDOP model was proposed in [Unifying Vision, Text, and Layout for Universal Document Processing](https://arxiv.org/abs/2212.02623) by Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal.
## Model description
UDOP adopts an encoder-decoder Transformer architecture based on T5 for document AI tasks such as document image classification, document parsing, and document visual question answering.
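Since the configuration mirrors T5, one quick way to see the encoder-decoder setup is to inspect it directly. A minimal sketch, assuming the T5-style attribute names exposed by `UdopConfig`:

```python
from transformers import UdopConfig

# UDOP's configuration follows T5's encoder-decoder fields
config = UdopConfig.from_pretrained("microsoft/udop-large")

print(config.d_model)             # hidden size shared by encoder and decoder
print(config.num_layers)          # number of encoder layers
print(config.num_decoder_layers)  # number of decoder layers
```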
## Intended uses & limitations
You can use the model for document image classification, document parsing, and document visual question answering (DocVQA). The task is selected through a natural-language prompt prepended to the text input, as sketched below.
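The DocVQA prefix in this sketch matches the usage example further down; the classification prompt is a hypothetical placeholder, so check the paper and demo notebooks for the exact strings used during pre-training:

```python
# DocVQA: task prefix followed by the question (same pattern as the example below)
docvqa_prompt = "Question answering. What is the date on the form?"

# document classification: hypothetical prompt, for illustration only;
# see the paper / demo notebooks for the exact pre-training strings
classification_prompt = "Document classification."
```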
### How to use
Here's how to use the model on a document image:
```python
from transformers import AutoProcessor, UdopForConditionalGeneration
from datasets import load_dataset

# load the model and processor
# in this case, we have already performed OCR ourselves,
# so we initialize the processor with `apply_ocr=False`
processor = AutoProcessor.from_pretrained("microsoft/udop-large", apply_ocr=False)
model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")

# load an example image, along with the words and coordinates
# that were extracted using an OCR engine
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
question = "Question answering. What is the date on the form?"

# prepare everything for the model
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")

# autoregressive generation
predicted_ids = model.generate(**encoding)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
# expected output: 9/30/92
```
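If a GPU is available, the same snippet runs there as well. The following is a minimal sketch, where `max_new_tokens=20` is an illustrative cap rather than a recommended setting:

```python
import torch

# optional: move the model and inputs to GPU before generating
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
encoding = {k: v.to(device) for k, v in encoding.items()}

predicted_ids = model.generate(**encoding, max_new_tokens=20)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```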
Refer to the [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UDOP) for fine-tuning and inference examples.
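For orientation, a single supervised training step looks roughly as follows. This is a hedged sketch that reuses `encoding` from the snippet above and tokenizes the target answer as T5-style labels; the notebooks cover the full recipe, including padding and label masking:

```python
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative learning rate

# tokenize the ground-truth answer as the decoder target
labels = processor.tokenizer("9/30/92", return_tensors="pt").input_ids

outputs = model(**encoding, labels=labels)  # forward pass computes the loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```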
### BibTeX entry and citation info
```bibtex
@misc{tang2023unifying,
      title={Unifying Vision, Text, and Layout for Universal Document Processing},
      author={Zineng Tang and Ziyi Yang and Guoxin Wang and Yuwei Fang and Yang Liu and Chenguang Zhu and Michael Zeng and Cha Zhang and Mohit Bansal},
      year={2023},
      eprint={2212.02623},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```