---
license: gpl-3.0
tags:
- DocVQA
- Document Question Answering
- Document Visual Question Answering
datasets:
- rubentito/mp-docvqa
language:
- en
---

# LayoutLMv3 base fine-tuned on MP-DocVQA

This is the pretrained LayoutLMv3 model from the [Microsoft hub](https://huggingface.co/microsoft/layoutlmv3-base), fine-tuned on the Multi-Page DocVQA (MP-DocVQA) dataset.


This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
- Results on the MP-DocVQA dataset are reported in Table 2.
- Training hyperparameters can be found in Table 8 of Appendix D.


## How to use

Here is how to use this model to answer a question about a document image in PyTorch:

```python
import torch
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForQuestionAnswering

processor = LayoutLMv3Processor.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa", apply_ocr=False)
model = LayoutLMv3ForQuestionAnswering.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa")

image = Image.open("example.jpg").convert("RGB")
question = "Is this a question?"
context = ["Example"]
boxes = [[0, 0, 1000, 1000]]  # One box per context word, normalized to a 0-1000 scale; this example box covers the whole image.
document_encoding = processor(image, question, context, boxes=boxes, return_tensors="pt")
outputs = model(**document_encoding)

# Decode the predicted answer span from the start/end logits.
start_idx = torch.argmax(outputs.start_logits, dim=1).item()
end_idx = torch.argmax(outputs.end_logits, dim=1).item()
answer = processor.tokenizer.decode(document_encoding["input_ids"][0][start_idx: end_idx + 1]).strip()
```
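
Since MP-DocVQA documents span several pages while LayoutLMv3 encodes one page at a time, the extractive baselines in the paper answer each page independently and keep the most confident prediction, which also yields the answer-page prediction scored by APPA. Below is a minimal sketch of that page-selection loop, reusing `processor` and `model` from the snippet above; the `answer_page` helper and the logit-sum confidence score are illustrative assumptions, not part of the released API:

```python
def answer_page(image, question, words, boxes):
    """Run single-page extractive QA; return the answer and a confidence score."""
    encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
    outputs = model(**encoding)
    start_idx = torch.argmax(outputs.start_logits, dim=1).item()
    end_idx = torch.argmax(outputs.end_logits, dim=1).item()
    # Assumed confidence: sum of the best start and end logits.
    confidence = (outputs.start_logits[0, start_idx] + outputs.end_logits[0, end_idx]).item()
    answer = processor.tokenizer.decode(encoding["input_ids"][0][start_idx: end_idx + 1]).strip()
    return answer, confidence

# pages: hypothetical list of (image, words, boxes) tuples, one entry per document page.
best_answer, best_page, best_confidence = None, None, float("-inf")
for page_idx, (image, words, boxes) in enumerate(pages):
    answer, confidence = answer_page(image, question, words, boxes)
    if confidence > best_confidence:
        best_answer, best_page, best_confidence = answer, page_idx, confidence
```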

## Model results

Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
| Model                                                                            | HF name                             |  ANLS  |  APPA   |
|----------------------------------------------------------------------------------|-------------------------------------|:------:|:-------:|
| [BERT large](https://huggingface.co/rubentito/bert-large-mpdocvqa)                | rubentito/bert-large-mpdocvqa       | 0.4183 | 51.6177 |
| [Longformer base](https://huggingface.co/rubentito/longformer-base-mpdocvqa)      | rubentito/longformer-base-mpdocvqa  | 0.5287 | 71.1696 |
| [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa)    | rubentito/bigbird-base-itc-mpdocvqa | 0.4929 | 67.5433 |
| [**LayoutLMv3 base**](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa)  | rubentito/layoutlmv3-base-mpdocvqa  | 0.4538 | 51.9426 |
| [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa)                      | rubentito/t5-base-mpdocvqa          | 0.5050 | 0.0000  |
| Hi-VT5                                                                            | TBA                                 | 0.6201 | 79.23   |
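
ANLS (Average Normalized Levenshtein Similarity) is the standard DocVQA answer metric, and APPA (Answer Page Prediction Accuracy) measures how often the page containing the answer is identified. For reference, here is a minimal, self-contained sketch of the usual ANLS computation with the standard 0.5 threshold; function and variable names are illustrative:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(predictions, ground_truths, threshold=0.5):
    """predictions: list of strings; ground_truths: list of lists of accepted answers."""
    scores = []
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            nl = levenshtein(pred.lower(), gt.lower()) / max(len(pred), len(gt), 1)
            best = max(best, 1 - nl if nl < threshold else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)
```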

## Citation Information 

```tex
@article{tito2022hierarchical,
  title={Hierarchical multimodal transformers for Multi-Page DocVQA},
  author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
  journal={arXiv preprint arXiv:2212.05935},
  year={2022}
}
```