---
library_name: paddlenlp
license: apache-2.0
language:
- en
- zh
---

[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/ernie-layoutx-base-uncased

## Introduction

Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding.
However, most existing methods lack the systematic mining and utilization of layout-centered knowledge, leading to sub-optimal performance.
In this paper, we propose ERNIE-Layout, a novel document pre-training solution that enhances layout knowledge across the whole workflow,
to learn better representations that combine features from text, layout, and image. Specifically, we first rearrange input sequences
in the serialization stage, and then present a correlative pre-training task, reading order prediction, to learn the proper reading order of documents.
To improve the layout awareness of the model, we integrate a spatial-aware disentangled attention mechanism into the multi-modal transformer and
a replaced-regions prediction task into the pre-training phase. Experimental results show that ERNIE-Layout achieves superior performance
on various downstream tasks, setting a new state of the art on key information extraction, document image classification, and document question answering datasets.

More details: https://arxiv.org/abs/2210.06155
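To give an intuition for the spatial-aware disentangled attention mentioned above, the sketch below shows a simplified, DeBERTa-style decomposition in NumPy: the attention score is the sum of a content-to-content term and content-to-position terms for both 1-D reading order and 2-D layout position. This is an illustrative toy, not ERNIE-Layout's exact formulation; the tensor names (`R1`, `R2`) and the way relative-position embeddings are produced here are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 8  # sequence length, head dimension

H = rng.normal(size=(L, d))      # token content states
R1 = rng.normal(size=(L, L, d))  # 1-D (reading order) relative-position embeddings
R2 = rng.normal(size=(L, L, d))  # 2-D (layout) relative-position embeddings

Wq = rng.normal(size=(d, d))     # query projection
Wk = rng.normal(size=(d, d))     # key projection

Q, K = H @ Wq, H @ Wk

# Content-to-content term: standard dot-product attention.
a_cc = Q @ K.T

# Content-to-position terms: each query also attends to the relative
# 1-D and 2-D position embeddings of every key, projected like keys.
a_cp1 = np.einsum("id,ijd->ij", Q, R1 @ Wk)
a_cp2 = np.einsum("id,ijd->ij", Q, R2 @ Wk)

# Disentangled score = sum of the terms, then scaled softmax.
scores = (a_cc + a_cp1 + a_cp2) / np.sqrt(d)
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)
out = attn @ H  # shape (L, d)
```

In the full model the position terms let the attention pattern depend on where tokens sit on the page, not only on their content, which is what the paper means by improving layout awareness.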

## Available Models

- ernie-layoutx-base-uncased

## How to Use?

Click on the *Use in paddlenlp* button on the top right!
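Alternatively, the model can be loaded directly with PaddleNLP's `from_pretrained` interface. A minimal sketch (assuming `paddlenlp` is installed and the model name resolves as shown; downloading the weights requires network access):

```python
from paddlenlp.transformers import AutoModel, AutoTokenizer

# Downloads and loads the pre-trained ERNIE-Layout weights and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("ernie-layoutx-base-uncased")
model = AutoModel.from_pretrained("ernie-layoutx-base-uncased")
```

From there the model can be fine-tuned on downstream tasks such as key information extraction or document question answering.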

## Citation Info

```text
@article{ernie-layout,
  title = {ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding},
  author = {Peng, Qiming and Pan, Yinxu and Wang, Wenjin and Luo, Bin and Zhang, Zhenyu and Huang, Zhengjie and Hu, Teng and Yin, Weichong and Chen, Yongfeng and Zhang, Yin and Feng, Shikun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2210.06155},
  year = {2022},
}
```