---
license: mit
---

This is the XFUND dataset with annotations at the word level.

For more detail on the original XFUND dataset, see the [XFUND repository](https://github.com/doc-analysis/XFUND).

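As a rough illustration, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch only: the repository id, split name, and field names below are assumptions and may need to be adjusted for this dataset.

```python
from datasets import load_dataset

# Assumed Hub repository id for this dataset -- adjust to the actual id if it differs.
dataset = load_dataset("cooleel/xfund")

# Inspect the available splits and one example.
# The "train" split and word-level fields (e.g. words, bboxes, labels) are assumptions.
print(dataset)
print(dataset["train"][0])
```
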
#### Citation Information
```bibtex
@inproceedings{xu-etal-2022-xfund,
    title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
    author = "Xu, Yiheng and
      Lv, Tengchao and
      Cui, Lei and
      Wang, Guoxin and
      Lu, Yijuan and
      Florencio, Dinei and
      Zhang, Cha and
      Wei, Furu",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.253",
    doi = "10.18653/v1/2022.findings-acl.253",
    pages = "3214--3224",
    abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
```