How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-large-uncased")
model = AutoModel.from_pretrained("microsoft/layoutlm-large-uncased")
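AutoModel resolves this checkpoint to a LayoutLM model. To make use of the layout signal at inference time, you also pass per-token bounding boxes alongside the token ids, as sketched under "Model description" below.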

LayoutLM

Model description

LayoutLM is a simple but effective pre-training method that jointly models text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves state-of-the-art (SOTA) results on multiple datasets. For more details, please refer to our paper:

LayoutLM: Pre-training of Text and Layout for Document Image Understanding. Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. KDD 2020.
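The layout signal enters the model as one bounding box per token, with coordinates normalized to a 0-1000 grid. Below is a minimal sketch of a single forward pass; the words, coordinates, and special-token box values are illustrative assumptions (in practice they would come from an external OCR step), not part of this model card:

from transformers import LayoutLMTokenizer, LayoutLMModel
import torch

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-large-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-large-uncased")

# Hypothetical OCR output: words and their page coordinates, already
# normalized to LayoutLM's 0-1000 grid.
words = ["Invoice", "Number:", "12345"]
boxes = [[110, 70, 250, 100], [260, 70, 390, 100], [400, 70, 510, 100]]

# Tokenize word by word so every sub-token inherits its word's box.
input_ids = [tokenizer.cls_token_id]
token_boxes = [[0, 0, 0, 0]]  # conventional box for [CLS]
for word, box in zip(words, boxes):
    word_ids = tokenizer.encode(word, add_special_tokens=False)
    input_ids.extend(word_ids)
    token_boxes.extend([box] * len(word_ids))
input_ids.append(tokenizer.sep_token_id)
token_boxes.append([1000, 1000, 1000, 1000])  # conventional box for [SEP]

input_ids = torch.tensor([input_ids])
bbox = torch.tensor([token_boxes])
attention_mask = torch.ones_like(input_ids)

outputs = model(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 1024])

The per-token boxes let the 2D position embeddings encode where each token sits on the page, which is the layout half of the text-and-layout pre-training described above.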

Training data

We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0 dataset with two settings:

  • LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters
  • LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters (This Model)
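As a quick sanity check on the configuration above, the parameter count can be read off the checkpoint itself; a minimal sketch:

from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/layoutlm-large-uncased")
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")  # compare against the 343M figure quoted above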

Citation

If you find LayoutLM useful in your research, please cite the following paper:

@misc{xu2019layoutlm,
    title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
    author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
    year={2019},
    eprint={1912.13318},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}