nielsr (HF staff) committed
Commit 6e818cb (1 parent: facb27d)

Add link to docs

Files changed (1):
  1. README.md +4 -0
README.md CHANGED
@@ -6,6 +6,10 @@ license: cc-by-nc-sa-4.0
 # LayoutXLM
 **Multimodal (text + layout/format + image) pre-training for document AI**
 
+LayoutXLM is a multilingual variant of LayoutLMv2.
+
+The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutxlm).
+
 [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutxlm)
 ## Introduction
 LayoutXLM is a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. Experiment results show that it has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUN dataset.
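The new README line above points to the Transformers documentation for this model. Purely as an illustration (not part of this commit), here is a minimal sketch of loading LayoutXLM through that library; it assumes the public `microsoft/layoutxlm-base` checkpoint and that the optional dependencies the LayoutLMv2/LayoutXLM classes rely on (detectron2, torchvision, and pytesseract for the processor's built-in OCR) are installed.

```python
# Hedged sketch, not part of the commit: load LayoutXLM via the Transformers library.
# Assumes the public "microsoft/layoutxlm-base" checkpoint and the optional
# dependencies required by the LayoutLMv2/LayoutXLM classes (detectron2,
# torchvision, pytesseract).
from transformers import LayoutXLMProcessor, LayoutLMv2Model

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")

# The processor converts a document image (and, optionally, pre-extracted words
# with bounding boxes) into input_ids, bbox and image tensors, matching the
# text + layout + image inputs described in the README.
```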