FuriouslyAsleep committed
Commit
ab57781
1 Parent(s): 52584f1

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -1,13 +1,14 @@
-# MarkupLM Large fine-tuned on WebSRC to allow Question Answering
+# MarkupLM Large fine-tuned on WebSRC to allow Question Answering. This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with the adjustments identified below under Fine-tuning args). This version is not endorsed by Microsoft.
 
 **Fine-tuned Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
 
-## Introduction
+## Introduction (from the Microsoft MarkupLM Large model card)
 
 MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
 
 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
 
+
 Fine-tuning args:
 --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4
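
A checkpoint fine-tuned this way can be queried for webpage QA through the `MarkupLMProcessor` and `MarkupLMForQuestionAnswering` classes in `transformers`. A minimal sketch, with assumptions: the repo id passed to `answer` is a placeholder for wherever this checkpoint is published, and `best_span` is a small helper introduced here (not part of MarkupLM) that picks the highest-scoring start token and the best end token at or after it:

```python
import torch


def best_span(start_logits: torch.Tensor, end_logits: torch.Tensor):
    """Pick a (start, end) token span: greedy start, then best end >= start."""
    start = int(start_logits.argmax(-1))
    end = int(end_logits[start:].argmax(-1)) + start
    return start, end


def answer(model_id: str, html: str, question: str) -> str:
    """Run extractive QA over an HTML string with a MarkupLM QA checkpoint."""
    # Imported here so best_span stays usable without transformers installed.
    from transformers import MarkupLMForQuestionAnswering, MarkupLMProcessor

    processor = MarkupLMProcessor.from_pretrained(model_id)
    model = MarkupLMForQuestionAnswering.from_pretrained(model_id)

    # The processor extracts nodes/xpaths from the HTML and pairs them
    # with the question in a single encoded sequence.
    encoding = processor(html, questions=question, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**encoding)

    start, end = best_span(outputs.start_logits[0], outputs.end_logits[0])
    return processor.decode(encoding.input_ids[0][start : end + 1])
```

Usage would look like `answer("<your-namespace>/<this-model>", "<html><body><p>...</p></body></html>", "What is ...?")`; the greedy span selection is a simplification of the n-best span search used in the reference SQuAD-style evaluation scripts.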