# MarkupLM Large fine-tuned on WebSRC for Question Answering

This model is adapted from Microsoft's MarkupLM. It is the result of partially following the instructions in the MarkupLM git repo, with the adjustments listed under "Fine-tuning args" below. This version is not endorsed by Microsoft.

**Fine-tuned multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**

## Introduction (from the Microsoft MarkupLM Large model card)

MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:

[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518), Junlong Li, Yiheng Xu, Lei Cui, Furu Wei

## Fine-tuning args

- `--per_gpu_train_batch_size 4`
- `--warmup_ratio 0.1`
- `--num_train_epochs 4`

## Training data

Training was performed on only a small subset of WebSRC:

- Total number of websites: 60
- Train websites: `['ga09']`
- Test websites: `[]`
- Dev websites: `['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']`
- Number of processed websites: 60
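
## Usage

For reference, below is a minimal extractive-QA sketch using the standard MarkupLM classes in 🤗 Transformers. It assumes this checkpoint loads with the stock `MarkupLMProcessor` and `MarkupLMForQuestionAnswering` classes; the `model_name` value is a placeholder, not this model's actual repo id.

```python
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

# Placeholder: substitute this model's actual Hub id or a local checkpoint path.
model_name = "path/to/this-markuplm-large-websrc-checkpoint"

processor = MarkupLMProcessor.from_pretrained(model_name)
model = MarkupLMForQuestionAnswering.from_pretrained(model_name)

html = "<html><head><title>Demo</title></head><body><p>The train list contains one website: ga09.</p></body></html>"
question = "How many websites are in the train list?"

# The processor extracts nodes and xpaths from the raw HTML and tokenizes both
# the question and the page content into a single encoding.
encoding = processor(html, questions=question, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# Extractive QA: pick the highest-scoring start/end token positions and decode
# the answer span from the input ids.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = processor.decode(encoding.input_ids[0, start : end + 1], skip_special_tokens=True)
print(answer)
```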