---
thumbnail: https://huggingface.co/front/thumbnails/lladoc.png
language:
- en
license: cc-by-4.0
tags:
- transformers
datasets:
- idocvqa
metrics:
- accuracy
---
# LLaDoc (Large Language and Document) model
This model is LLaVA-1.5 (7B) fine-tuned on the iDocVQA dataset and is intended to be used as a multimodal document question-answering system. Note that the training data is limited in scope, covering only certain document domains.
The accuracy achieved on the validation set is 29.58%.
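
Below is a minimal inference sketch. It assumes the checkpoint is published in the standard LLaVA-1.5 format and can be loaded with `LlavaForConditionalGeneration` from `transformers`; the repo id, image path, and question are placeholders, not values confirmed by this card.

```python
# Minimal inference sketch for a LLaVA-1.5-style checkpoint.
# Assumption: the checkpoint loads via LlavaForConditionalGeneration;
# the repo id below is hypothetical, not the actual model id.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "your-org/lladoc-llava-1.5-7b"  # placeholder repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# A document page image and a question about it (iDocVQA-style input).
image = Image.open("document_page.png")
prompt = "USER: <image>\nWhat is the date on this document? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```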
Please refer to the original [LLaVA repository](https://github.com/haotian-liu/LLaVA) for details on preprocessing, training, and the base model.
The paper for this work is available on arXiv: [arXiv:2402.00453](https://arxiv.org/abs/2402.00453).