---
datasets:
- Lin-Chen/ShareGPT4V
pipeline_tag: visual-question-answering
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)
</div>
## Model
llava-phi-3-mini-pretrain is a LLaVA projector pretrained from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/share-captioner_coco_lcs_sam_1246k_1107.json) dataset by [XTuner](https://github.com/InternLM/xtuner).
The fine-tuned LLaVA model is available at [xtuner/llava-phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini).
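
Since this checkpoint contains only the pretrained projector weights rather than a complete chat model, a typical next step is to download it locally and resume the LLaVA fine-tuning stage with XTuner. Below is a minimal sketch, assuming the `huggingface_hub` client; the `xtuner` commands and the `pretrained_pth` config field shown in the comments follow XTuner's usual LLaVA recipes and are assumptions, not something specified in this card.

```python
# Minimal sketch: fetch the projector checkpoint from the Hub so it can be
# used as the starting point for LLaVA fine-tuning with XTuner.
from huggingface_hub import snapshot_download

# Download all files of this repo to a local directory.
local_dir = snapshot_download(repo_id="xtuner/llava-phi-3-mini-pretrain")
print(f"Projector weights downloaded to: {local_dir}")

# From here, the typical (assumed) XTuner workflow is shell-based:
#   xtuner list-cfg                  # browse shipped configs for a llava_phi3 recipe
#   xtuner train <CONFIG_NAME>       # with pretrained_pth pointed at local_dir
# Config names and fields vary by XTuner version; check the XTuner docs.
```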
## Citation
```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished={\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```