---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---

<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>

[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)

</div>

## Model

llava-v1.5-7b-xtuner-pretrain is a LLaVA projector pretrained from [Vicuna-v1.5-7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset by [XTuner](https://github.com/InternLM/xtuner).

The fine-tuned LLaVA model can be found at [xtuner/llava-v1.5-7b-xtuner](https://huggingface.co/xtuner/llava-v1.5-7b-xtuner).
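
To make "LLaVA projector" concrete, the sketch below shows the role this checkpoint plays in the architecture: a small MLP that maps CLIP patch features into the LLM's embedding space. This is an illustrative sketch, not the exact format XTuner saves the checkpoint in; the two-layer GELU MLP, the penultimate-layer visual features, and the 1024-d to 4096-d mapping follow the published LLaVA-v1.5 design for CLIP-ViT-Large-patch14-336 and a 7B Vicuna.

```python
import torch
import torch.nn as nn
from transformers import CLIPVisionModel

# The vision tower this projector was pretrained against.
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")

# LLaVA-v1.5-style projector: two linear layers with a GELU in between,
# mapping 1024-d CLIP features to the 4096-d hidden size of Vicuna-7B.
# (Illustrative; the real weights live in this repo's checkpoint.)
projector = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 4096),
)

pixel_values = torch.randn(1, 3, 336, 336)  # dummy 336x336 image batch
with torch.no_grad():
    outputs = vision_tower(pixel_values, output_hidden_states=True)
    # LLaVA takes the penultimate layer and drops the CLS token,
    # leaving 576 patch embeddings (a 24x24 grid).
    patches = outputs.hidden_states[-2][:, 1:, :]
    image_embeds = projector(patches)  # shape: (1, 576, 4096)

# During generation, image_embeds is spliced into the LLM's input
# embeddings in place of the <image> placeholder tokens.
print(image_embeds.shape)
```

Following the LLaVA recipe, only the projector is updated during this pretraining stage, with the vision encoder and LLM frozen; the fine-tuning stage linked above then trains the language model as well.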

## Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```