update readme
README.md
CHANGED
@@ -16,7 +16,7 @@ datasets:
 
 ## Introduction
 
-The Imp project aims to provide a family of strong multimodal `small` language models (MSLMs). Our `Imp-v1.5-3B-Phi2` is a strong MSLM with only **3B** parameters, which is built upon a small yet powerful SLM [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B) and a powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on
+The Imp project aims to provide a family of strong multimodal `small` language models (MSLMs). Our `Imp-v1.5-3B-Phi2` is a strong MSLM with only **3B** parameters, which is built upon a small yet powerful SLM [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B) and a powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on a 1M mixed dataset.
 
 As shown in the Table below, `Imp-v1.5-3B-Phi2` significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
 
@@ -89,10 +89,10 @@ This project is maintained by the [MILVLG](https://github.com/MILVLG)@Hangzhou Dianzi University
 If you use our model or refer to our work in your studies, please cite:
 
 ```bibtex
-@
-
-
-
-
+@article{imp2024,
+  title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
+  author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
+  journal={arXiv preprint arXiv:2405.12107},
+  year={2024}
 }
 ```
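The introduction updated above only describes the model at a high level, so a minimal inference sketch follows. It is not part of this diff: the repository id (`MILVLG/Imp-v1.5-3B-Phi2`), the LLaVA-style prompt template, and the `image_preprocess` helper plus the `images=` generation argument are assumptions modeled on MILVLG's earlier Imp releases, which ship custom remote code; verify them against the actual model card before use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Assumed repository id; check the actual name on the Hugging Face Hub.
model_id = "MILVLG/Imp-v1.5-3B-Phi2"

# Imp ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# LLaVA-style prompt with an <image> placeholder; the exact template is an assumption.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: <image>\nWhat is in this picture? ASSISTANT:"
)
image = Image.open("example.jpg")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# `image_preprocess` and the `images=` kwarg come from the model's remote code
# in earlier Imp releases; treat both as assumptions for this checkpoint.
image_tensor = model.image_preprocess(image)

output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
)[0]

# Strip the prompt tokens and print only the generated answer.
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```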