---
license: apache-2.0
pipeline_tag: text-generation
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
# 😈 Imp
\[[Paper](https://arxiv.org/abs/2405.12107)\] \[[Demo](https://xmbot.net/imp/)\] \[[GitHub](https://github.com/MILVLG/imp)\]
## Introduction
The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-3B-Phi2` is a strong lightweight LMM with only **3B** parameters, built upon [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B) and the powerful [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) visual encoder (0.4B), and trained on a 1M-sample mixed dataset.
As shown in the table below, `Imp-v1.5-3B-Phi2` significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
We release our model weights and provide an example below to run the model. A detailed technical report and the corresponding training/evaluation code will be released soon on our [GitHub repo](https://github.com/MILVLG/imp). We will continually improve the model and release next versions to further boost its performance :)
## How to use
**Install dependencies**
```bash
pip install transformers  # the latest version should work, but we recommend v4.37.0
pip install -q pillow accelerate einops
```
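If you want to verify your environment before running the inference example, a minimal check might look like the sketch below (it assumes PyTorch is already installed, e.g. via `pip install torch`):

```python
# Minimal environment sanity check (assumes PyTorch is already installed).
import torch
import transformers

print("transformers:", transformers.__version__)  # v4.37.0 is the recommended version
print("CUDA available:", torch.cuda.is_available())  # the example below needs a GPU
```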
You can use the following code for model inference. The text instruction format follows [LLaVA](https://github.com/haotian-liu/LLaVA). Note that the example can currently only be run on GPUs.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/Imp-v1.5-3B-Phi2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-3B-Phi2", trust_remote_code=True)

# Set inputs: the <image> placeholder marks where the image is inserted into the prompt
text = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are the colors of the bus in the image? ASSISTANT:"
image = Image.open("images/bus.jpg")

input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)

# Generate the answer
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,
    use_cache=True)[0]
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
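If your GPU memory is limited, the model may also be loadable in 4-bit precision via `bitsandbytes` (install with `pip install bitsandbytes`). The sketch below uses the standard `BitsAndBytesConfig` API of `transformers`; whether the model's custom remote code is fully compatible with quantized loading is an assumption we have not verified here:

```python
# A minimal sketch of 4-bit loading for memory-constrained GPUs.
# NOTE: compatibility with this model's trust_remote_code implementation is assumed, not verified.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # store weights in 4-bit NF4 format
    bnb_4bit_compute_dtype=torch.float16,     # compute in fp16 for speed
)
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/Imp-v1.5-3B-Phi2",
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True)
```

The rest of the inference code is unchanged.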
## Model evaluation
We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
| Models | Size | VQAv2 | GQA | SQA (IMG) | TextVQA | POPE | MME (P) | MMB | MMB-CN | MM-Vet |
|:--------:|:-----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|:-------:|
| [LLaVA-v1.5-lora](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 7B |79.1 | 63.0| 68.4 |58.2| 86.4 | 1476.9 | 66.1 |- |30.2|
| [TinyGPT-V-3B](https://huggingface.co/Tyrannosaurus/TinyGPT-V) | 3B | - | 38.9 | - | - | -| - | - |- |-|
| [LLaVA-Phi-3B](https://github.com/zhuyiche/llava-phi) | 3B | 71.4 | - | 68.4 | 48.6 | 85.0 | 1335.1 | 59.8 |-|28.9|
| [MobileVLM-3B](https://huggingface.co/mtgv/MobileVLM-3B) | 3B | - | 59.0 | 61.0 | 47.5 | 84.9 | 1288.9 | 59.6 |- |-|
| [MiniCPM-V-3B](https://huggingface.co/openbmb/MiniCPM-V) | 3B | - |- | - | - | - | 1452.0 | 67.9 | **65.3**|-|
| [Bunny-3B](https://github.com/BAAI-DCAI/Bunny) | 3B | 79.8 | 62.5 | 70.9 | - | 86.8| **1488.8** | 68.6 |- |-|
| **Imp-v1.5-3B-Phi2** | 3B | **81.2** | **63.5** | **72.8**| **59.8** | **88.9**| 1446.4 | **72.9**| 46.7 |**43.3**|
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
## Citation
If you use our model or refer to our work in your studies, please cite:
```bibtex
@article{imp2024,
title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
journal={arXiv preprint arXiv:2405.12107},
year={2024}
}
``` |