😈 Imp

[Paper]  [Demo]  [GitHub]

Introduction

The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our Imp-v1.5-2B-Qwen1.5 is a strong lightweight LMM with only 2B parameters, which is built upon Qwen1.5-1.8B-Chat (1.8B) and the powerful visual encoder SigLIP (0.4B), and trained on a 1M mixed dataset.

As shown in the table below, Imp-v1.5-2B-Qwen1.5 significantly outperforms counterparts of similar model size.

We release our model weights and provide an example below to run our model. A detailed technical report and the corresponding training/evaluation code will be released soon on our GitHub repo. We will keep improving the model and releasing new versions to further improve its performance :)

How to use

Install dependencies

pip install transformers # the latest version is OK, but we recommend v4.36.0
pip install -q pillow accelerate einops
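
If you prefer to pin the recommended transformers version explicitly, the two commands above can be combined into one (same packages, just pinned):

pip install -q transformers==4.36.0 pillow accelerate einops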

You can use the following code for model inference. The format of the text instruction is similar to that of LLaVA. Note that the example can currently only be run on GPUs.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Load the model (trust_remote_code is required for Imp's custom model code)
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/Imp-v1.5-2B-Qwen1.5", 
    torch_dtype=torch.float16, 
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-2B-Qwen1.5", trust_remote_code=True)

# Set the inputs: a LLaVA-style prompt with an <image> placeholder, plus the image itself
text = "<|im_start|>system\nA chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.<|im_end|>\n<|im_start|>user\n<image>\nWhat are the colors of the bus in the image?<|im_end|>\n<|im_start|>assistant"
image = Image.open("images/bus.jpg")

input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)

# Generate the answer and decode only the newly generated tokens
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,
    use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
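
If you plan to ask several questions, the steps above can be wrapped into a small helper so the prompt boilerplate is not repeated. This is only a convenience sketch built from the exact calls shown above (model.image_preprocess and the images= argument come from the model's custom code); the function name ask is ours:

def ask(model, tokenizer, image, question, max_new_tokens=100):
    # LLaVA-style prompt, identical to the template in the example above
    prompt = (
        "<|im_start|>system\nA chat between a curious user and an artificial "
        "intelligence assistant. The assistant gives helpful, detailed, and "
        "polite answers to the user's questions.<|im_end|>\n"
        f"<|im_start|>user\n<image>\n{question}<|im_end|>\n"
        "<|im_start|>assistant"
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    image_tensor = model.image_preprocess(image)
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        images=image_tensor,
        use_cache=True)[0]
    # Decode only the tokens generated after the prompt
    return tokenizer.decode(output_ids[input_ids.shape[1]:],
                            skip_special_tokens=True).strip()

print(ask(model, tokenizer, Image.open("images/bus.jpg"),
          "How many people are visible in the image?"))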

Model evaluation

We evaluate our model on 9 commonly used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare Imp with LLaVA (7B) and existing lightweight LMMs of similar model size.

| Models | Size | VQAv2 | GQA | SQA (IMG) | TextVQA | POPE | MME (P) | MMB | MMB-CN | MM-Vet |
|---|---|---|---|---|---|---|---|---|---|---|
| Mini-Gemini-2B | 2B | - | - | 56.2 | - | - | 1341 | 59.8 | - | 31.1 |
| Bunny-v1.0-2B-zh | 2B | 76.6 | 59.6 | 64.6 | - | 85.8 | 1300.8 | 59.1 | 58.5 | - |
| Imp-v1.5-2B-Qwen1.5 | 2B | 79.2 | 61.9 | 66.1 | 54.5 | 86.7 | 1304.8 | 63.8 | 61.3 | 33.5 |

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Citation

If you use our model or refer to our work in your studies, please cite:

@article{imp2024,
  title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
  author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
  journal={arXiv preprint arXiv:2405.12107},
  year={2024}
}