|
--- |
|
license: llama3 |
|
pipeline_tag: image-text-to-text |
|
library_name: transformers |
|
language: |
|
- multilingual |
|
tags: |
|
- internvl |
|
- vision |
|
- ocr |
|
- multi-image |
|
- video |
|
- custom_code |
|
base_model: OpenGVLab/InternVL2-Llama3-76B |
|
base_model_relation: quantized |
|
new_version: OpenGVLab/InternVL2_5-78B-AWQ |
|
--- |
|
|
|
# InternVL2-Llama3-76B-AWQ |
|
|
|
[\[GitHub\]](https://github.com/OpenGVLab/InternVL) [\[Blog\]](https://internvl.github.io/blog/) [\[InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[Mini-InternVL\]](https://arxiv.org/abs/2410.16261)
|
|
|
[\[Chat Demo\]](https://internvl.opengvlab.com/) [\[HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[Quick Start\]](#quick-start) [\[Chinese Interpretation\]](https://zhuanlan.zhihu.com/p/706547971) [\[Documents\]](https://internvl.readthedocs.io/en/latest/)
|
|
|
## Introduction |
|
|
|
<div align="center"> |
|
<img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/> |
|
</div> |
|
|
|
### INT4 Weight-only Quantization and Deployment (W4A16) |
|
|
|
LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Thanks to its high-performance CUDA kernels, inference with the 4-bit quantized model is up to 2.4x faster than FP16.
|
|
|
LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

- Turing (sm75): 20 series, T4

- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100

- Ada Lovelace (sm89): 40 series
|
|
|
Before proceeding with quantization and inference, please make sure that `lmdeploy` is installed:
|
|
|
```shell |
|
pip install "lmdeploy>=0.5.3"
|
``` |
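The weights in this repository are already quantized, so you can skip straight to [Inference](#inference). If you want to reproduce 4-bit weights from the original FP16 checkpoint (`OpenGVLab/InternVL2-Llama3-76B`), a minimal sketch using LMDeploy's `lite auto_awq` command is shown below; the calibration flags and their values are illustrative assumptions, not the exact recipe used to produce this repository's weights, and may vary across lmdeploy versions.

```shell
# Optional sketch: quantize the FP16 model to 4-bit AWQ weights yourself.
# The bit width and group size below are common AWQ defaults, given here
# for illustration only.
lmdeploy lite auto_awq \
    OpenGVLab/InternVL2-Llama3-76B \
    --w-bits 4 \
    --w-group-size 128 \
    --work-dir ./InternVL2-Llama3-76B-AWQ
```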
|
|
|
This article comprises the following sections: |
|
|
|
<!-- toc --> |
|
|
|
- [Inference](#inference) |
|
- [Service](#service) |
|
|
|
<!-- tocstop --> |
|
|
|
### Inference |
|
|
|
With the following code, you can perform batched offline inference with the quantized model:
|
|
|
```python |
|
from lmdeploy import pipeline, TurbomindEngineConfig |
|
from lmdeploy.vl import load_image |
|
|
|
model = 'OpenGVLab/InternVL2-Llama3-76B-AWQ' |
|
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg') |
|
backend_config = TurbomindEngineConfig(model_format='awq') |
|
pipe = pipeline(model, backend_config=backend_config, log_level='INFO') |
|
response = pipe(('describe this image', image)) |
|
print(response.text) |
|
``` |
|
|
|
For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md). |
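The snippet above passes a single image-prompt pair. Since the text refers to batched offline inference, here is a hedged sketch of passing a list of pairs to the same pipeline, which should return one response per pair; the second prompt and the reuse of the same sample tiger image are just illustrative.

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-Llama3-76B-AWQ'
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config)

# Build a batch of (prompt, image) pairs; the pipeline processes them together.
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
prompts = [('describe this image', image),
           ('what animal is shown in this image?', image)]
responses = pipe(prompts)
for response in responses:
    print(response.text)
```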
|
|
|
### Service |
|
|
|
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:
|
|
|
```shell |
|
lmdeploy serve api_server OpenGVLab/InternVL2-Llama3-76B-AWQ --backend turbomind --server-port 23333 --model-format awq |
|
``` |
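Once the server is up, you can sanity-check it before writing any client code. Because the API is OpenAI-compatible, listing the served models should return this model's name; the host and port below assume the default host and the `--server-port 23333` setting from the command above.

```shell
# List the models served by api_server; the response should include
# OpenGVLab/InternVL2-Llama3-76B-AWQ.
curl http://0.0.0.0:23333/v1/models
```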
|
|
|
To use the OpenAI-style interface, you need to install the OpenAI Python package:
|
|
|
```shell |
|
pip install openai |
|
``` |
|
|
|
Then, use the code below to make the API call: |
|
|
|
```python |
|
from openai import OpenAI |
|
|
|
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1') |
|
model_name = client.models.list().data[0].id |
|
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            'text': 'describe this image',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
|
print(response) |
|
``` |
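If you prefer to receive tokens as they are generated, the same endpoint should also accept the standard OpenAI streaming flag. A minimal sketch, assuming the server from the previous section is still running, is shown below.

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id

# Request a streamed response and print text deltas as they arrive.
stream = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'describe a tiger in one sentence'}],
    temperature=0.8,
    top_p=0.8,
    stream=True)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='', flush=True)
print()
```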
|
|
|
## License |
|
|
|
This project is released under the MIT License, while Llama 3 is licensed under the Llama 3 Community License.
|
|
|
## Citation |
|
|
|
If you find this project useful in your research, please consider citing: |
|
|
|
```BibTeX |
|
@article{gao2024mini, |
|
  title={Mini-InternVL: A Flexible-Transfer Pocket Multimodal Model with 5\% Parameters and 90\% Performance},
|
author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others}, |
|
journal={arXiv preprint arXiv:2410.16261}, |
|
year={2024} |
|
} |
|
@article{chen2023internvl, |
|
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks}, |
|
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng}, |
|
journal={arXiv preprint arXiv:2312.14238}, |
|
year={2023} |
|
} |
|
@article{chen2024far, |
|
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites}, |
|
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others}, |
|
journal={arXiv preprint arXiv:2404.16821}, |
|
year={2024} |
|
} |
|
``` |
|
|