---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL2-2B
new_version: OpenGVLab/InternVL2_5-2B-AWQ
base_model_relation: quantized
language:
  - multilingual
tags:
  - internvl
  - custom_code
---

# InternVL2-2B-AWQ

[\[πŸ“‚ GitHub\]](https://github.com/OpenGVLab/InternVL)  [\[πŸ“œ InternVL 1.0\]](https://huggingface.co/papers/2312.14238)  [\[πŸ“œ InternVL 1.5\]](https://huggingface.co/papers/2404.16821)  [\[πŸ“œ Mini-InternVL\]](https://arxiv.org/abs/2410.16261)  [\[πŸ“œ InternVL 2.5\]](https://huggingface.co/papers/2412.05271)

[\[πŸ†• Blog\]](https://internvl.github.io/blog/)  [\[πŸ—¨οΈ Chat Demo\]](https://internvl.opengvlab.com/)  [\[πŸ€— HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL)  [\[πŸš€ Quick Start\]](#quick-start)  [\[πŸ“– Documents\]](https://internvl.readthedocs.io/en/latest/)

## Introduction

<div align="center">
  <img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>

### INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. With its high-performance CUDA kernels, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.

LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

- Turing (sm75): 20 series, T4
- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace (sm89): 40 series
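
As a quick sanity check, you can query the GPU's compute capability, for example with PyTorch (a minimal sketch; PyTorch is only used here for the check and is not otherwise required):

```python
import torch

# W4A16 kernels require compute capability >= 7.5 (Turing or newer)
major, minor = torch.cuda.get_device_capability(0)
print(f'sm{major}{minor}:', 'supported' if (major, minor) >= (7, 5) else 'not supported')
```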

Before proceeding with the quantization and inference, please ensure that lmdeploy is installed.

```shell
pip install "lmdeploy>=0.5.3"
```
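
This repository already provides AWQ weights, but if you want to quantize an FP16 InternVL2 checkpoint yourself, LMDeploy ships the `lmdeploy lite auto_awq` tool. A minimal sketch, assuming the default calibration settings and an arbitrary output directory:

```shell
# Quantize the FP16 checkpoint to 4-bit AWQ weights (work directory name is arbitrary)
lmdeploy lite auto_awq OpenGVLab/InternVL2-2B --work-dir ./InternVL2-2B-AWQ
```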

This article comprises the following sections:

<!-- toc -->

- [Inference](#inference)
- [Service](#service)

<!-- tocstop -->

### Inference

With the following code, you can perform batched offline inference with the quantized model:

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
# model_format='awq' tells the TurboMind backend to load the 4-bit AWQ weights
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config, log_level='INFO')
response = pipe(('describe this image', image))
print(response.text)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
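
For instance, sampling behaviour can be adjusted by passing a `GenerationConfig` to the pipeline call. A minimal sketch (the parameter values are illustrative, not recommendations):

```python
from lmdeploy import GenerationConfig, TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/InternVL2-2B-AWQ',
                backend_config=TurbomindEngineConfig(model_format='awq'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

# Override the default sampling parameters for this call
gen_config = GenerationConfig(max_new_tokens=256, temperature=0.8, top_p=0.8)
response = pipe(('describe this image', image), gen_config=gen_config)
print(response.text)
```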

### Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

```shell
lmdeploy serve api_server OpenGVLab/InternVL2-2B-AWQ --backend turbomind --server-port 23333 --model-format awq
```
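
Since the endpoints are OpenAI-compatible, you can also probe the running server directly over HTTP, for example to list the served model name (a sketch; adjust host and port to your deployment):

```shell
# List the models exposed by the OpenAI-compatible API
curl http://0.0.0.0:23333/v1/models
```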

To use the OpenAI-style interface, you need to install the OpenAI Python package:

```shell
pip install openai
```

Then, use the code below to make the API call:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            'text': 'describe this image',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
print(response)
```

## License

This project is released under the MIT License. This project uses the pre-trained internlm2-chat-1_8b as a component, which is licensed under the Apache License 2.0.

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2024expanding,
  title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
  author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
  journal={arXiv preprint arXiv:2412.05271},
  year={2024}
}
@article{gao2024mini,
  title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
  author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
  journal={arXiv preprint arXiv:2410.16261},
  year={2024}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
@inproceedings{chen2024internvl,
  title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24185--24198},
  year={2024}
}
```