---
license: mit
pipeline_tag: image-text-to-text
---

# InternVL2-2B-AWQ

[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL)  [\[🆕 Blog\]](https://internvl.github.io/blog/)  [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238)  [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)

[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)  [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL)  [\[🚀 Quick Start\]](#quick-start)  [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971)  \[🌟 [魔搭社区](https://modelscope.cn/organization/OpenGVLab) | [教程](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg)\]

## Introduction
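InternVL2-2B-AWQ is a 4-bit weight-only (W4A16) AWQ-quantized release of InternVL2-2B, intended for inference and serving with [LMDeploy](https://github.com/InternLM/lmdeploy). For context, an AWQ checkpoint of this kind can be produced from the original FP16 weights with LMDeploy's `lmdeploy lite auto_awq` tool; the command below is a minimal, illustrative sketch (the output directory is arbitrary, and this is not necessarily the exact recipe used to build this repository):

```shell
# Illustrative sketch: quantize the FP16 InternVL2-2B weights to W4A16 with AWQ.
# The work-dir path is arbitrary; calibration options are left at their defaults.
lmdeploy lite auto_awq OpenGVLab/InternVL2-2B --work-dir ./InternVL2-2B-AWQ
```

The sections below cover how to run and serve the prebuilt quantized model.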
### INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Backed by a high-performance CUDA kernel, inference with the 4-bit quantized model is up to 2.4x faster than FP16.

LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

- Turing (sm75): 20 series, T4
- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace (sm89): 40 series

Before proceeding with quantization and inference, please ensure that lmdeploy is installed:

```shell
pip install lmdeploy[all]
```

This article comprises the following sections:

- [Inference](#inference)
- [Service](#service)

### Inference

With the following code, you can perform batched offline inference with the quantized model:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

# Use the InternVL2 chat template and set its system prompt.
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt

# Tell the TurboMind backend that the weights are AWQ-quantized.
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, chat_template_config=chat_template_config, backend_config=backend_config)

response = pipe(('describe this image', image))
print(response.text)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).

### Service

To deploy InternVL2 as an API, configure the chat template first by creating the following JSON file, `chat_template.json`:

```json
{
    "model_name": "internvl-internlm2",
    "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。",
    "stop_words": ["<|im_start|>", "<|im_end|>"]
}
```

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

```shell
lmdeploy serve api_server OpenGVLab/InternVL2-2B-AWQ --model-name InternVL2-2B-AWQ --backend turbomind --server-port 23333 --model-format awq --chat-template chat_template.json
```

To use the OpenAI-style interface, you need to install the OpenAI client:

```shell
pip install openai
```

Then, use the code below to make the API call:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')

# Query the server for the name of the served model and use it in the request.
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            'text': 'describe this image',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
print(response)
```

If you prefer to check the server without the Python client, a `curl`-based sketch of the same endpoints is included after the License section below.

## License

This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.
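As a quick, client-free way to confirm that the `api_server` launched in the Service section is reachable, the same OpenAI-compatible routes used by the Python example can be exercised with `curl`. This is a minimal sketch: the host, port, served model name, and image URL simply mirror the example configuration above and should be adjusted to your deployment.

```shell
# Assumes the api_server from the Service section is running on 0.0.0.0:23333.
# List the served models (should include InternVL2-2B-AWQ).
curl http://0.0.0.0:23333/v1/models

# Send a minimal OpenAI-style chat completion request with an image URL.
curl http://0.0.0.0:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "InternVL2-2B-AWQ",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "text", "text": "describe this image"},
            {"type": "image_url", "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg"}}
          ]
        }]
      }'
```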
## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```