---
license: mit
pipeline_tag: image-text-to-text
---

InternVL2-2B-AWQ

INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the AWQ algorithm for 4-bit weight-only quantization. Thanks to its high-performance CUDA kernels, inference with the 4-bit quantized model is up to 2.4x faster than FP16.

LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

  • Turing (sm75): 20 series, T4

  • Ampere (sm80, sm86): 30 series, A10, A16, A30, A100

  • Ada Lovelace (sm89): 40 series

Before proceeding with the quantization and inference, please ensure that lmdeploy is installed.

pip install lmdeploy[all]
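
The chat-template step below targets lmdeploy v0.5.0, so it is worth confirming which version is installed. A minimal check, assuming only that lmdeploy exposes a __version__ attribute (as recent releases do):

import lmdeploy

# print the installed lmdeploy version; the chat template setup below assumes v0.5.0
print(lmdeploy.__version__)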

This article comprises the following sections:

  • Inference

  • Evaluation

  • Service

Inference

For lmdeploy v0.5.0, please configure the chat template first by creating the following JSON file, chat_template.json. The meta_instruction field is the model's system prompt (in Chinese); it identifies the assistant as InternVL, a helpful and harmless multimodal large model developed jointly by Shanghai AI Laboratory and SenseTime.

{
    "model_name":"internlm2",
    "meta_instruction":"你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。",
    "stop_words":["<|im_start|>", "<|im_end|>"]
}
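
If you prefer not to create the file by hand, the same chat_template.json can be written from Python. A minimal sketch; the file name and contents simply mirror the JSON above:

import json

# the meta_instruction below is the Chinese system prompt from the JSON above:
# "You are InternVL, a multimodal large model developed jointly by Shanghai AI
#  Laboratory and SenseTime; you are a helpful and harmless AI assistant."
chat_template = {
    "model_name": "internlm2",
    "meta_instruction": "你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。",
    "stop_words": ["<|im_start|>", "<|im_end|>"]
}

# write the template next to your script so the code below can find it
with open('chat_template.json', 'w', encoding='utf-8') as f:
    json.dump(chat_template, f, ensure_ascii=False, indent=4)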

With the following code, you can perform batched offline inference with the quantized model:

from lmdeploy import pipeline
from lmdeploy.model import ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
# load the chat template defined in chat_template.json above
chat_template_config = ChatTemplateConfig.from_json('chat_template.json')
# fetch a test image
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
# build the inference pipeline with the AWQ-quantized model
pipe = pipeline(model, chat_template_config=chat_template_config, log_level='INFO')
response = pipe(('describe this image', image))
print(response)
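
The pipeline also accepts a list of (prompt, image) pairs, which is what makes the offline inference batched. A sketch of that usage, assuming the pipe object created above; the image URL from the example is reused here purely for illustration:

from lmdeploy.vl import load_image

# a batch is a list of (prompt, image) tuples; the pipeline returns one response per item
image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg',
]
prompts = [('describe this image', load_image(url)) for url in image_urls]
responses = pipe(prompts)
for r in responses:
    print(r.text)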

For more information about the pipeline parameters, please refer to the LMDeploy documentation.

Evaluation

Please refer to this guide for model evaluation with LMDeploy.

Service

LMDeploy's api_server enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

lmdeploy serve api_server OpenGVLab/InternVL2-2B-AWQ --backend turbomind --model-format awq --chat-template chat_template.json

The default port of api_server is 23333. After the server is launched, you can communicate with the server in the terminal through api_client:

lmdeploy serve api_client http://0.0.0.0:23333

You can overview and try out the api_server APIs online through the Swagger UI at http://0.0.0.0:23333, or read the API specification in the LMDeploy documentation.
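
Because the RESTful APIs are OpenAI-compatible, you can also query the running server from Python with the openai client. A minimal sketch, assuming the openai package is installed and the server started above is listening on port 23333:

from openai import OpenAI

# point the OpenAI client at the local api_server; the api_key is not checked unless
# the server was started with --api-keys
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
# discover the served model name from the /v1/models endpoint
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
    top_p=0.8)
print(response.choices[0].message.content)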