czczup committed on
Commit 3c3b3f2 • 1 Parent(s): 5bedc7c

Upload folder using huggingface_hub

README.md CHANGED
@@ -3,11 +3,19 @@ license: mit
pipeline_tag: image-text-to-text
---

<div align="center">
  <img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>

- # INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. With its high-performance CUDA kernels, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.

@@ -30,18 +38,16 @@ This article comprises the following sections:
<!-- toc -->

- [Inference](#inference)
- - [Evaluation](#evaluation)
- [Service](#service)

<!-- tocstop -->

- ## Inference

With the following code, you can perform batched offline inference with the quantized model:

```python
- from lmdeploy import pipeline
- from lmdeploy.messages import TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL-Chat-V1-5-AWQ'
@@ -49,27 +55,72 @@ image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/t
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config, log_level='INFO')
response = pipe(('describe this image', image))
- print(response)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).

- ## Evaluation
-
- Please refer to [this guide](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_turbomind.html) for model evaluation with LMDeploy.
-
- ## Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:

```shell
- lmdeploy serve api_server OpenGVLab/InternVL-Chat-V1-5-AWQ --backend turbomind --model-format awq
```

- The default port of `api_server` is `23333`. After the server is launched, you can communicate with the server from a terminal through `api_client`:

```shell
- lmdeploy serve api_client http://0.0.0.0:23333
```

- You can browse and try out the `api_server` APIs online via the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
 
pipeline_tag: image-text-to-text
---

+ # InternVL-Chat-V1-5-AWQ
+
+ [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
+
+ [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Explanation in Chinese\]](https://zhuanlan.zhihu.com/p/706547971) \[🌟 [ModelScope Community](https://modelscope.cn/organization/OpenGVLab) | [Tutorial](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg) \]
+
+ ## Introduction
+
<div align="center">
  <img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>

+ ### INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. With its high-performance CUDA kernels, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.
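
Before deployment, the FP16 checkpoint is typically quantized to 4-bit once with LMDeploy's AWQ tooling. The snippet below is a minimal sketch of that step, assuming the `lmdeploy lite auto_awq` CLI and the FP16 `OpenGVLab/InternVL-Chat-V1-5` checkpoint as the source; the exact flag names may differ between LMDeploy versions.

```python
# Minimal sketch: produce a W4A16 (AWQ) checkpoint by calling the LMDeploy CLI.
# Assumptions: lmdeploy is installed and the FP16 release is used as the source.
import subprocess

src_model = 'OpenGVLab/InternVL-Chat-V1-5'   # FP16 source checkpoint (assumed)
work_dir = './InternVL-Chat-V1-5-AWQ'        # output directory for the 4-bit weights

subprocess.run(
    ['lmdeploy', 'lite', 'auto_awq', src_model,
     '--w-bits', '4',            # 4-bit weight-only quantization
     '--w-group-size', '128',    # per-group quantization granularity
     '--work-dir', work_dir],
    check=True,
)
```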

<!-- toc -->

- [Inference](#inference)
- [Service](#service)

<!-- tocstop -->

+ ### Inference

With the following code, you can perform batched offline inference with the quantized model:

```python
+ from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL-Chat-V1-5-AWQ'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')  # sample image; any image URL or local path works
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config, log_level='INFO')
response = pipe(('describe this image', image))
+ print(response.text)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
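
Because the pipeline also accepts a list of prompts, a true batch only requires passing several (prompt, image) pairs at once. Below is a minimal sketch, assuming the same quantized model and the tiger sample image used elsewhere in this card, and assuming the pipeline returns one response per pair.

```python
# Minimal sketch of batched offline inference with the 4-bit model.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/InternVL-Chat-V1-5-AWQ',
                backend_config=TurbomindEngineConfig(model_format='awq'))

image = load_image('https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg')
prompts = [
    ('describe this image', image),
    ('what animal is in the picture and what is it doing?', image),
]

responses = pipe(prompts)          # one generation per (prompt, image) pair
for r in responses:
    print(r.text)
```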

+ ### Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:

```shell
+ lmdeploy serve api_server OpenGVLab/InternVL-Chat-V1-5-AWQ --model-name InternVL-Chat-V1-5-AWQ --backend turbomind --server-port 23333 --model-format awq
```
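
The command pins the server to port 23333 via `--server-port` and registers the model under the name passed with `--model-name`. A quick way to confirm the server is up, assuming it is reachable at the same address used by the client below and that the model listing endpoint is served without authentication, is to query it directly:

```python
# Minimal health check: list the models registered by api_server.
# Assumption: the server from the command above is running on 0.0.0.0:23333.
import json
import urllib.request

with urllib.request.urlopen('http://0.0.0.0:23333/v1/models') as resp:
    models = json.loads(resp.read())['data']
print([m['id'] for m in models])   # expected to include 'InternVL-Chat-V1-5-AWQ'
```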

+ To use the OpenAI-style interface, you need to install the OpenAI Python package:

```shell
+ pip install openai
```

+ Then, use the code below to make the API call:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
+ model_name = client.models.list().data[0].id
+ response = client.chat.completions.create(
+     model=model_name,
+     messages=[{
+         'role': 'user',
+         'content': [{
+             'type': 'text',
+             'text': 'describe this image',
+         }, {
+             'type': 'image_url',
+             'image_url': {
+                 'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
+             },
+         }],
+     }],
+     temperature=0.8,
+     top_p=0.8)
+ print(response)
+ ```
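
`print(response)` dumps the entire `ChatCompletion` object. To print only the generated answer, the standard OpenAI accessor can be used, assuming the `response` object from the request above:

```python
# Print only the generated text from the chat completion above.
print(response.choices[0].message.content)
```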
+
+ ## License
+
+ This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.
+
+ ## Citation
+
+ If you find this project useful in your research, please consider citing:
+
+ ```BibTeX
+ @article{chen2023internvl,
+   title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
+   author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
+   journal={arXiv preprint arXiv:2312.14238},
+   year={2023}
+ }
+ @article{chen2024far,
+   title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
+   author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
+   journal={arXiv preprint arXiv:2404.16821},
+   year={2024}
+ }
+ ```
config.json CHANGED
@@ -98,7 +98,7 @@
"tie_word_embeddings": false,
"tokenizer_class": null,
"top_k": 50,
- "top_p": null,
"torch_dtype": "bfloat16",
"torchscript": false,
"transformers_version": "4.40.1",
 
"tie_word_embeddings": false,
"tokenizer_class": null,
"top_k": 50,
+ "top_p": 1.0,
"torch_dtype": "bfloat16",
"torchscript": false,
"transformers_version": "4.40.1",
configuration_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
# --------------------------------------------------------
# InternVL
- # Copyright (c) 2023 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------
import os
 
# --------------------------------------------------------
# InternVL
+ # Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------
import os
configuration_internvl_chat.py CHANGED
@@ -1,6 +1,6 @@
# --------------------------------------------------------
# InternVL
- # Copyright (c) 2023 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

 
# --------------------------------------------------------
# InternVL
+ # Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

modeling_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
# --------------------------------------------------------
# InternVL
- # Copyright (c) 2023 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------
from typing import Optional, Tuple, Union
 
# --------------------------------------------------------
# InternVL
+ # Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------
from typing import Optional, Tuple, Union