radna committed on
Commit
9e3e14b
1 Parent(s): ccfb1ac

Update README.md

Files changed (1)
  1. README.md +241 -241
README.md CHANGED
---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: visual-question-answering
---

# Model Card for Mini-InternVL-Chat-2B-V1-5

<center>
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/pvfKc16O-ej91632FHaIK.png" style="width:80%;" alt="image/png"></p>
</center>

[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)

[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)

You can now run multimodal large language models on a 1080Ti.

We are delighted to introduce the Mini-InternVL-Chat series. In the era of large language models, many researchers have started to focus on smaller language models, such as Gemma-2B, Qwen-1.8B, and InternLM2-1.8B. Inspired by their efforts, we have distilled our vision foundation model [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) down to 300M parameters and used [InternLM2-Chat-1.8B](https://huggingface.co/internlm/internlm2-chat-1_8b) or [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) as the language model. The result is a small multimodal model with excellent performance.

As shown in the figure below, we adopted the same model architecture as InternVL 1.5, simply replacing the original InternViT-6B with InternViT-300M and InternLM2-Chat-20B with InternLM2-Chat-1.8B / Phi-3-mini-128k-instruct. We trained this smaller model on the same data as InternVL 1.5, and, thanks to the lower training cost of smaller models, used a context length of 8K during training.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/rDyoe66Sqev44T0wsP5Z7.png)

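If you want to confirm these components from the released checkpoint itself, the composite configuration can be inspected directly. This is a minimal sketch; it assumes the checkpoint exposes its vision and LLM sub-configs under `vision_config` and `llm_config`, which may differ across revisions:

```python
from transformers import AutoConfig

# Assumption: the attribute names below match this checkpoint's remote config class;
# adjust them if the revision you use exposes different fields.
path = "radna/mini_intern_chat_triton_2b"
config = AutoConfig.from_pretrained(path, trust_remote_code=True)

print(type(config).__name__)                   # composite chat config class
print(getattr(config, "vision_config", None))  # InternViT-300M settings, if exposed
print(getattr(config, "llm_config", None))     # InternLM2-Chat-1.8B settings, if exposed
```
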
## Model Details

- **Model Type:** multimodal large language model (MLLM)

- **Model Stats:**

  - Architecture: [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) + MLP + [InternLM2-Chat-1.8B](https://huggingface.co/internlm/internlm2-chat-1_8b)
  - Image size: dynamic resolution, up to 40 tiles of 448 × 448 (4K resolution); see the illustrative sketch after this list.
  - Params: 2.2B

- **Training Strategy:**

  - Learnable component in the pretraining stage: ViT + MLP
  - Learnable component in the finetuning stage: ViT + MLP + LLM
  - For more details on training hyperparameters, see our code: [pretrain](<>) | [finetune](<>)

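As a rough, illustrative calculation of that tile budget (not part of the official code; the actual grid is chosen by `find_closest_aspect_ratio` in the usage example below): with 448 × 448 tiles and at most 40 tiles per image, an 8 × 5 grid already covers a canvas on the order of a 4K frame.

```python
# Illustrative tile-budget arithmetic only; the real grid selection happens in
# find_closest_aspect_ratio() / dynamic_preprocess() shown in the usage example below.
TILE = 448        # tile side length in pixels
MAX_TILES = 40    # maximum tiles per image at this setting

grid_cols, grid_rows = 8, 5                    # one grid that fits the budget
assert grid_cols * grid_rows <= MAX_TILES
canvas = (grid_cols * TILE, grid_rows * TILE)  # pixels covered by the tiled grid
print(canvas)                                  # (3584, 2240) -- roughly a 4K-sized frame
```
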
## Released Models

| Model | Vision Foundation Model | Release Date | Note |
| :---: | :---: | :---: | :--- |
| InternVL-Chat-V1-5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) | 2024.04.18 | supports 4K images; very strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks such as MMMU, DocVQA, ChartQA, and MathVista (🔥 new) |
| InternVL-Chat-V1-2-Plus (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.21 | trained on more SFT data; stronger |
| InternVL-Chat-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.11 | scales the LLM up to 34B |
| InternVL-Chat-V1-1 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) | InternViT-6B-448px-V1-0 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) | 2024.01.24 | supports Chinese; stronger OCR |

## Performance

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/BbsilHS8PjwZwlc330_g4.png)

## Model Usage

We provide example code below for running Mini-InternVL-Chat-2B-V1-5 with `transformers`.

You can also use our [online demo](https://internvl.opengvlab.com/) to try the model quickly.

> Please use `transformers==4.37.2` to ensure the model works as expected.

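If you want to verify the pinned version at runtime, a quick check before loading the model might look like this (an optional convenience, not part of the original example):

```python
# Optional: fail fast if the installed transformers version differs from the pinned one.
import transformers

assert transformers.__version__ == "4.37.2", (
    f"Expected transformers==4.37.2, found {transformers.__version__}"
)
```
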
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torchvision.transforms as T
from PIL import Image

from torchvision.transforms.functional import InterpolationMode


IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


# Split the image into 448 x 448 tiles arranged in the grid that best matches its aspect ratio.
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # enumerate candidate tile grids (columns, rows) whose tile count stays within [min_num, max_num]
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


# Load an image file and return a stacked tensor of preprocessed tiles.
def load_image(image_file, input_size=448, max_num=6):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


path = "radna/mini_intern_chat_triton_2b"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()

generation_config = dict(
    num_beams=1,
    max_new_tokens=512,
    do_sample=False,
)

# single-round single-image conversation
question = "请详细描述图片"  # Please describe the picture in detail
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)

# multi-round single-image conversation
question = "请详细描述图片"  # Please describe the picture in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)

question = "请根据图片写一首诗"  # Please write a poem according to the picture
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)

# multi-round multi-image conversation
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

question = "详细描述这两张图片"  # Describe the two pictures in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)

question = "这两张图片的相同点和区别分别是什么"  # What are the similarities and differences between these two pictures
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)

# batch inference (single image per sample)
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
image_counts = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

questions = ["Describe the image in detail."] * len(image_counts)
responses = model.batch_chat(tokenizer, pixel_values,
                             image_counts=image_counts,
                             questions=questions,
                             generation_config=generation_config)
for question, response in zip(questions, responses):
    print(question)
    print(response)
```

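The example above loads the weights in `bfloat16`. On older GPUs without native bfloat16 support (such as the 1080Ti mentioned earlier), loading in `float16` is a plausible alternative. The following is a hedged sketch under that assumption, reusing `load_image` from the block above; it is not an officially validated configuration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "radna/mini_intern_chat_triton_2b"
# Assumption: float16 as a fallback dtype on GPUs lacking bfloat16 support (e.g. Pascal cards).
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# Keep the pixel values in the same dtype as the model weights.
pixel_values = load_image('./examples/image1.jpg', max_num=6).to(torch.float16).cuda()
question = "请详细描述图片"  # Please describe the picture in detail
response = model.chat(tokenizer, pixel_values, question, dict(max_new_tokens=512, do_sample=False))
print(response)
```
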
## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```

## License

This project is released under the MIT license.

## Acknowledgement

InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!