---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: visual-question-answering
---

# Model Card for InternVL-Chat-V1.5
<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/D60YzQBIzvoCvLRp2gZ0A.jpeg" alt="Two interns holding hands" width="300" height="300" />
</p>

> _Two interns holding hands, symbolizing the integration of InternViT and InternLM._

\[[InternVL 1.5 Technical Report](https://arxiv.org/abs/2404.16821)\] \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[Explanation in Chinese](https://zhuanlan.zhihu.com/p/675877376)\]

We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) that bridges the capability gap between open-source and proprietary commercial models in multimodal understanding.
It builds on three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model---InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred to and reused across different LLMs.
2. Dynamic High-Resolution: we divide images into 1 to 40 tiles of 448 &times; 448 pixels according to the aspect ratio and resolution of the input, supporting inputs up to 4K resolution (see the sketch after this list).
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, annotated with English and Chinese question-answer pairs, which significantly enhances performance on OCR- and Chinese-related tasks.
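
To make the tiling arithmetic concrete, here is a minimal sketch of how the tile grid is chosen. It is a simplified version of the `dynamic_preprocess` helper shown in full under Model Usage; the full helper also breaks ties by covered area and can append a global thumbnail tile.

```python
# Simplified sketch of the dynamic high-resolution tile selection:
# pick the (cols, rows) grid, within the tile budget, whose aspect
# ratio is closest to the input image's.
def pick_tile_grid(width, height, max_tiles=40):
    aspect = width / height
    grids = [(i, j) for i in range(1, max_tiles + 1)
             for j in range(1, max_tiles + 1) if i * j <= max_tiles]
    return min(grids, key=lambda g: abs(aspect - g[0] / g[1]))

print(pick_tile_grid(3840, 2160))               # (7, 4): 28 tiles of 448x448
print(pick_tile_grid(1920, 1080, max_tiles=6))  # (2, 1): 2 tiles, plus an optional thumbnail
```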

## Model Details
- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
  - Architecture: [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) + MLP + [InternLM2-Chat-20B](https://huggingface.co/internlm/internlm2-chat-20b)
  - Image size: dynamic resolution, up to 40 tiles of 448 x 448 (4K resolution)
  - Params: 25.5B

- **Training Strategy** (a minimal sketch of the stage-wise freezing follows this list):
  - Pretraining stage
    - Learnable components: ViT + MLP
    - Data: please see our technical report.
  - SFT stage
    - Learnable components: ViT + MLP + LLM
    - Data: please see our technical report.
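
As a rough illustration of what "learnable components" means in each stage, here is a hedged PyTorch sketch of the stage-wise freezing. The submodule names `vision_model` (ViT), `mlp1` (projector), and `language_model` (LLM) are assumptions based on the released modeling code, not a guaranteed API.

```python
# Hedged sketch: stage-wise freezing of InternVL-Chat-V1.5 submodules.
# `vision_model`, `mlp1`, and `language_model` are assumed attribute
# names; check the released modeling code before relying on them.
def set_trainable(model, stage):
    for p in model.parameters():                 # freeze everything first
        p.requires_grad = False
    modules = [model.vision_model, model.mlp1]   # pretraining: ViT + MLP
    if stage == "sft":
        modules.append(model.language_model)     # SFT: ViT + MLP + LLM
    for m in modules:
        for p in m.parameters():
            p.requires_grad = True
```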

## Released Models

| Model | Vision Foundation Model | Release Date | Note |
| :---: | :---: | :---: | :--- |
| InternVL-Chat-V1.5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) | 2024.04.18 | supports 4K images; very strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks such as MMMU, DocVQA, ChartQA, and MathVista (🔥 new) |
| InternVL-Chat-V1.2-Plus (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.21 | more SFT data, stronger performance |
| InternVL-Chat-V1.2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.11 | scales the LLM up to 34B |
| InternVL-Chat-V1.1 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) | InternViT-6B-448px-V1-0 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) | 2024.01.24 | supports Chinese; stronger OCR |

## Performance

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/4b85G7txoJ_LpT19SZJ4A.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/i2vp6zSHPS3UIr-1Q9cSe.png)

## Examples

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/R34jISP4K1U17m9yNP38O.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/ChkU9XtlsjH0l2EqlO_is.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/1TFxIcf96ANRPLoy4-rbh.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/Wpjo1Sdwf7XcEDevqwcr-.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/kO4-J38sN8TFtmQ5mIBMS.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/qPnTe3Q9UBy8wbclOsmWk.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/l_BILRi13CbZNzbZYn6o6.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/2782y7RnvGBogYEIG__7S.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/RyO35PTH14OFiwyxtAZM2.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/xiLZXWL-JiCTVPnV_VxS2.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/gqX46Tt5jvrcVqb0vcf06.png)

## Model Usage

We provide example code below for running InternVL-Chat-V1.5 with `transformers`.

You can also try our [online demo](https://internvl.opengvlab.com/) for a quick hands-on experience with this model.
```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    # resize to a square tile and normalize with ImageNet statistics
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # pick the candidate tile grid whose aspect ratio is closest to the
    # image's; break ties by preferring the grid that better covers the image
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # enumerate candidate tile grids (cols, rows) within the tile budget
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the grid whose aspect ratio is closest to the image's
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image, then split it into image_size x image_size tiles
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        # append a global thumbnail so the model also sees the whole image
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


def load_image(image_file, input_size=448, max_num=6):
    # open an image, tile it dynamically, and stack the tiles into a tensor
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


path = "OpenGVLab/InternVL-Chat-V1-5"
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
# Otherwise, set device_map='auto' to shard the model across multiple GPUs for inference.
# import os
# os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# model = AutoModel.from_pretrained(
#     path,
#     torch_dtype=torch.bfloat16,
#     low_cpu_mem_usage=True,
#     trust_remote_code=True,
#     device_map='auto').eval()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# set the maximum number of tiles via `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()

generation_config = dict(
    num_beams=1,
    max_new_tokens=512,
    do_sample=False,
)

# single-round single-image conversation
question = "请详细描述图片"  # Please describe the picture in detail
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)

# multi-round single-image conversation
question = "请详细描述图片"  # Please describe the picture in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)

question = "请根据图片写一首诗"  # Please write a poem based on the picture
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)

# multi-round multi-image conversation
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

question = "详细描述这两张图片"  # Describe the two pictures in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)

question = "这两张图片的相同点和区别分别是什么"  # What are the similarities and differences between these two pictures?
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)

# batch inference (single image per sample)
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
image_counts = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

questions = ["Describe the image in detail."] * len(image_counts)
responses = model.batch_chat(tokenizer, pixel_values,
                             image_counts=image_counts,
                             questions=questions,
                             generation_config=generation_config)
for question, response in zip(questions, responses):
    print(question)
    print(response)
```
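
If a single GPU cannot hold the bf16 weights, 8-bit quantized loading is one option. This is a hedged sketch, assuming the `bitsandbytes` package is installed; it is not an officially benchmarked configuration, and quantization may slightly alter outputs relative to bf16 inference:

```python
# Sketch: 8-bit loading for memory-constrained GPUs (requires bitsandbytes).
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL-Chat-V1-5"
model = AutoModel.from_pretrained(
    path,
    load_in_8bit=True,
    device_map='auto',
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```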

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
```

## License

This project is released under the MIT license.

## Acknowledgement

InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!