simonJJJ committed on
Commit
473db51
1 Parent(s): 835f433
Files changed (2)
  1. README.md +568 -0
  2. tokenization_qwen.py +7 -9
README.md ADDED
@@ -0,0 +1,568 @@
1
+ ---
2
+ language:
3
+ - zh
4
+ - en
5
+ tags:
6
+ - qwen
7
+ pipeline_tag: text-generation
8
+ inference: false
9
+ ---
10
+
11
+ # Qwen-VL
12
+
13
+ <br>
14
+
15
+ <p align="center">
16
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo.jpg" width="400"/>
17
+ </p>
18
+ <br>
19
+
20
+ <p align="center">
21
+ Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>&nbsp; | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>&nbsp; | &nbsp;<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>&nbsp; | &nbsp;<a href="https://github.com/QwenLM/Qwen-VL/blob/main/visual_memo.md">Report</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="https://discord.gg/9bjvspyu">Discord</a>
22
+
23
+ </p>
24
+ <br>
25
+
26
+ <p align="center">
27
+ <a href="README_CN.md">中文</a>&nbsp; | &nbsp;English
28
+ </p>
29
+ <br><br>
30
+
31
+ **Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the Qwen (abbr. Tongyi Qianwen) large model series proposed by Alibaba Cloud. Qwen-VL accepts images, text, and bounding boxes as inputs, and outputs text and bounding boxes. The features of Qwen-VL include:
32
+ - **Strong performance**: It significantly surpasses existing open-source Large Vision Language Models (LVLM) under similar scale settings on multiple English evaluation benchmarks (including Zero-shot caption, VQA, DocVQA, and Grounding).
33
+ - **Multi-lingual LVLM supporting text recognition**: Qwen-VL naturally supports multi-lingual conversation and promotes end-to-end recognition of Chinese and English bilingual text in images.
34
+ - **Multi-image interleaved conversations**: This feature allows for the input and comparison of multiple images, as well as the ability to specify questions related to the images and engage in multi-image storytelling.
35
+ - **First generalist model supporting grounding in Chinese**: It detects bounding boxes via open-domain language expressions in both Chinese and English.
36
+ - **Fine-grained recognition and understanding**: Compared to the 224 resolution currently used by other open-source LVLMs, the 448 resolution promotes fine-grained text recognition, document QA, and bounding box annotation.
37
+
38
+ <br>
39
+ <p align="center">
40
+ <img src="assets/demo_vl.gif" width="400"/>
41
+ </p>
42
+ <br>
43
+
44
+ We release two models of the Qwen-VL series:
45
+ - Qwen-VL: The pre-trained LVLM model uses Qwen-7B to initialize the LLM and [Openclip ViT-bigG](https://github.com/mlfoundations/open_clip) to initialize the visual encoder, connecting them with a randomly initialized cross-attention layer. Qwen-VL was trained on about 1.5B image-text pairs. The final image input resolution is 448.
46
+ - Qwen-VL-Chat: A multimodal LLM-based AI assistant, which is trained with alignment techniques.
47
+
48
+ For more details about Qwen-VL, please refer to our [technical memo](visual_memo.md).
49
+
50
+ ## Evaluation
51
+
52
+ We evaluated the model's ability from two perspectives:
53
+ 1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
54
+ - Zero-shot Caption: Evaluate the model's zero-shot image captioning ability on unseen datasets;
55
+ - General VQA: Evaluate the general question-answering ability on pictures, such as judgment, color, number, category, etc.;
56
+ - Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
57
+ - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
58
+
59
+ 2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
60
+ - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc.;
61
+ - To work around the current limitation that GPT4 cannot take direct image input, TouchStone provides fine-grained image annotations created by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
62
+ - The benchmark includes both English and Chinese versions.
63
+
64
+ The results of the evaluation are as follows:
65
+
66
+ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and covers a more comprehensive range of capabilities.
67
+
68
+ <p align="center">
69
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
70
+ </p>
71
+
72
+ ### Zero-shot Caption & General VQA
73
+ <table>
74
+ <thead>
75
+ <tr>
76
+ <th rowspan="2">Model type</th>
77
+ <th rowspan="2">Model</th>
78
+ <th colspan="2">Zero-shot Caption</th>
79
+ <th colspan="5">General VQA</th>
80
+ </tr>
81
+ <tr>
82
+ <th>NoCaps</th>
83
+ <th>Flickr30K</th>
84
+ <th>VQAv2<sup>dev</sup></th>
85
+ <th>OK-VQA</th>
86
+ <th>GQA</th>
87
+ <th>SciQA-Img<br>(0-shot)</th>
88
+ <th>VizWiz<br>(0-shot)</th>
89
+ </tr>
90
+ </thead>
91
+ <tbody align="center">
92
+ <tr>
93
+ <td rowspan="12">Generalist<br>Models</td>
94
+ <td>Flamingo-9B</td>
95
+ <td>-</td>
96
+ <td>61.5</td>
97
+ <td>51.8</td>
98
+ <td>44.7</td>
99
+ <td>-</td>
100
+ <td>-</td>
101
+ <td>28.8</td>
102
+ </tr>
103
+ <tr>
104
+ <td>Flamingo-80B</td>
105
+ <td>-</td>
106
+ <td>67.2</td>
107
+ <td>56.3</td>
108
+ <td>50.6</td>
109
+ <td>-</td>
110
+ <td>-</td>
111
+ <td>31.6</td>
112
+ </tr>
113
+ <tr>
114
+ <td>Unified-IO-XL</td>
115
+ <td>100.0</td>
116
+ <td>-</td>
117
+ <td>77.9</td>
118
+ <td>54.0</td>
119
+ <td>-</td>
120
+ <td>-</td>
121
+ <td>-</td>
122
+ </tr>
123
+ <tr>
124
+ <td>Kosmos-1</td>
125
+ <td>-</td>
126
+ <td>67.1</td>
127
+ <td>51.0</td>
128
+ <td>-</td>
129
+ <td>-</td>
130
+ <td>-</td>
131
+ <td>29.2</td>
132
+ </tr>
133
+ <tr>
134
+ <td>Kosmos-2</td>
135
+ <td>-</td>
136
+ <td>66.7</td>
137
+ <td>45.6</td>
138
+ <td>-</td>
139
+ <td>-</td>
140
+ <td>-</td>
141
+ <td>-</td>
142
+ </tr>
143
+ <tr>
144
+ <td>BLIP-2 (Vicuna-13B)</td>
145
+ <td>103.9</td>
146
+ <td>71.6</td>
147
+ <td>65.0</td>
148
+ <td>45.9</td>
149
+ <td>32.3</td>
150
+ <td>61.0</td>
151
+ <td>19.6</td>
152
+ </tr>
153
+ <tr>
154
+ <td>InstructBLIP (Vicuna-13B)</td>
155
+ <td><strong>121.9</strong></td>
156
+ <td>82.8</td>
157
+ <td>-</td>
158
+ <td>-</td>
159
+ <td>49.5</td>
160
+ <td>63.1</td>
161
+ <td>33.4</td>
162
+ </tr>
163
+ <tr>
164
+ <td>Shikra (Vicuna-13B)</td>
165
+ <td>-</td>
166
+ <td>73.9</td>
167
+ <td>77.36</td>
168
+ <td>47.16</td>
169
+ <td>-</td>
170
+ <td>-</td>
171
+ <td>-</td>
172
+ </tr>
173
+ <tr>
174
+ <td><strong>Qwen-VL (Qwen-7B)</strong></td>
175
+ <td>121.4</td>
176
+ <td><b>85.8</b></td>
177
+ <td><b>78.8</b></td>
178
+ <td><b>58.6</b></td>
179
+ <td><b>59.3</b></td>
180
+ <td><b>67.1</b></td>
181
+ <td><b>34.3</b></td>
182
+ </tr>
183
+ <tr>
184
+ <td>Qwen-VL (4-shot)</td>
185
+ <td>-</td>
186
+ <td>-</td>
187
+ <td>-</td>
188
+ <td>63.6</td>
189
+ <td>-</td>
190
+ <td>-</td>
191
+ <td>39.1</td>
192
+ </tr>
193
+ <tr>
194
+ <td>Qwen-VL-Chat</td>
195
+ <td>-</td>
196
+ <td>81.5</td>
197
+ <td>-</td>
198
+ <td>56.69</td>
199
+ <td>-</td>
200
+ <td>68.22</td>
201
+ <td>37.05</td>
202
+ </tr>
203
+ <tr>
204
+ <td>Qwen-VL-Chat (4-shot)</td>
205
+ <td>-</td>
206
+ <td>-</td>
207
+ <td>-</td>
208
+ <td>60.6</td>
209
+ <td>-</td>
210
+ <td>-</td>
211
+ <td>45.5</td>
212
+ </tr>
213
+ <tr>
214
+ <td>Previous SOTA<br>(Per Task Fine-tuning)</td>
215
+ <td>-</td>
216
+ <td>127.0<br>(PALI-17B)</td>
217
+ <td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
218
+ <td>86.1<br>(PALI-X<br>-55B)</td>
219
+ <td>66.1<br>(PALI-X<br>-55B)</td>
220
+ <td>72.1<br>(CFR)</td>
221
+ <td>92.53<br>(LLaVa+<br>GPT-4)</td>
222
+ <td>70.9<br>(PALI-X<br>-55B)</td>
223
+ </tr>
224
+ </tbody>
225
+ </table>
226
+
227
+ - For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and results competitive with InstructBLIP on NoCaps.
228
+ - For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
229
+
230
+ ### Text-based VQA (focused on text understanding capabilities in images)
231
+
232
+ <table>
233
+ <thead>
234
+ <tr>
235
+ <th>Model type</th>
236
+ <th>Model</th>
237
+ <th>TextVQA</th>
238
+ <th>DocVQA</th>
239
+ <th>ChartQA</th>
240
+ <th>AI2D</th>
241
+ <th>OCR-VQA</th>
242
+ </tr>
243
+ </thead>
244
+ <tbody align="center">
245
+ <tr>
246
+ <td rowspan="5">Generalist Models</td>
247
+ <td>BLIP-2 (Vicuna-13B)</td>
248
+ <td>42.4</td>
249
+ <td>-</td>
250
+ <td>-</td>
251
+ <td>-</td>
252
+ <td>-</td>
253
+ </tr>
254
+ <tr>
255
+ <td>InstructBLIP (Vicuna-13B)</td>
256
+ <td>50.7</td>
257
+ <td>-</td>
258
+ <td>-</td>
259
+ <td>-</td>
260
+ <td>-</td>
261
+ </tr>
262
+ <tr>
263
+ <td>mPLUG-DocOwl (LLaMA-7B)</td>
264
+ <td>52.6</td>
265
+ <td>62.2</td>
266
+ <td>57.4</td>
267
+ <td>-</td>
268
+ <td>-</td>
269
+ </tr>
270
+ <tr>
271
+ <td>Pic2Struct-Large (1.3B)</td>
272
+ <td>-</td>
273
+ <td><b>76.6</b></td>
274
+ <td>58.6</td>
275
+ <td>42.1</td>
276
+ <td>71.3</td>
277
+ </tr>
278
+ <tr>
279
+ <td>Qwen-VL (Qwen-7B)</td>
280
+ <td><b>63.8</b></td>
281
+ <td>65.1</td>
282
+ <td><b>65.7</b></td>
283
+ <td><b>62.3</b></td>
284
+ <td><b>75.7</b></td>
285
+ </tr>
286
+ <tr>
287
+ <td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
288
+ <td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
289
+ <td>71.44</td>
290
+ <td>80.0</td>
291
+ <td>70.0</td>
292
+ <td>81.2</td>
293
+ <td>75.0</td>
294
+ </tr>
295
+ </tbody>
296
+ </table>
297
+
298
+ - In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
299
+ - Resolution is important for several above evaluations. While most open-source LVLM models with 224 resolution are incapable of these evaluations or can only solve these by cutting images, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
300
+
301
+ ### Referring Expression Comprehension
302
+ <table>
303
+ <thead>
304
+ <tr>
305
+ <th rowspan="2">Model type</th>
306
+ <th rowspan="2">Model</th>
307
+ <th colspan="3">RefCOCO</th>
308
+ <th colspan="3">RefCOCO+</th>
309
+ <th colspan="2">RefCOCOg</th>
310
+ <th>GRIT</th>
311
+ </tr>
312
+ <tr>
313
+ <th>val</th>
314
+ <th>test-A</th>
315
+ <th>test-B</th>
316
+ <th>val</th>
317
+ <th>test-A</th>
318
+ <th>test-B</th>
319
+ <th>val-u</th>
320
+ <th>test-u</th>
321
+ <th>refexp</th>
322
+ </tr>
323
+ </thead>
324
+ <tbody align="center">
325
+ <tr>
326
+ <td rowspan="8">Generalist Models</td>
327
+ <td>GPV-2</td>
328
+ <td>-</td>
329
+ <td>-</td>
330
+ <td>-</td>
331
+ <td>-</td>
332
+ <td>-</td>
333
+ <td>-</td>
334
+ <td>-</td>
335
+ <td>-</td>
336
+ <td>51.50</td>
337
+ </tr>
338
+ <tr>
339
+ <td>OFA-L*</td>
340
+ <td>79.96</td>
341
+ <td>83.67</td>
342
+ <td>76.39</td>
343
+ <td>68.29</td>
344
+ <td>76.00</td>
345
+ <td>61.75</td>
346
+ <td>67.57</td>
347
+ <td>67.58</td>
348
+ <td>61.70</td>
349
+ </tr>
350
+ <tr>
351
+ <td>Unified-IO</td>
352
+ <td>-</td>
353
+ <td>-</td>
354
+ <td>-</td>
355
+ <td>-</td>
356
+ <td>-</td>
357
+ <td>-</td>
358
+ <td>-</td>
359
+ <td>-</td>
360
+ <td><b>78.61</b></td>
361
+ </tr>
362
+ <tr>
363
+ <td>VisionLLM-H</td>
364
+ <td></td>
365
+ <td>86.70</td>
366
+ <td>-</td>
367
+ <td>-</td>
368
+ <td>-</td>
369
+ <td>-</td>
370
+ <td>-</td>
371
+ <td>-</td>
372
+ <td>-</td>
373
+ </tr>
374
+ <tr>
375
+ <td>Shikra-7B</td>
376
+ <td>87.01</td>
377
+ <td>90.61</td>
378
+ <td>80.24 </td>
379
+ <td>81.60</td>
380
+ <td>87.36</td>
381
+ <td>72.12</td>
382
+ <td>82.27</td>
383
+ <td>82.19</td>
384
+ <td>69.34</td>
385
+ </tr>
386
+ <tr>
387
+ <td>Shikra-13B</td>
388
+ <td>87.83 </td>
389
+ <td>91.11</td>
390
+ <td>81.81</td>
391
+ <td>82.89</td>
392
+ <td>87.79</td>
393
+ <td>74.41</td>
394
+ <td>82.64</td>
395
+ <td>83.16</td>
396
+ <td>69.03</td>
397
+ </tr>
398
+ <tr>
399
+ <td>Qwen-VL-7B</td>
400
+ <td><b>89.36</b></td>
401
+ <td>92.26</td>
402
+ <td><b>85.34</b></td>
403
+ <td><b>83.12</b></td>
404
+ <td>88.25</td>
405
+ <td><b>77.21</b></td>
406
+ <td><b>85.58</b></td>
407
+ <td><b>85.48</b></td>
408
+ <td>78.22</td>
409
+ </tr>
410
+ <tr>
411
+ <td>Qwen-VL-7B-Chat</td>
412
+ <td><b>88.55</b></td>
413
+ <td><b>92.27</b></td>
414
+ <td>84.51</td>
415
+ <td>82.82</td>
416
+ <td><b>88.59</b></td>
417
+ <td>-</td>
418
+ <td>-</td>
419
+ <td>-</td>
420
+ <td>-</td>
421
+ </tr>
422
+ <tr>
423
+ <td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
424
+ <td>G-DINO-L</td>
425
+ <td>90.56&nbsp;&nbsp;</td>
426
+ <td>93.19</td>
427
+ <td>88.24</td>
428
+ <td>82.75</td>
429
+ <td>88.95</td>
430
+ <td>75.92</td>
431
+ <td>86.13</td>
432
+ <td>87.02</td>
433
+ <td>-</td>
434
+ </tr>
435
+ <tr>
436
+ <td>UNINEXT-H</td>
437
+ <td>92.64 </td>
438
+ <td>94.33</td>
439
+ <td>91.46</td>
440
+ <td>85.24</td>
441
+ <td>89.63</td>
442
+ <td>79.79</td>
443
+ <td>88.73</td>
444
+ <td>89.37</td>
445
+ <td>-</td>
446
+ </tr>
447
+ <tr>
448
+ <td>ONE-PEACE</td>
449
+ <td>92.58 </td>
450
+ <td>94.18</td>
451
+ <td>89.26</td>
452
+ <td>88.77</td>
453
+ <td>92.21</td>
454
+ <td>83.23</td>
455
+ <td>89.22</td>
456
+ <td>89.27</td>
457
+ <td>-</td>
458
+ </tr>
459
+ </tbody>
460
+ </table>
461
+
462
+ - Qwen-VL achieves the **SOTA** on all of the above referring expression comprehension benchmarks.
463
+ - Qwen-VL has not been trained on any Chinese grounding data, yet it still generalizes to Chinese grounding tasks in a zero-shot manner through training on Chinese caption data and English grounding data.
464
+
465
+ We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
466
+
467
+ ### Chat evaluation
468
+
469
+ TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
470
+
471
+ #### English evaluation
472
+
473
+ | Model | Score |
474
+ |---------------|-------|
475
+ | PandaGPT | 488.5 |
476
+ | MiniGPT4 | 531.7 |
477
+ | InstructBLIP | 552.4 |
478
+ | LLaMA-AdapterV2 | 590.1 |
479
+ | mPLUG-Owl | 605.4 |
480
+ | LLaVA | 602.7 |
481
+ | Qwen-VL-Chat | 645.2 |
482
+
483
+ #### Chinese evaluation
484
+
485
+ | Model | Score |
486
+ |---------------|-------|
487
+ | VisualGLM | 247.1 |
488
+ | Qwen-VL-Chat | 401.2 |
489
+
490
+ Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
491
+
492
+ ## Requirements
493
+
494
+ * python 3.8 and above
495
+ * pytorch 1.12 and above; 2.0 and above is recommended
496
+ * CUDA 11.4 and above is recommended (for GPU users)
497
+
498
+ ## Quickstart
499
+
500
+ Below, we provide simple examples to show how to use Qwen-VL and Qwen-VL-Chat with 🤖 ModelScope and 🤗 Transformers.
501
+
502
+ Before running the code, make sure you have set up the environment and installed the required packages. Check that you meet the above requirements, then install the dependent libraries.
503
+
504
+ ```bash
505
+ pip install -r requirements.txt
506
+ ```
507
+
508
+ Now you can start with ModelScope or Transformers. For more usage of the vision encoder, please refer to the [FAQ](FAQ.md).
509
+
510
+ #### 🤗 Transformers
511
+
512
+ To use Qwen-VL for inference, all you need to do is write a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**
513
+
514
+ ```python
515
+ from transformers import AutoModelForCausalLM, AutoTokenizer
516
+ from transformers.generation import GenerationConfig
517
+ import torch
518
+ torch.manual_seed(1234)
519
+
520
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)
521
+
522
+ # use bf16
523
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, bf16=True).eval()
524
+ # use fp16
525
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, fp16=True).eval()
526
+ # use cpu only
527
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cpu", trust_remote_code=True).eval()
528
+ # use cuda device
529
+ model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cuda", trust_remote_code=True).eval()
530
+
531
+ # Specify hyperparameters for generation
532
+ model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)
533
+
534
+ query = tokenizer.from_list_format([
535
+ {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
536
+ {'text': 'Generate the caption in English with grounding:'},
537
+ ])
538
+ inputs = tokenizer(query, return_tensors='pt')
539
+ inputs = inputs.to(model.device)
540
+ pred = model.generate(**inputs)
541
+ response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=False)
542
+ print(response)
543
+ # <img>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg</img>Generate the caption in English with grounding:<ref> Woman</ref><box>(451,379),(731,806)</box> and<ref> her dog</ref><box>(219,424),(576,896)</box> playing on the beach<|endoftext|>
544
+ image = tokenizer.draw_bbox_on_latest_picture(response)
545
+ if image:
546
+ image.save('2.jpg')
547
+ else:
548
+ print("no box")
549
+ ```
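
The grounded caption returned above interleaves each referent (`<ref>…</ref>`) with its bounding box (`<box>(x1,y1),(x2,y2)</box>`), with coordinates normalized to a 0–1000 grid. If you want the raw coordinates rather than a rendered image, a small regex helper along these lines can pull them out (an illustrative sketch, not part of the tokenizer's official API):

```python
import re

# Matches "<ref> label</ref><box>(x1,y1),(x2,y2)</box>" pairs in a response.
# Note: this helper is illustrative, not part of the tokenizer's API.
BOX_PATTERN = re.compile(
    r"<ref>(.*?)</ref><box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>"
)

def extract_boxes(response: str):
    """Return (label, (x1, y1, x2, y2)) pairs. Coordinates are in the
    model's normalized 0-1000 space; scale by width/1000 and height/1000
    to map them back to pixels."""
    return [
        (label.strip(), tuple(int(v) for v in coords))
        for label, *coords in BOX_PATTERN.findall(response)
    ]

demo = (
    "<ref> Woman</ref><box>(451,379),(731,806)</box> and"
    "<ref> her dog</ref><box>(219,424),(576,896)</box> playing on the beach"
)
print(extract_boxes(demo))
# [('Woman', (451, 379, 731, 806)), ('her dog', (219, 424, 576, 896))]
```

To draw a box yourself, multiply `x` coordinates by `image.width / 1000` and `y` coordinates by `image.height / 1000` before passing them to your drawing library.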
550
+
551
+ <p align="center">
552
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_spotting_caption.jpeg" width="500"/>
553
+ </p>
554
+
555
+
556
+ ## FAQ
557
+
558
+ If you meet problems, please refer to the [FAQ](FAQ.md) and existing issues to search for a solution before opening a new issue.
559
+
560
+
561
+ ## License Agreement
562
+
563
+ Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
564
+
565
+ ## Contact Us
566
+
567
+ If you are interested in leaving a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
568
+
tokenization_qwen.py CHANGED
```diff
@@ -18,6 +18,7 @@ from PIL import Image
 from PIL import ImageFont
 from PIL import ImageDraw
 from transformers import PreTrainedTokenizer, AddedToken
+from transformers.utils import try_to_load_from_cache
 
 import matplotlib.pyplot as plt
 import matplotlib.colors as mcolors
@@ -26,7 +27,7 @@ from matplotlib.font_manager import FontProperties
 logger = logging.getLogger(__name__)
 
 
-VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
+VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken", "ttf": "SimSun.ttf"}
 
 PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
 ENDOFTEXT = "<|endoftext|>"
@@ -410,20 +411,16 @@ class QWenTokenizer(PreTrainedTokenizer):
         if image is None:
             return None
         if image.startswith("http://") or image.startswith("https://"):
-            image = Image.open(requests.get(image, stream=True).raw)
+            image = Image.open(requests.get(image, stream=True).raw).convert("RGB")
+            h, w = image.height, image.width
         else:
-            # image = Image.open(image)
             image = plt.imread(image)
-            # h, w = image.height, image.width
-            # image = image.convert("RGB")
-            h, w = image.shape[0], image.shape[1]
+            h, w = image.shape[0], image.shape[1]
         visualizer = Visualizer(image)
 
         boxes = self._fetch_all_box_with_ref(response)
         if not boxes:
             return None
-        # fnt = ImageFont.truetype("SimSun.ttf", 50)
-        # draw = ImageDraw.Draw(image)
         color = random.choice([_ for _ in mcolors.TABLEAU_COLORS.keys()]) # init color
         for box in boxes:
             if 'ref' in box: # random new color for new refexps
@@ -496,6 +493,7 @@ class VisImage:
 class Visualizer:
     def __init__(self, img_rgb, metadata=None, scale=1.0):
         self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8)
+        self.font_path = try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf")
         self.output = VisImage(self.img, scale=scale)
         self.cpu_device = torch.device("cpu")
 
@@ -527,7 +525,7 @@ class Visualizer:
             y,
             text,
             size=font_size * self.output.scale,
-            fontproperties=FontProperties(fname=r"SimSun.ttf"),
+            fontproperties=FontProperties(fname=self.font_path),
             bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"},
             verticalalignment="top",
             horizontalalignment=horizontal_alignment,
```