JustinLin610 committed on
Commit
313f3a0
1 Parent(s): 7595aae

Upload README.md

Files changed (1): README.md (+210, -119)
README.md CHANGED
@@ -18,61 +18,207 @@ inference: false
18
  <br>
19
 
20
  <p align="center">
21
- Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖 <a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>&nbsp | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖 <a>| <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>&nbsp | &nbsp<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>&nbsp | &nbsp<a href="https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md">Report</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/9bjvspyu">Discord</a>
22
-
 
23
  </p>
24
  <br>
25
 
26
- **Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型的特点包括:
27
- - **强大的性能**:在四大类多模态任务的标准英文测评中(Zero-shot Caption/VQA/DocVQA/Grounding)上,均取得同等通用模型大小下最好效果;
28
- - **多语言对话模型**:天然支持多语言对话,端到端支持图片里中英双语的长文本识别;
29
- - **多图交错对话**:支持多图输入和比较,指定图片问答,多图文学创作等;
30
- - **首个支持中文开放域定位的通用模型**:通过中文开放域语言表达进行检测框标注;
31
- - **细粒度识别和理解**:相比于目前其它开源LVLM使用的224分辨率,Qwen-VL是首个开源的448分辨率的LVLM模型。更高分辨率可以提升细粒度的文字识别、文档问答和检测框标注。
32
 
33
  **Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, outputs text and bounding box. The features of Qwen-VL include:
34
- - **Strong performance**: It significantly surpasses existing open-source Large Vision Language Models (LVLM) under similar scale settings on multiple English evaluation benchmarks (including Zero-shot caption, VQA, DocVQA, and Grounding).
35
- - **Multi-lingual LVLM supporting text recognization**: Qwen-VL naturally supports multi-lingual conversation, and it promotes end-to-end recognition of Chinese and English bi-lingual text in images.
36
- - **Multi-image interleaved conversations**: This feature allows for the input and comparison of multiple images, as well as the ability to specify questions related to the images and engage in multi-image storytelling.
37
- - **First generalist model support grounding in Chinese**: Detecting bounding boxes through open-domain language expression in both Chinese and English.
38
- - **Fine-grained recognization and understanding**: Compared to the 224 resolution currently used by other open-source LVLM, the 448 resolution promotes fine-grained text recognition, document QA, and bounding box annotation.
39
 
40
- 目前,我们提供了 Qwen-VL 系列的两个模型:
41
- - Qwen-VL: Qwen-VL 以 Qwen-7B 的预训练模型作为语言模型的初始化,并以 [Openclip ViT-bigG](https://github.com/mlfoundations/open_clip) 作为视觉编码器的初始化,中间加入单层随机初始化的 cross-attention,经过约1.5B的图文数据训练得到。最终图像输入分辨率为448。
42
- - Qwen-VL-Chat: Qwen-VL 的基础上,我们使用对齐机制打造了基于大语言模型的视觉AI助手Qwen-VL-Chat,其训练数据涵盖了 QWen-7B 的纯文本 SFT 数据、开源 LVLM SFT 数据、数据合成和人工标注的图文对齐数据。
43
 
44
- 如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。
45
 
46
- We release two models of the Qwen-VL series:
47
- - Qwen-VL: The pre-trained LVLM model uses Qwen-7B as the initialization of the LLM, and [Openclip ViT-bigG](https://github.com/mlfoundations/open_clip) as the initialization of the visual encoder. And connects them with a randomly initialized cross-attention layer. Qwen-VL was trained on about 1.5B image-text paired data.
48
- - Qwen-VL-Chat: A multimodal LLM-based AI assistant, which is trained with alignment techniques.
49
 
50
- For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md).
51
 
52
  ## 评测 (Evaluation)
53
 
54
  我们从两个角度评测了两个模型的能力:
55
- 1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
56
- - Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
57
- - General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
58
- - Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
59
- - Referring Expression Compression:评测模型给定物体描述画检测框的能力;
60
61
  2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
62
- - 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
63
- - 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
64
- - 评测同时包含英文版本和中文版本。
65
-
 
66
  评测结果如下:
67
 
68
  We evaluated the model's ability from two perspectives:
 
69
  1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
 
70
  - Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
71
  - General VQA: Evaluate general visual question answering on images, such as judgment, color, counting, and category questions;
72
  - Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
73
  - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
74
-
75
  2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
 
76
  - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc;
77
  - In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
78
  - The benchmark includes both English and Chinese versions.
@@ -85,7 +231,8 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has
85
  <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
86
  <p>
87
 
88
- ### Zero-shot Captioning & General VQA
 
89
  <table>
90
  <thead>
91
  <tr>
@@ -242,11 +389,10 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has
242
 
243
  - 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
244
  - 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
245
-
246
  - For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and results competitive with InstructBlip on Nocaps.
247
  - For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
248
 
249
- ### Text-oriented VQA (focuse on text understanding capabilities in images)
250
 
251
  <table>
252
  <thead>
@@ -316,11 +462,11 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has
316
 
317
  - 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
318
  - 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
319
-
320
  - In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
321
  - Resolution is important for several of the above evaluations. While most open-source LVLM models with 224 resolution cannot handle these evaluations, or can only do so by cutting the image into patches, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
322
 
323
- ### Referring Expression Comprehension
 
324
  <table>
325
  <thead>
326
  <tr>
@@ -490,13 +636,13 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has
490
 
491
  We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
492
 
493
- ### Chat evaluation
494
 
495
  TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
496
 
497
  TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
498
 
499
- #### English evaluation
500
 
501
  | Model | Score |
502
  |---------------|-------|
@@ -508,7 +654,7 @@ TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities o
508
  | LLaVA | 602.7 |
509
  | Qwen-VL-Chat | 645.2 |
510
 
511
- #### Chinese evaluation
512
 
513
  | Model | Score |
514
  |---------------|-------|
@@ -518,100 +664,45 @@ TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities o
518
  Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
519
 
520
  Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
 
521
 
522
- ## Requirements
523
-
524
- * python 3.8及以上版本
525
- * pytorch 1.12及以上版本,推荐2.0及以上版本
526
- * 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
527
-
528
- * python 3.8 and above
529
- * pytorch 1.12 and above, 2.0 and above are recommended
530
- * CUDA 11.4 and above are recommended (this is for GPU users)
531
-
532
- ## Quickstart
533
-
534
- 我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用 Qwen-VL 和 Qwen-VL-Chat。
535
-
536
- 在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
537
-
538
- Below, we provide simple examples to show how to use Qwen-VL and Qwen-VL-Chat with 🤗 Transformers.
539
-
540
- Before running the code, make sure you have setup the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
541
-
542
- ```bash
543
- pip install -r requirements.txt
544
- ```
545
-
546
- 接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
547
 
548
- Now you can start with Transformers. More usage aboue vision encoder, please refer to [tutorial](TUTORIAL_zh.md).
549
 
550
- #### 🤗 Transformers
 
551
 
552
- To use Qwen-VL-Chat for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, **please make sure that you are using the latest code.**
553
 
554
- ```python
555
- from transformers import AutoModelForCausalLM, AutoTokenizer
556
- from transformers.generation import GenerationConfig
557
- import torch
558
- torch.manual_seed(1234)
559
 
560
- # Note: The default behavior now has injection attack prevention off.
561
- tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
562
 
563
- # use bf16
564
- # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
565
- # use fp16
566
- # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
567
- # use cpu only
568
- # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cpu", trust_remote_code=True).eval()
569
- # use cuda device
570
- model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()
571
 
572
- # Specify hyperparameters for generation
573
- model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
574
 
575
- # 1st dialogue turn
576
- query = tokenizer.from_list_format([
577
- {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
578
- {'text': '这是什么'},
579
- ])
580
- response, history = model.chat(tokenizer, query=query, history=None)
581
- print(response)
582
- # 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
583
 
584
- # 2st dialogue turn
585
- response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
586
- print(response)
587
- # <ref>击掌</ref><box>(517,508),(589,611)</box>
588
- image = tokenizer.draw_bbox_on_latest_picture(response, history)
589
- if image:
590
- image.save('1.jpg')
591
- else:
592
- print("no box")
593
  ```
 
594
 
595
- <p align="center">
596
- <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
597
- <p>
598
-
599
- ## FAQ
600
-
601
- 如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
602
-
603
- If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and the issues first to search a solution before you launch a new issue.
604
-
605
-
606
- ## License Agreement
607
-
608
- 研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
609
-
610
- Researchers and developers are free to use the codes and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
611
-
612
- ## Contact Us
613
 
614
  如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。
615
 
616
  If you are interested in leaving a message for either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
617
 
 
 
 
 
 
18
  <br>
19
 
20
  <p align="center">
21
+ Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>&nbsp; | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>&nbsp; | Qwen-VL-Chat-Int4 <a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
22
+ <br>
23
+ <a href="assets/wechat.png">WeChat</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>&nbsp | &nbsp<a href="https://arxiv.org/abs/2308.12966">Report</a>
24
  </p>
25
  <br>
26
 
27
+ **Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。
28
 
29
  **Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, outputs text and bounding box. The features of Qwen-VL include:
30
 
31
+ 目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat仓库。
32
+
33
+ We release two models: Qwen-VL, the pretrained model, and Qwen-VL-Chat, the Chat model. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This is the repository for Qwen-VL-Chat.
34
+ <br>
35
+
36
+ ## 安装要求 (Requirements)
37
+
38
+ * python 3.8及以上版本
39
+ * pytorch 1.12及以上版本,推荐2.0及以上版本
40
+ * 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
41
+ * python 3.8 and above
42
+ * pytorch 1.12 and above, 2.0 and above are recommended
43
+ * CUDA 11.4 and above are recommended (this is for GPU users)
44
+ <br>
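
A quick way to confirm that your environment satisfies these requirements is a short check like the minimal sketch below (optional; the version thresholds are the ones listed above):

```python
import torch
import transformers

# Qwen-VL expects Python >= 3.8, PyTorch >= 1.12 (2.0 recommended),
# and CUDA >= 11.4 for GPU users.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available(), "| CUDA build:", torch.version.cuda)
```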
45
+
46
+ ## 快速开始 (Quickstart)
47
+
48
+ 我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat。
49
+
50
+ 在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
51
+
52
+ Below, we provide simple examples to show how to use Qwen-VL-Chat with 🤗 Transformers.
53
+
54
+ Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
55
+
56
+ ```bash
57
+ pip install -r requirements.txt
58
+ ```
59
+
60
+ 接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
61
+
62
+ Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
63
+
64
+ #### 🤗 Transformers
65
+
66
+ To use Qwen-VL-Chat for inference, all you need to do is run a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**
67
+
68
+ ```python
69
+ from transformers import AutoModelForCausalLM, AutoTokenizer
70
+ from transformers.generation import GenerationConfig
71
+ import torch
72
+ torch.manual_seed(1234)
73
+
74
+ # Note: The default behavior now has injection attack prevention off.
75
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
76
+
77
+ # use bf16
78
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
79
+ # use fp16
80
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
81
+ # use cpu only
82
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cpu", trust_remote_code=True).eval()
83
+ # use cuda device
84
+ model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()
85
+
86
+ # Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
87
+ # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
88
+
89
+ # 1st dialogue turn
90
+ query = tokenizer.from_list_format([
91
+ {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
92
+ {'text': '这是什么'},
93
+ ])
94
+ response, history = model.chat(tokenizer, query=query, history=None)
95
+ print(response)
96
+ # 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
97
+
98
+ # 2nd dialogue turn
99
+ response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
100
+ print(response)
101
+ # <ref>击掌</ref><box>(517,508),(589,611)</box>
102
+ image = tokenizer.draw_bbox_on_latest_picture(response, history)
103
+ if image:
104
+ image.save('1.jpg')
105
+ else:
106
+ print("no box")
107
+ ```
108
+
109
+ <p align="center">
110
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
111
+ <p>
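
The `from_list_format` helper shown above also accepts several images in a single query, which is how multi-image interleaved conversations can be driven. A minimal sketch, reusing the `model` and `tokenizer` loaded above (the second image URL is a hypothetical placeholder):

```python
# Compare two images in one turn. The second URL is a placeholder; replace it with your own image.
query = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
    {'image': 'https://example.com/your_second_image.jpg'},
    {'text': '这两张图片有什么区别?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```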
112
+ <br>
113
+
114
+ ## 量化 (Quantization)
115
+
116
+ ### 用法 (Usage)
117
+
118
+ 当前我们提供了基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化方案,并提供了Qwen-VL-Chat的Int4量化版本Qwen-VL-Chat-Int4 [点击此处](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)。该模型在效果评测上几乎无损,并在显存占用和推理速度上具有明显优势。
119
+
120
+ 下文说明如何使用该量化模型。开始之前,请确保你满足要求(如torch2.0及以上、transformers 4.32.0及以上,等)并安装所需的代码库:
121
+
122
+ We provide a solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and release an Int4 quantized model for Qwen-VL-Chat, [Qwen-VL-Chat-Int4](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4), which achieves nearly lossless model quality while reducing memory usage and improving inference speed.
123
+
124
+ Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
125
+
126
+ ```bash
127
+ pip install optimum
128
+ git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
129
+ pip install -v .
130
+ ```
131
+
132
+ 如遇到安装 `auto-gptq` 的问题,建议您前往官方[repo](https://github.com/PanQiWei/AutoGPTQ) 寻找合适的wheel。
133
+
134
+ 随后你便可以按照上述用法,轻松调用量化模型:
135
+
136
+ If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a wheel.
137
+
138
+ Then you can load the quantized model easily and run inference as usual:
139
+
140
+ ```python
141
+ model = AutoModelForCausalLM.from_pretrained(
142
+ "Qwen/Qwen-VL-Chat-Int4",
143
+ device_map="auto",
144
+ trust_remote_code=True
145
+ ).eval()
146
+ # Either a local path or a URL between <img></img> tags.
147
+ image_path = 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'
148
+ response, history = model.chat(tokenizer, query=f'<img>{image_path}</img>这是什么', history=None)
149
+ print(response)
150
+ ```
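
The snippet above reuses the `tokenizer` created in the Quickstart section. If you run the quantized model on its own, load a tokenizer first; the sketch below assumes the Int4 repository ships compatible tokenizer files:

```python
from transformers import AutoTokenizer

# Assumption: Qwen/Qwen-VL-Chat-Int4 provides the same tokenizer as Qwen/Qwen-VL-Chat.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True)
```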
151
+
152
+ ### 效果评测 (Performance)
153
+
154
+ 我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
155
+
156
+ We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
157
+
158
+ | Quantization | ZH | EN |
159
+ | ------------ | :--------: | :-----------: |
160
+ | BF16 | 401.2 | 645.2 |
161
+ | Int4 | 386.6 | 651.4 |
162
+
163
+ ### 推理速度 (Inference Speed)
164
+
165
+ 我们测算了在输入一张图片(即258个token)的条件下BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
166
+
167
+ We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
168
 
169
+ | Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
170
+ | ------------ | :-----------------: | :-----------------: |
171
+ | BF16 | 28.87 | 24.32 |
172
+ | Int4 | 37.79 | 34.34 |
173
 
174
+ 推理速度测算是在单卡 A100-SXM4-80G GPU上运行,使用PyTorch 2.0.1及CUDA 11.4。
 
 
175
 
176
+ The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
177
+
178
+ ### GPU显存占用 (GPU Memory Usage)
179
+
180
+ 我们还测算了在一张图片输入的条件下BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示:
181
+
182
+ We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating a single token) and for generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization, respectively. The results are shown below.
183
+
184
+ | Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
185
+ | ------------ | :---------------------------------: | :-----------------------------------: |
186
+ | BF16 | 22.60GB | 28.01GB |
187
+ | Int4 | 11.82GB | 17.23GB |
188
+
189
+ 上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
190
+
191
+ The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
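
If you only need a rough local sanity check rather than the full benchmark, the idea behind the measurement can be sketched as below. This is a minimal, illustrative helper, not the official script; `profile_generation` and the example prompt are assumptions, and the linked `profile_mm.py` remains the authoritative reference:

```python
import time
import torch

def profile_generation(model, tokenizer, query, max_new_tokens=1792):
    """Rough tokens/s and peak-GPU-memory probe for a single prompt."""
    torch.cuda.reset_peak_memory_stats()
    inputs = tokenizer(query, return_tensors='pt').to(model.device)
    start = time.time()
    output = model.generate(**inputs,
                            max_new_tokens=max_new_tokens,
                            min_new_tokens=max_new_tokens,  # force a fixed generation length
                            do_sample=False)
    elapsed = time.time() - start
    new_tokens = output.shape[1] - inputs['input_ids'].shape[1]
    print(f"{new_tokens / elapsed:.2f} tokens/s, "
          f"peak GPU memory {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")

# Example call, assuming `model` and `tokenizer` from the sections above:
# profile_generation(model, tokenizer,
#                    '<img>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg</img>这是什么')
```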
192
+ <br>
193
 
194
  ## 评测 (Evaluation)
195
 
196
  我们从两个角度评测了两个模型的能力:
197
 
198
+ 1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
199
+
200
+ - Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
201
+ - General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
202
+ - Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
203
+ - Referring Expression Comprehension:评测模型给定物体描述画检测框的能力;
204
  2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
205
+
206
+ - 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
207
+ - 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
208
+ - 评测同时包含英文版本和中文版本。
209
+
210
  评测结果如下:
211
 
212
  We evaluated the model's ability from two perspectives:
213
+
214
  1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
215
+
216
  - Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
217
  - General VQA: Evaluate general visual question answering on images, such as judgment, color, counting, and category questions;
218
  - Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
219
  - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
 
220
  2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
221
+
222
  - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc;
223
  - To work around GPT4's current inability to take images as direct input, TouchStone provides fine-grained image annotations written by human labelers. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring (see the sketch after this list).
224
  - The benchmark includes both English and Chinese versions.
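
Conceptually, each TouchStone item is scored by packing the human-written image description, the question, and the model's answer into one text prompt for the GPT4 judge. The sketch below only illustrates that flow; the prompt wording, the field names, and the `ask_gpt4` helper are hypothetical, not the official TouchStone implementation:

```python
def build_scoring_prompt(image_description: str, question: str, model_answer: str) -> str:
    # The detailed human annotation stands in for the image,
    # since the GPT4 judge used here cannot see pictures directly.
    return (
        "You are grading a vision-language assistant.\n"
        f"Image description (human-annotated): {image_description}\n"
        f"Question: {question}\n"
        f"Assistant answer: {model_answer}\n"
        "Give a score from 0 to 10 for correctness and helpfulness."
    )

# score = ask_gpt4(build_scoring_prompt(desc, question, answer))  # `ask_gpt4` is a user-supplied GPT4 call
```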
 
231
  <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
232
  <p>
233
 
234
+ ### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
235
+
236
  <table>
237
  <thead>
238
  <tr>
 
389
 
390
  - 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
391
  - 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
 
392
  - For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and results competitive with InstructBlip on Nocaps.
393
  - For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
394
 
395
+ ### 文本导向的视觉问答 (Text-oriented VQA)
396
 
397
  <table>
398
  <thead>
 
462
 
463
  - 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
464
  - 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
 
465
  - In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
466
  - Resolution is important for several of the above evaluations. While most open-source LVLM models with 224 resolution cannot handle these evaluations, or can only do so by cutting the image into patches, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
467
 
468
+ ### 细粒度视觉定位 (Referring Expression Comprehension)
469
+
470
  <table>
471
  <thead>
472
  <tr>
 
636
 
637
  We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
638
 
639
+ ### 闲聊能力测评 (Chat Evaluation)
640
 
641
  TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
642
 
643
  TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
644
 
645
+ #### 英语 (English)
646
 
647
  | Model | Score |
648
  |---------------|-------|
 
654
  | LLaVA | 602.7 |
655
  | Qwen-VL-Chat | 645.2 |
656
 
657
+ #### 中文 (Chinese)
658
 
659
  | Model | Score |
660
  |---------------|-------|
 
664
  Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
665
 
666
  Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
667
+ <br>
668
 
669
+ ## 常见问题 (FAQ)
670
 
671
+ 如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
672
 
673
+ If you run into problems, please check the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and existing issues for a solution before opening a new issue.
674
+ <br>
675
 
676
+ ## 使用协议 (License Agreement)
677
 
678
+ 研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
679
 
680
+ Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat, and to build upon them. We also allow commercial use; check the [LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE) for details. For commercial use, please fill out the [questionnaire](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
681
+ <br>
682
 
683
+ ## 引用 (Citation)
684
 
685
+ 如果你觉得我们的论文和代码对你的研究有帮助,请考虑:star: 和引用 :pencil: :)
 
686
 
687
+ If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
688
 
689
+ ```BibTeX
690
+ @article{Qwen-VL,
691
+ title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
692
+ author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
693
+ journal={arXiv preprint arXiv:2308.12966},
694
+ year={2023}
695
+ }
 
 
696
  ```
697
+ <br>
698
 
699
+ ## 联系我们 (Contact Us)
700
 
701
  如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。
702
 
703
  If you are interested in leaving a message for either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
704
 
705