Update README.md
README.md
CHANGED
@@ -19,7 +19,7 @@ inference: false
<img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/>
</div>
<p align="center">
-<a href="resources/WECHAT.md" target="_blank">Wechat</a> · <a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · <a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a>
+<a href="resources/WECHAT.md" target="_blank">Wechat</a> · <a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · <a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a> · <a href="https://arxiv.org/pdf/2408.16500" target="_blank">Paper</a>
</p>
<p align="center">
Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/dev/api#glm-4v">ZhipuAI Open Platform</a>.
@@ -50,20 +50,20 @@ You can see the details of the CogVLM2 family of open source models in the table

Compared with the previous generation of open-source CogVLM models, our open-source models achieve strong results on many benchmarks, and their performance is competitive with some closed-source models, as shown in the table below:

+| Model                      | Open Source | LLM Size | TextVQA  | DocVQA   | ChartQA  | OCRbench | VCR_EASY | VCR_HARD | MMMU     | MMVet    | MMBench  |
+|----------------------------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+| CogVLM1.1                  | ✅          | 7B       | 69.7     | -        | 68.3     | 590      | 73.9     | 34.6     | 37.3     | 52.0     | 65.8     |
+| LLaVA-1.5                  | ✅          | 13B      | 61.3     | -        | -        | 337      | -        | -        | 37.0     | 35.4     | 67.7     |
+| Mini-Gemini                | ✅          | 34B      | 74.1     | -        | -        | -        | -        | -        | 48.0     | 59.3     | 80.6     |
+| LLaVA-NeXT-LLaMA3          | ✅          | 8B       | -        | 78.2     | 69.5     | -        | -        | -        | 41.7     | -        | 72.1     |
+| LLaVA-NeXT-110B            | ✅          | 110B     | -        | 85.7     | 79.7     | -        | -        | -        | 49.1     | -        | 80.5     |
+| InternVL-1.5               | ✅          | 20B      | 80.6     | 90.9     | **83.8** | 720      | 14.7     | 2.0      | 46.8     | 55.4     | **82.3** |
+| QwenVL-Plus                | ❌          | -        | 78.9     | 91.4     | 78.1     | 726      | -        | -        | 51.4     | 55.7     | 67.0     |
+| Claude3-Opus               | ❌          | -        | -        | 89.3     | 80.8     | 694      | 63.85    | 37.8     | **59.4** | 51.7     | 63.3     |
+| Gemini Pro 1.5             | ❌          | -        | 73.5     | 86.5     | 81.3     | -        | 62.73    | 28.1     | 58.5     | -        | -        |
+| GPT-4V                     | ❌          | -        | 78.0     | 88.4     | 78.5     | 656      | 52.04    | 25.8     | 56.8     | **67.7** | 75.0     |
+| **CogVLM2-LLaMA3**         | ✅          | 8B       | 84.2     | **92.3** | 81.0     | 756      | **83.3** | **38.0** | 44.3     | 60.4     | 80.5     |
+| **CogVLM2-LLaMA3-Chinese** | ✅          | 8B       | **85.0** | 88.4     | 74.7     | **780**  | 79.9     | 25.1     | 42.8     | 60.5     | 78.9     |

All results were obtained without using any external OCR tools ("pixel only").

## Quick Start
@@ -159,6 +159,15 @@ This model is released under the CogVLM2 [LICENSE](LICENSE). For models built wi

If you find our work helpful, please consider citing the following papers:

```
+@misc{hong2024cogvlm2,
+    title={CogVLM2: Visual Language Models for Image and Video Understanding},
+    author={Hong, Wenyi and Wang, Weihan and Ding, Ming and Yu, Wenmeng and Lv, Qingsong and Wang, Yan and Cheng, Yean and Huang, Shiyu and Ji, Junhui and Xue, Zhao and others},
+    year={2024},
+    eprint={2408.16500},
+    archivePrefix={arXiv},
+    primaryClass={cs.CV}
+}
+
@misc{wang2023cogvlm,
    title={CogVLM: Visual Expert for Pretrained Language Models},
    author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},