zRzRzRzRzRzRzR committed
Commit c95db64 • 1 Parent(s): 3275a51

fix wrong image

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -16,7 +16,7 @@ inference: false
 # CogVLM2
 
 <div align="center">
-  <img src=https://github.com/THUDM/CogVLM2/blob/main/resources/logo.svg width="40%"/>
+  <img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/>
 </div>
 <p align="center">
 👋 Join us on <a href="https://github.com/THUDM/CogVLM2/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
@@ -52,6 +52,7 @@ Our open source models have achieved good results in many lists compared to the
 
 | Model                          | Open Source | LLM Size | TextVQA  | DocVQA   | ChartQA  | OCRbench | MMMU     | MMVet    | MMBench  |
 |--------------------------------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|
+| CogVLM1.1                      | ✅          | 7B       | 69.7     | -        | 68.3     | 590      | 37.3     | 52.0     | 65.8     |
 | LLaVA-1.5                      | ✅          | 13B      | 61.3     | -        | -        | 337      | 37.0     | 35.4     | 67.7     |
 | Mini-Gemini                    | ✅          | 34B      | 74.1     | -        | -        | -        | 48.0     | 59.3     | 80.6     |
 | LLaVA-NeXT-LLaMA3              | ✅          | 8B       | -        | 78.2     | 69.5     | -        | 41.7     | -        | 72.1     |
@@ -61,7 +62,7 @@ Our open source models have achieved good results in many lists compared to the
 | Claude3-Opus                   | ❌          | -        | -        | 89.3     | 80.8     | 694      | **59.4** | 51.7     | 63.3     |
 | Gemini Pro 1.5                 | ❌          | -        | 73.5     | 86.5     | 81.3     | -        | 58.5     | -        | -        |
 | GPT-4V                         | ❌          | -        | 78.0     | 88.4     | 78.5     | 656      | 56.8     | **67.7** | 75.0     |
-| CogVLM1.1 (Ours)               | ✅          | 7B       | 69.7     | -        | 68.3     | 590      | 37.3     | 52.0     | 65.8     |
+| CogVLM1.1                      | ✅          | 7B       | 69.7     | -        | 68.3     | 590      | 37.3     | 52.0     | 65.8     |
 | CogVLM2-LLaMA3 (Ours)          | ✅          | 8B       | 84.2     | **92.3** | 81.0     | 756      | 44.3     | 60.4     | 80.5     |
 | CogVLM2-LLaMA3-Chinese (Ours)  | ✅          | 8B       | **85.0** | 88.4     | 74.7     | **780**  | 42.8     | 60.5     | 78.9     |
 
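Why the original URL was "wrong": a github.com `/blob/` link points at the HTML file-viewer page rather than at the file's bytes, so it fails to render as an `<img>` source; raw.githubusercontent.com serves the file itself. Below is a minimal sketch of the equivalent rewrite, assuming nothing beyond the Python standard library; the `blob_to_raw` and `content_type` helper names are hypothetical, for illustration only, and not part of the CogVLM2 repo.

```python
import urllib.request

def blob_to_raw(url: str) -> str:
    """Rewrite a github.com '/blob/' viewer URL to its raw-file equivalent:
    https://github.com/<org>/<repo>/blob/<ref>/<path>
      -> https://raw.githubusercontent.com/<org>/<repo>/<ref>/<path>
    """
    prefix = "https://github.com/"
    if not url.startswith(prefix) or "/blob/" not in url:
        return url  # not a blob viewer URL; leave it untouched
    org_repo, path = url[len(prefix):].split("/blob/", 1)
    return f"https://raw.githubusercontent.com/{org_repo}/{path}"

def content_type(url: str) -> str:
    """HEAD-request a URL and return its Content-Type header."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Content-Type", "")

old = "https://github.com/THUDM/CogVLM2/blob/main/resources/logo.svg"
print(blob_to_raw(old))
# -> https://raw.githubusercontent.com/THUDM/CogVLM2/main/resources/logo.svg
print(content_type(old))               # text/html: the viewer page, not the image
print(content_type(blob_to_raw(old)))  # the file itself, which <img> can render
```

The committed URL additionally pins the path to commit 53d5d5ea… rather than main, so the logo keeps rendering even if the file on main is later moved or renamed.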