chenlin committed on
Commit
a9beb09
•
1 Parent(s): ddafb6f
Files changed (2)
  1. README.md +107 -1
  2. mmstar.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,109 @@
  ---
- license: apache-2.0
+ task_categories:
+ - multiple-choice
+ - question-answering
+ - visual-question-answering
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
+ configs:
+ - config_name: mmstar
+   data_files:
+   - split: val
+     path: "mmstar.parquet"
+ dataset_info:
+ - config_name: mmstar
+   features:
+   - name: index
+     dtype: int64
+   - name: question
+     dtype: string
+   - name: image
+     dtype: image
+   - name: answer
+     dtype: string
+   - name: category
+     dtype: string
+   - name: l2_category
+     dtype: string
+   - name: meta_info
+     struct:
+     - name: source
+       dtype: string
+     - name: split
+       dtype: string
+     - name: image_path
+       dtype: string
+   splits:
+   - name: val
+     num_bytes: 44831593
+     num_examples: 1500
  ---
+
+ # MMStar (Are We on the Right Way for Evaluating Large Vision-Language Models?)
+
+ [**🌐 Homepage**](https://mmstar-benchmark.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/Lin-Chen/MMStar) | [**🤗 Paper**](https://huggingface.co/papers/2403.20330) | [**📖 arXiv**](https://arxiv.org/pdf/2403.20330.pdf) | [**GitHub**](https://github.com/MMStar-Benchmark/MMStar)
+
+ ## Dataset Details
+
+ As shown in the figure below, existing benchmarks pay little attention to whether their evaluation samples actually require visual input, or to potential leakage of those samples into the training data of LLMs and LVLMs.
+
+ <p align="center">
+ <img src="https://raw.githubusercontent.com/MMStar-Benchmark/MMStar/main/resources/4_case_in_1.png" width="80%"> <br>
+ </p>
+
+ Therefore, we introduce MMStar, an elite vision-indispensable multi-modal benchmark curated so that every sample exhibits **visual dependency**, **minimal data leakage**, and **the need for advanced multi-modal capabilities**.
+
+ 🎯 **We have released the full set of 1,500 samples for offline evaluation.** After a coarse filtering process and a manual review, we narrow a pool of 22,401 samples down to 11,607 candidates and finally select 1,500 high-quality samples to construct the MMStar benchmark.
59
+
60
+ <p align="center">
61
+ <img src="https://raw.githubusercontent.com/MMStar-Benchmark/MMStar/main/resources/data_source.png" width="80%"> <br>
62
+ </p>
63
+
64
+ In MMStar, we display **6 core capabilities** in the inner ring, with **18 detailed axes** presented in the outer ring. The middle ring showcases the number of samples for each detailed dimension. Each core capability contains a meticulously **balanced 250 samples**. We further ensure a relatively even distribution across the 18 detailed axes.
65
+
66
+ <p align="center">
67
+ <img src="https://raw.githubusercontent.com/MMStar-Benchmark/MMStar/main/resources/mmstar.png" width="60%"> <br>
68
+ </p>
69
+
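+ For convenience, here is a minimal loading sketch using the Hugging Face `datasets` library; it assumes the `mmstar` config and `val` split declared in the dataset card above, with field names taken from the `dataset_info` schema.
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ # Load the 1,500-sample validation split of MMStar (config "mmstar", split "val").
+ mmstar = load_dataset("Lin-Chen/MMStar", "mmstar", split="val")
+
+ # Inspect one sample: index, question, image (a PIL image), answer, category,
+ # l2_category, and the meta_info struct (source / split / image_path).
+ sample = mmstar[0]
+ print(sample["question"])
+ print(sample["answer"], sample["category"], sample["l2_category"])
+ sample["image"].save("example.png")
+
+ # Check the per-capability balance: each of the 6 core capabilities
+ # should contribute 250 samples.
+ print(Counter(mmstar["category"]))
+ ```
+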
+ ## 🏆 Mini-Leaderboard
+ We show a mini-leaderboard here; please see our paper or the [homepage](https://mmstar-benchmark.github.io/) for more details. Acc. is the overall accuracy on MMStar; MG ⬆ and ML ⬇ denote the multi-modal gain and multi-modal leakage metrics defined in the paper (higher MG and lower ML are better).
+
+ | Model | Acc. | MG ⬆ | ML ⬇ |
+ |----------------------------|:---------:|:------------:|:------------:|
+ | GPT4V (high) | **57.1** | **43.6** | 1.3 |
+ | InternLM-Xcomposer2 | 55.4 | 28.1 | 7.5 |
+ | LLaVA-Next-34B | 52.1 | 29.4 | 2.4 |
+ | GPT4V (low) | 46.1 | 32.6 | 1.3 |
+ | InternVL-Chat-v1.2 | 43.7 | 32.6 | **0.0** |
+ | GeminiPro-Vision | 42.6 | 27.4 | **0.0** |
+ | Sphinx-X-MoE | 38.9 | 14.8 | 1.0 |
+ | Monkey-Chat | 38.3 | 13.5 | 17.6 |
+ | Yi-VL-6B | 37.9 | 15.6 | **0.0** |
+ | Qwen-VL-Chat | 37.5 | 23.9 | **0.0** |
+ | Deepseek-VL-7B | 37.1 | 15.7 | **0.0** |
+ | CogVLM-Chat | 36.5 | 14.9 | **0.0** |
+ | Yi-VL-34B | 36.1 | 18.8 | **0.0** |
+ | TinyLLaVA | 36.0 | 16.4 | 7.6 |
+ | ShareGPT4V-7B | 33.0 | 11.9 | **0.0** |
+ | LLaVA-1.5-13B | 32.8 | 13.9 | **0.0** |
+ | LLaVA-1.5-7B | 30.3 | 10.7 | **0.0** |
+ | Random Choice | 24.6 | - | - |
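+
+ As a rough illustration of how MG and ML relate to raw scores (a sketch based on the paper's description, using hypothetical numbers; see the paper for the exact evaluation protocol): MG is the gain an LVLM gets from actually seeing the images, while ML measures how far its image-free score exceeds that of its text-only LLM base.
+
+ ```python
+ # Hypothetical accuracies (in %) for one model; not taken from the table above.
+ acc_with_images = 50.0     # LVLM evaluated with images (the reported Acc.)
+ acc_without_images = 20.0  # the same LVLM evaluated with the images removed
+ acc_llm_base = 18.0        # the corresponding text-only LLM base
+
+ # Multi-modal gain: improvement attributable to actually using the image.
+ mg = acc_with_images - acc_without_images
+
+ # Multi-modal leakage: how far the image-free LVLM exceeds its LLM base,
+ # suggesting samples (or their answers) leaked into multi-modal training data.
+ ml = max(0.0, acc_without_images - acc_llm_base)
+
+ print(f"MG = {mg:.1f}, ML = {ml:.1f}")  # MG = 30.0, ML = 2.0
+ ```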
+
+ ## 📧 Contact
+
+ - [Lin Chen](https://lin-chen.site/): chlin@mail.ustc.edu.cn
+ - [Jinsong Li](https://li-jinsong.github.io/): lijingsong@pjlab.org.cn
+
+ ## ✒️ Citation
+
+ If you find our work helpful for your research, please consider giving it a star ⭐ and a citation 📝.
+ ```bibtex
+ @article{chen2024right,
+   title={Are We on the Right Way for Evaluating Large Vision-Language Models?},
+   author={Chen, Lin and Li, Jinsong and Dong, Xiaoyi and Zhang, Pan and Zang, Yuhang and Chen, Zehui and Duan, Haodong and Wang, Jiaqi and Qiao, Yu and Lin, Dahua and Zhao, Feng},
+   journal={arXiv preprint arXiv:2403.20330},
+   year={2024}
+ }
+ ```
mmstar.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29afd74b0134cfab083a8909b5358577ab18fd41c1e612031577cfb3635531c2
+ size 41798712
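
The parquet itself is stored as a Git LFS object (about 42 MB of actual data). For reference, a minimal sketch for pulling the file from the dataset repo and reading it directly, assuming `huggingface_hub`, `pandas`, and a parquet engine such as `pyarrow` are installed:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the LFS-backed parquet from the dataset repo, then read it.
# Columns follow the dataset_info schema in the README
# (index, question, image, answer, category, l2_category, meta_info).
path = hf_hub_download(
    repo_id="Lin-Chen/MMStar",
    filename="mmstar.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.shape)             # expect 1500 rows
print(df.columns.tolist())
```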