Commit 2d902e6
Parent(s): f0a132a

dataset_card (#1)

- [Feature]: Add dataset card (b6d4c83efdf411dc4cd1307fb745036b03e97d56)

Co-authored-by: Yuan LIU <CopyPaste001@users.noreply.huggingface.co>
README.md CHANGED
@@ -32,4 +32,91 @@ dataset_info:
---

# Dataset Card for "MMBench_dev"

## Dataset Description

* **Homepage**: https://opencompass.org.cn/mmbench
* **Repository**: https://github.com/internLM/OpenCompass/
* **Paper**: https://arxiv.org/abs/2307.06281
* **Leaderboard**: https://opencompass.org.cn/leaderboard-multimodal
* **Point of Contact**: opencompass@pjlab.org.cn

### Dataset Summary

In recent years, the field has seen a surge in the development of vision-language (VL) models, such as MiniGPT-4 and LLaVA. These models show promising performance on previously challenging tasks. However, effectively evaluating their performance has become a primary obstacle to further progress in large VL models. Traditional benchmarks like VQAv2 and COCO Caption are widely used to provide quantitative evaluations for VL models but suffer from several shortcomings:

Dataset Construction: Traditional benchmarks tend to evaluate models based on their performance on various tasks, such as image captioning and visual question answering. Unfortunately, these tasks do not fully capture the fine-grained abilities that a model possesses, potentially impeding future optimization efforts.

Evaluation Metrics: Existing evaluation metrics lack robustness. For example, VQAv2 targets a single word or phrase, while many current VL models generate sentences as outputs. Although these sentences may correctly answer the corresponding questions, the existing metric assigns a Fail score because it cannot exactly match the given answer. Moreover, recently proposed subjective evaluation metrics, such as the one used in mPLUG-Owl, offer a comprehensive evaluation of VL models, but they struggle to scale due to the significant amount of human labor required, and the results are highly biased and difficult to reproduce.

To address these limitations, we propose a novel approach: we define a set of fine-grained abilities, collect relevant questions for each ability, and introduce new evaluation strategies for a more robust assessment of model predictions. This new benchmark, called MMBench, has the following features:

Data Collection: To date, we have gathered approximately 3000 questions spanning 20 ability dimensions. Each question is in multiple-choice format with a single correct answer.

Evaluation: For a more reliable evaluation, we employ ChatGPT to match a model's prediction against the choices of a question and output the corresponding label (A, B, C, or D) as the final prediction.
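
To make the matching step concrete, the sketch below shows one way such ChatGPT-based choice matching could be implemented. It is illustrative only: the prompt wording is invented here, and `query_chatgpt` is a placeholder for whatever chat-completion call is used; MMBench's actual evaluation pipeline may differ.

```python
# Illustrative sketch only; the prompt and the `query_chatgpt` callable are
# placeholders, not the exact pipeline used by MMBench.


def build_matching_prompt(question: str, options: dict, prediction: str) -> str:
    """Ask a judge model which option a free-form prediction corresponds to."""
    option_text = "\n".join(f"{label}. {text}" for label, text in options.items())
    return (
        "Match the model answer to one of the options below.\n"
        f"Question: {question}\n"
        f"Options:\n{option_text}\n"
        f"Model answer: {prediction}\n"
        "Reply with the single letter of the best-matching option."
    )


def match_prediction(question: str, options: dict, prediction: str, query_chatgpt) -> str:
    # If the prediction already is a bare option letter, accept it directly.
    stripped = prediction.strip().rstrip(".").upper()
    if stripped in options:
        return stripped
    # Otherwise ask the judge model and take the first valid letter it returns.
    reply = query_chatgpt(build_matching_prompt(question, options, prediction))
    for char in reply.strip().upper():
        if char in options:
            return char
    return "X"  # could not be matched to any option
```

Here `options` would be a dict such as `{'A': ..., 'B': ...}` built from the per-instance option fields described below.
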
### Languages

All questions are presented in single-choice format, with the number of options ranging from 2 to 4. All questions, options, and answers are in English.

## Dataset Structure

### Data Instances

We provide an overview of an instance in MMBench as follows:

```text
{
    'index': 241,
    'question': "Identify the question that Madelyn and Tucker's experiment can best answer.",
    'hint': 'The passage below describes an experiment. Read the passage and then follow the
             instructions below.\n\nMadelyn applied a thin layer of wax to the underside of her
             snowboard and rode the board straight down a hill. Then, she removed the wax and rode
             the snowboard straight down the hill again. She repeated the rides four more times,
             alternating whether she rode with a thin layer of wax on the board or not. Her friend
             Tucker timed each ride. Madelyn and Tucker calculated the average time it took to slide
             straight down the hill on the snowboard with wax compared to the average time on the
             snowboard without wax.\nFigure: snowboarding down a hill.',
    'A': "Does Madelyn's snowboard slide down a hill in less time when it has a thin layer of wax or
          a thick layer of wax?",
    'B': "Does Madelyn's snowboard slide down a hill in less time when it has a layer of wax or
          when it does not have a layer of wax?",
    'image': xxxxxx,
    'category': 'identity_reasoning',
    'l2-category': 'attribute_reasoning',
    'split': 'dev',
    'source': 'scienceqa',
}
```

### Data Fields

* `index`: the index of the instance in the dataset.
* `question`: the question of the instance.
* `hint` (optional): the hint of the instance.
* `A`: the first option of the instance.
* `B`: the second option of the instance.
* `C` (optional): the third option of the instance.
* `D` (optional): the fourth option of the instance.
* `image`: the raw image of the instance.
* `category`: the leaf category of the instance.
* `l2-category`: the L-2 category of the instance.
* `split`: the split of the instance.
* `source`: the source dataset the instance comes from.

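As a usage illustration, the following sketch loads the dataset with the Hugging Face `datasets` library and reads the fields above. The repository id and split name are placeholders and may differ from where the data is actually hosted.

```python
# Illustrative only: replace "<namespace>/MMBench_dev" with the real dataset id.
from datasets import load_dataset

dataset = load_dataset("<namespace>/MMBench_dev", split="dev")

example = dataset[0]
print(example["index"], example["category"], example["l2-category"], example["source"])
print(example["question"])
# C and D may be absent or empty for questions with only 2 or 3 options.
for label in ("A", "B", "C", "D"):
    if example.get(label):
        print(f"{label}. {example[label]}")

image = example["image"]  # usually decoded to a PIL.Image by the Image feature
```
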
### Data Splits

Currently, MMBench contains 2974 instances in total and is split into **dev** and **test** splits according to a 4:6 ratio.

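If both splits are hosted in the same repository, a quick, illustrative way to check how many instances land in each is shown below; the dataset id is again a placeholder, and the available splits may vary.

```python
# Illustrative only: the dataset id is a placeholder; available splits may vary.
from datasets import load_dataset

splits = load_dataset("<namespace>/MMBench_dev")  # returns a DatasetDict
print({name: len(ds) for name, ds in splits.items()})
```
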
## Additional Information

### Citation Information

```bibtex
@article{MMBench,
  author  = {Yuan Liu and Haodong Duan and Yuanhan Zhang and Bo Li and Songyang Zhang and Wangbo Zhao and Yike Yuan and Jiaqi Wang and Conghui He and Ziwei Liu and Kai Chen and Dahua Lin},
  journal = {arXiv:2307.06281},
  title   = {MMBench: Is Your Multi-modal Model an All-around Player?},
  year    = {2023},
}
```
|