---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- multimodal
- intelligence
size_categories:
- 1K<n<10K
license: apache-2.0
pretty_name: mmiq
---
# Dataset Card for "MM-IQ"

- [Dataset Description](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
- [Dataset Usage](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-usage)
  - [Data Downloading](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-downloading)
  - [Data Format](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-format)
  - [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
- [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)

## Dataset Description

**MM-IQ** is a new benchmark designed to evaluate the intelligence of multimodal large language models (MLLMs) through problems that demand abstract reasoning across multiple reasoning patterns. It encompasses **three input formats, six problem configurations, and eight reasoning patterns**. With **2,710 samples**, MM-IQ is the largest and most comprehensive abstract visual reasoning (AVR) benchmark for evaluating the intelligence of MLLMs, **3x and 10x** larger than the two recent benchmarks MARVEL and MathVista-IQTest, respectively. By focusing on AVR problems, MM-IQ provides a targeted assessment of the cognitive capabilities of MLLMs, contributing to a more comprehensive understanding of their strengths and limitations in the pursuit of AGI.
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />



## Paper Information

- Paper: Coming soon.
- Code: https://github.com/AceCHQ/MMIQ/tree/main
- Project: https://acechq.github.io/MMIQ-benchmark/
- Leaderboard: https://acechq.github.io/MMIQ-benchmark/#leaderboard


## Dataset Examples

Examples from MM-IQ:
1. Logical Operation Reasoning

<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/logical_AND_2664.png" style="zoom:100%;" />

<details>
<summary>🔍 Click to expand/collapse more examples</summary>

2. Mathematical Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/arithmetic_1133.png" style="zoom:120%;" />

3. 2D-geometry Reasoning
<p>Prompt: The option that best fits the given pattern of figures is ( ).</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/2D_sys_1036.png" style="zoom:40%;" />

4. 3D-geometry Reasoning
<p>Prompt: The one that matches the top view is:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/3D_view_1699.png" style="zoom:30%;" />

5. Visual Instruction Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/Visual_instruction_arrow_2440.png" style="zoom:50%;" />

6. Spatial Relationship Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/spatial_6160.png" style="zoom:120%;" />

7. Concrete Object Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/concrete_object_6167.png" style="zoom:120%;" />

8. Temporal Movement Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/temporal_rotation_1379.png" style="zoom:50%;" />

</details>

## Leaderboard

🏆 The leaderboard for *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).


## Dataset Usage

### Data Downloading


You can download the dataset with the following code (make sure you have installed [Hugging Face Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from datasets import load_dataset

# load_dataset returns a DatasetDict keyed by split;
# MM-IQ ships a single evaluation split, so select it directly
dataset = load_dataset("huanqia/MM-IQ", split="test")
```

Here are some examples of how to access the downloaded dataset:

```python
# Print the first example from the MM-IQ dataset
print(dataset[0])
print(dataset[0]['data_id'])   # the problem id
print(dataset[0]['question'])  # the question text
print(dataset[0]['answer'])    # the correct answer
print(dataset[0]['image'])     # the image content
```
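
For instance, to see how the 2,710 problems are distributed across the eight reasoning patterns, you can tally the `category` field (a minimal sketch using only the documented fields):

```python
from collections import Counter

# Tally how many problems fall under each reasoning pattern
category_counts = Counter(example["category"] for example in dataset)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```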

### Data Format

The dataset is provided in JSON format and contains the following attributes:

```json
{
    "question": [string] The question text,
    "image": [string] The image content,
    "answer": [string] The correct answer for the problem,
    "data_id": [int] The problem id,
    "category": [string] The category of the reasoning pattern
}
```
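
If you need the records as plain JSON, for example to feed an external evaluation harness, you can dump the loaded split to a JSON Lines file. This is a minimal sketch: the output file name is illustrative, and the `image` field is skipped because it is not directly JSON-serializable:

```python
import json

# Write the text fields of each record as one JSON object per line
with open("mmiq_test.jsonl", "w", encoding="utf-8") as f:
    for example in dataset:
        record = {key: example[key] for key in ("data_id", "question", "answer", "category")}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```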

### Automatic Evaluation

🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/AceCHQ/MMIQ/tree/main/mmiq).
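
For reference, here is a minimal sketch of what such an evaluation loop might look like, computing overall accuracy from a dictionary of model responses keyed by `data_id`. The `extract_choice` helper and its letter-matching heuristic are illustrative assumptions, not the official protocol; use the GitHub scripts for reported numbers:

```python
import re

def extract_choice(response):
    """Heuristically pull the last standalone option letter (A-D) from a model response."""
    matches = re.findall(r"\b([A-D])\b", response.upper())
    return matches[-1] if matches else ""

def evaluate(predictions, dataset):
    """Compare extracted choices against ground-truth answers and return accuracy."""
    correct = sum(
        extract_choice(predictions[example["data_id"]]) == example["answer"]
        for example in dataset
    )
    return correct / len(dataset)

# Usage: predictions = {2664: "The answer is B", ...}
# print(f"Accuracy: {evaluate(predictions, dataset):.2%}")
```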


## Citation

If you use the **MM-IQ** dataset in your work, please cite the paper with the following BibTeX entry:
```
@misc{cai2025mm-iq,
  title = {MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
  author = {Cai, Huanqia and Yang, Yijun and Hu, Winston},
  month = {January},
  year = {2025}
}
```

## Contact
Huanqia Cai: [caihuanqia19@mails.ucas.ac.cn](mailto:caihuanqia19@mails.ucas.ac.cn)