---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for MathVerse

- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#citation)

## Dataset Description
The capabilities of **Multi-modal Large Language Models (MLLMs)** in **visual math problem-solving** remain insufficiently evaluated and understood. We find that current benchmarks incorporate excessive visual content within their textual questions, which can help MLLMs deduce answers without truly interpreting the input diagrams.

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig1.png" width="90%"> <br>
</p>

To this end, we introduce **MathVerse**, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into **six distinct versions**, each offering a different degree of multi-modal information content, yielding **15K** test samples in total. This approach allows MathVerse to comprehensively assess ***whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning.***

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig2.png" width="90%"> <br>
    Six different versions of each problem in <b>MathVerse</b> transformed by expert annotators.
</p>
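
For quick experimentation, the dataset can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the `testmini` config/split name and the field names mentioned in the comments are assumptions based on this card, so please verify them against the repository's file listing.

```python
from datasets import load_dataset

# Load the evaluation split; "testmini" as the config and split name is an
# assumption -- check the dataset files on the Hub for the exact names.
data = load_dataset("AI4Math/MathVerse", "testmini", split="testmini")

# Each record pairs one of the six problem versions with its diagram.
sample = data[0]
print(sample.keys())  # inspect the actual fields (question, answer, image, ...)
```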

In addition, we propose a **Chain-of-Thought (CoT) Evaluation strategy** for a fine-grained assessment of the output answers. Rather than naively judging an answer as true or false, we employ GPT-4(V) to adaptively extract the crucial reasoning steps and then score each step with detailed error analysis, which reveals the quality of the intermediate CoT reasoning produced by MLLMs.

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig3.png" width="90%"> <br>
    The two phases of the CoT evaluation strategy.
</p>
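
The official prompts and scoring tools are released in the GitHub repository. As a rough illustration only, the two phases might be wired together as in the sketch below; `call_gpt4` is a hypothetical stand-in for a GPT-4(V) API call, and the prompts are paraphrased rather than the official ones.

```python
def call_gpt4(prompt: str) -> str:
    """Hypothetical wrapper around a GPT-4(V) API call."""
    raise NotImplementedError

def extract_key_steps(model_output: str) -> list[str]:
    # Phase 1: ask the judge model to pull out the crucial reasoning steps.
    prompt = (
        "Extract the key reasoning steps from the following solution, "
        "one per line:\n" + model_output
    )
    return [s for s in call_gpt4(prompt).splitlines() if s.strip()]

def score_steps(steps: list[str], ground_truth: str) -> float:
    # Phase 2: score each extracted step against the ground truth and
    # average, giving a fine-grained measure of intermediate reasoning.
    scores = []
    for step in steps:
        prompt = (
            f"Ground-truth answer: {ground_truth}\n"
            f"Reasoning step: {step}\n"
            "Reply with 1 if the step is correct, otherwise 0."
        )
        scores.append(int(call_gpt4(prompt).strip()))
    return sum(scores) / max(len(scores), 1)
```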

## Paper Information
- Code: https://github.com/ZrrSkywalker/MathVerse
- Project: https://mathverse-cuhk.github.io/
- Visualization: https://mathverse-cuhk.github.io/#visualization
- Leaderboard: https://mathverse-cuhk.github.io/#leaderboard
- Paper: https://arxiv.org/abs/2403.14624

## Dataset Examples
🖱 Click to expand the examples of the six problem versions within three subjects

<details>
<summary>🔍 Plane Geometry</summary>

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver1.png" width="50%"> <br>
</p>
</details>

<details>
<summary>🔍 Solid Geometry</summary>

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver2.png" width="50%"> <br>
</p>
</details>

<details>
<summary>🔍 Functions</summary>

<p align="center">
    <img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver3.png" width="50%"> <br>
</p>
</details>

## Leaderboard
### Contributing to the Leaderboard

🚨 The [Leaderboard](https://mathverse-cuhk.github.io/#leaderboard) is continuously being updated. 

The evaluation instructions and tools will be released soon. For now, please send your results on the ***testmini*** set to this email: 1700012927@pku.edu.cn. Please refer to the following template to prepare your result JSON file; a minimal example of writing such a file is sketched after the link below.

- [output_testmini_template.json]()
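
Until the template is published, the sketch below shows one way to serialize predictions to JSON with the standard library. The `sample_index` and `model_answer` field names are placeholders, not the official schema; defer to the template above once it is available.

```python
import json

# Hypothetical results: one entry per testmini sample. The field names
# below are placeholders -- follow the official template once released.
results = [
    {"sample_index": 0, "model_answer": "C"},
    {"sample_index": 1, "model_answer": "y = 2x + 1"},
]

with open("output_testmini.json", "w", encoding="utf-8") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```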

## Citation

If you find **MathVerse** useful for your research and applications, please kindly cite using this BibTeX:

```latex
@inproceedings{zhang2024mathverse,
  title={MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?},
  author={Renrui Zhang and Dongzhi Jiang and Yichi Zhang and Haokun Lin and Ziyu Guo and Pengshuo Qiu and Aojun Zhou and Pan Lu and Kai-Wei Chang and Peng Gao and Hongsheng Li},
  booktitle={arXiv},
  year={2024}
}
```