---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- zh
pretty_name: CMMU
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: type
    dtype: string
  - name: grade_band
    dtype: string
  - name: difficulty
    dtype: string
  - name: question_info
    dtype: string
  - name: split
    dtype: string
  - name: subject
    dtype: string
  - name: sub_questions
    sequence: string
  - name: options
    sequence: string
  - name: answer
    sequence: string
  - name: solution_info
    dtype: string
  - name: id
    dtype: string
  - name: image
    dtype: image
configs:
- config_name: default
  data_files:
  - split: val
    path:
    - "val/*.parquet"
---
# CMMU
[**📖 Paper**](https://arxiv.org/abs/2401.14011) | [**🤗 Dataset**](https://huggingface.co/datasets/BAAI/CMMU) | [**GitHub**](https://github.com/FlagOpen/CMMU)

This repo contains the data for the paper [**CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning**](https://arxiv.org/abs/2401.14011); the evaluation code is available in the [GitHub repository](https://github.com/FlagOpen/CMMU).

We release the validation set of CMMU; you can download it from [here](https://huggingface.co/datasets/BAAI/CMMU). The test set will be hosted on the [FlagEval platform](https://flageval.baai.ac.cn/), where users can evaluate their models by uploading them.
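
As a quick start, here is a minimal sketch for loading the released split with the 🤗 `datasets` library. It assumes only what the card metadata above declares: a default config that maps `val/*.parquet` to a `val` split.

```python
from datasets import load_dataset

# Load the CMMU validation split from the Hugging Face Hub;
# the default config reads the parquet files under val/.
ds = load_dataset("BAAI/CMMU", split="val")

print(ds)                               # features and number of rows
print(ds[0]["subject"], ds[0]["type"])  # peek at one record
```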

## Introduction
CMMU is a novel multi-modal benchmark designed to evaluate domain-specific knowledge across seven foundational subjects: math, biology, physics, chemistry, geography, politics, and history. It comprises 3,603 questions, incorporating both text and images, drawn from a range of Chinese exams. Spanning primary school to high school, CMMU offers a thorough evaluation of model capabilities across different educational stages.
![Example questions from CMMU](assets/example.png)
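
To get a feel for the schema declared in the card metadata, the sketch below counts validation questions per subject and walks through one multi-part question. Field names follow the feature list above; the exact string format of values such as `question_info` is an assumption worth checking against a real record.

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("BAAI/CMMU", split="val")

# Distribution of the seven subjects in the validation split.
print(Counter(ds["subject"]))

# Each record pairs a stem with parallel sequences of
# sub-questions, options, and answers (see the feature list above).
row = ds[0]
print(row["grade_band"], row["difficulty"], row["type"])
for sub_q, ans in zip(row["sub_questions"], row["answer"]):
    print(sub_q, "->", ans)
```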

## Evaluation Results
We have evaluated 10 models on CMMU so far; the results are shown in the table below.

| Model                      | Val Avg. | Test Avg. |
|----------------------------|----------|-----------|
| InstructBLIP-13b           | 0.39     | 0.48      |
| CogVLM-7b                  | 5.55     | 4.9       |
| ShareGPT4V-7b              | 7.95     | 7.63      |
| mPLUG-Owl2-7b              | 8.69     | 8.58      |
| LLaVA-1.5-13b              | 11.36    | 11.96     |
| Qwen-VL-Chat-7b            | 11.71    | 12.14     |
| InternLM-XComposer-7b      | 18.65    | 19.07     |
| Gemini-Pro                 | 21.58    | 22.5      |
| Qwen-VL-Plus               | 26.77    | 26.9      |
| GPT-4V                     | 30.19    | 30.91     |


## Citation
**BibTeX:**
```bibtex
@article{he2024cmmu,
  title={CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning},
  author={Zheqi He and Xinya Wu and Pengfei Zhou and Richeng Xuan and Guang Liu and Xi Yang and Qiannan Zhu and Hua Huang},
  journal={arXiv preprint arXiv:2401.14011},
  year={2024}
}
```