---
license: apache-2.0
task_categories:
  - visual-question-answering
language:
  - zh
pretty_name: CMMU
size_categories:
  - 1K<n<10K
---

# CMMU

📖 Paper | 🤗 Dataset | GitHub

This repo contains the evaluation code for the paper *CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning*.

## Introduction

CMMU is a novel multi-modal benchmark designed to evaluate domain-specific knowledge across seven foundational subjects: math, biology, physics, chemistry, geography, politics, and history. It comprises 3,603 questions combining text and images, drawn from a range of Chinese exams. Spanning primary through high school levels, CMMU offers a thorough evaluation of model capabilities across different educational stages.

## Evaluation Results

We have evaluated 10 models on CMMU so far. The results are shown in the table below.

| Model | Val Avg. | Test Avg. |
| --- | --- | --- |
| InstructBLIP-13b | 0.39 | 0.48 |
| CogVLM-7b | 5.55 | 4.9 |
| ShareGPT4V-7b | 7.95 | 7.63 |
| mPLUG-Owl2-7b | 8.69 | 8.58 |
| LLaVA-1.5-13b | 11.36 | 11.96 |
| Qwen-VL-Chat-7b | 11.71 | 12.14 |
| Intern-XComposer-7b | 18.65 | 19.07 |
| Gemini-Pro | 21.58 | 22.5 |
| Qwen-VL-Plus | 26.77 | 26.9 |
| GPT-4V | 30.19 | 30.91 |

## How to use

### Load dataset

```python
from eval.cmmu_dataset import CmmuDataset

# CmmuDataset will load the *.jsonl files in data_root
dataset = CmmuDataset(data_root=your_path_to_cmmu_dataset)
```
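
Assuming `CmmuDataset` is iterable and yields question dictionaries with the fields shown in the examples below (an assumption about its interface, not documented behavior), you can inspect the loaded data like this:

```python
# Sketch: peek at the first loaded question. Assumes CmmuDataset is
# iterable and yields dicts with keys like "id" and "type".
for item in dataset:
    print(item["id"], item["type"])
    break
```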

### About fill-in-the-blank questions

For fill-in-the-blank questions, `CmmuDataset` generates one new question per `sub_question`. For example:

The original question is:

```json
{
    "type": "fill-in-the-blank",
    "question_info": "question",
    "id": "subject_1234",
    "sub_questions": ["sub_question_0", "sub_question_1"],
    "answer": ["answer_0", "answer_1"]
}
```

The converted questions are:

```json
[
    {
        "type": "fill-in-the-blank",
        "question_info": "question" + "sub_question_0",
        "id": "subject_1234-0",
        "answer": "answer_0"
    },
    {
        "type": "fill-in-the-blank",
        "question_info": "question" + "sub_question_1",
        "id": "subject_1234-1",
        "answer": "answer_1"
    }
]
```
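
The conversion itself is straightforward dictionary manipulation. As a minimal sketch (a hypothetical helper, not code from this repo), it mirrors the example above:

```python
def split_fill_in_the_blank(question):
    """Hypothetical helper mirroring the conversion above: one new
    question per sub_question, with ids suffixed by the index."""
    return [
        {
            "type": question["type"],
            "question_info": question["question_info"] + sub_q,
            "id": f'{question["id"]}-{i}',
            "answer": ans,
        }
        for i, (sub_q, ans) in enumerate(
            zip(question["sub_questions"], question["answer"])
        )
    ]
```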

### About ShiftCheck

The parameter `shift_check` is `True` by default. You can find more information about shift check in our technical report.

With shift check enabled, `CmmuDataset` generates k new questions per original question; their ids are `{original_id}-k`.
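
As a rough illustration of the idea (our reading of shift check, not the repo's implementation; see the technical report for the actual procedure), a multiple-choice question can be re-asked with its options circularly shifted:

```python
def shift_options(options, answer_idx, k):
    """Illustrative only: rotate the options right by k positions and
    return the shifted list together with the new correct index."""
    n = len(options)
    k = k % n
    shifted = options[n - k:] + options[:n - k]
    return shifted, (answer_idx + k) % n

# Each shift k would produce one variant question, e.g. id {original_id}-k.
```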

### Evaluate

The output should be a list of JSON dictionaries; the required keys are as follows:

```json
{
    "question_id": "question id",
    "answer": "answer"
}
```
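
For example, a prediction file in this format can be written with the standard `json` module (the file name and entries below are placeholders):

```python
import json

# Placeholder predictions; use the ids produced by CmmuDataset,
# e.g. "subject_1234-0" for split fill-in-the-blank questions.
predictions = [
    {"question_id": "subject_1234-0", "answer": "answer_0"},
]
with open("your_pred_file.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False, indent=2)
```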

The current code calls the GPT-4 API through `AzureOpenAI`, so you may need to modify `eval/chat_llm.py` to create your own client. Before running the evaluation, set the environment variables `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
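
For reference, a minimal client built with the official `openai` package might look like the following (the `api_version` value is an assumption; `eval/chat_llm.py` may construct its client differently):

```python
import os
from openai import AzureOpenAI

# Minimal sketch of an Azure client; adapt eval/chat_llm.py as needed.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",  # assumption: use a version your deployment supports
)
```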

### Run

```shell
python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset
```

**NOTE:** We evaluate fill-in-the-blank questions using GPT-4 by default. If you do not have access to GPT-4, you can fall back to a rule-based grader by passing `--gpt none`; however, be aware that the results might differ from the official ones.

```shell
python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset --gpt none
```
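
A rule-based check can be as simple as normalized exact matching; the helper below is purely illustrative (hypothetical, not the repo's implementation):

```python
def rule_based_match(pred, gold):
    """Hypothetical fallback: exact match after normalizing whitespace and case."""
    def normalize(s):
        return "".join(str(s).split()).lower()
    return normalize(pred) == normalize(gold)
```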

To evaluate specific types of questions, use the `--qtype` parameter, for example:

```shell
python eval/evaluate.py --result example/gpt4v_results_val.json --data_root your_path_to_cmmu_dataset --qtype fbq mrq
```

## Citation