---
license: cc-by-4.0
---

# MIBench

This dataset accompanies our EMNLP 2024 (main conference) paper [MIBench: Evaluating Multimodal Large Language Models over Multiple Images](https://arxiv.org/abs/2407.15272).

## Introduction

### Overview

MIBench covers 13 sub-tasks in three typical multi-image scenarios: Multi-Image Instruction, Multimodal Knowledge-Seeking and Multimodal In-Context Learning.

- **Multi-Image Instruction**: This scenario includes instructions for perception, comparison and reasoning across multiple input images. According to the semantic types of the instructions, it is divided into five sub-tasks: General Comparison, Subtle Difference, Visual Referring, Temporal Reasoning and Logical Reasoning.

- **Multimodal Knowledge-Seeking**: This scenario examines the ability of MLLMs to acquire relevant information from external knowledge, which is provided in an interleaved image-text format. Based on the forms of external knowledge, we categorize this scenario into four sub-tasks: Fine-grained Visual Recognition, Text-Rich Images VQA, Vision-linked Textual Knowledge and Text-linked Visual Knowledge.

- **Multimodal In-Context Learning**: In-context learning is another popular scenario, in which MLLMs answer visual questions while being provided with a series of multimodal demonstrations. To evaluate the model's multimodal in-context learning (MIC) ability in a fine-grained manner, we categorize this scenario into four sub-tasks: Close-ended VQA, Open-ended VQA, Hallucination and Demo-based Task Learning.

### Examples

The following image shows examples of the three multi-image scenarios, covering all 13 sub-tasks. The correct answers are marked in blue.

## Data format

The example below shows the dataset format. Each `<image>` tag in the `question` field marks where an image should be inserted. Note that, to ensure better reproducibility, for the Multimodal In-Context Learning scenario we store the context information of the different shots in the `context` field.

```json
{
    "id": "general_comparison_1",
    "image": [
        "image/general_comparison/test1-902-0-img0.png",
        "image/general_comparison/test1-902-0-img1.png"
    ],
    "question": "Left image is <image>. Right image is <image>. Question: Is the subsequent sentence an accurate portrayal of the two images? One lemon is cut in half and has both halves facing outward.",
    "options": [
        "Yes",
        "No"
    ],
    "answer": "Yes",
    "task": "general_comparison",
    "type": "multiple-choice",
    "context": null
}
```
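As a minimal sketch of how a sample can be consumed, assuming the annotations of a sub-task are stored as a JSON list in a file like `general_comparison.json` (a hypothetical name) with image paths relative to the dataset root, each `<image>` placeholder can be paired with its corresponding image:

```python
import json
from PIL import Image

# Hypothetical annotation file; adjust the name to the actual file in this repo.
with open("general_comparison.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

sample = samples[0]

# Load the images in the order their <image> placeholders appear in the question.
images = [Image.open(path) for path in sample["image"]]

# Interleave text segments and images by splitting on <image>;
# the i-th placeholder corresponds to sample["image"][i].
segments = sample["question"].split("<image>")
interleaved = []
for i, text in enumerate(segments):
    if text:
        interleaved.append(text)
    if i < len(images):
        interleaved.append(images[i])

print("Options:", sample["options"])
print("Ground truth:", sample["answer"])
```

The resulting `interleaved` list can then be converted into whatever prompt format a particular MLLM expects; for the Multimodal In-Context Learning sub-tasks, the demonstration shots stored in the `context` field would be prepended analogously.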

## Citation

If you find this dataset useful for your work, please consider citing our paper:

```bibtex
@article{liu2024mibench,
  title={MIBench: Evaluating Multimodal Large Language Models over Multiple Images},
  author={Liu, Haowei and Zhang, Xi and Xu, Haiyang and Shi, Yaya and Jiang, Chaoya and Yan, Ming and Zhang, Ji and Huang, Fei and Yuan, Chunfeng and Li, Bing and others},
  journal={arXiv preprint arXiv:2407.15272},
  year={2024}
}
```