---
license: cc-by-4.0
---

# MIBench

This dataset accompanies our EMNLP'24 (main conference) paper [MIBench: Evaluating Multimodal Large Language Models over Multiple Images](https://arxiv.org/abs/2407.15272).

## Introduction

<div align="center">
<img src="overview.webp" alt="Overview" style="width: 500px; height: auto;">
</div>

**MIBench** covers 13 sub-tasks in three typical multi-image scenarios: Multi-Image Instruction, Multimodal Knowledge-Seeking and Multimodal In-Context Learning.

- **Multi-Image Instruction**: This scenario includes instructions for perception, comparison and reasoning across multiple input images. According to the semantic types of the instructions, it is divided into five sub-tasks: General Comparison, Subtle Difference, Visual Referring, Temporal Reasoning and Logical Reasoning.

- **Multimodal Knowledge-Seeking**: This scenario examines the ability of MLLMs to acquire relevant information from external knowledge, which is provided in an interleaved image-text format. Based on the forms of external knowledge, we categorize this scenario into four sub-tasks: Fine-grained Visual Recognition, Text-Rich Images VQA, Vision-linked Textual Knowledge and Text-linked Visual Knowledge.

- **Multimodal In-Context Learning**: In-context learning is another popular scenario, in which MLLMs respond to visual questions while being provided with a series of multimodal demonstrations. To evaluate the models' multimodal in-context learning (MIC) ability in a fine-grained manner, we categorize the MIC scenario into four distinct tasks: Close-ended VQA, Open-ended VQA, Hallucination and Demo-based Task Learning.

## Examples

The following image shows examples of the three multi-image scenarios, covering a total of 13 sub-tasks. The correct answers are marked in blue.

![](example.webp)

## Data format

Below is an example of the dataset format. Each `<image>` placeholder in the `question` field marks where an image is inserted into the prompt. Note that, to ensure better reproducibility, samples in the Multimodal In-Context Learning scenario store the demonstration shots in the `context` field.

```json
{
  "id": "general_comparison_1",
  "image": [
    "image/general_comparison/test1-902-0-img0.png",
    "image/general_comparison/test1-902-0-img1.png"
  ],
  "question": "Left image is <image>. Right image is <image>. Question: Is the subsequent sentence an accurate portrayal of the two images? One lemon is cut in half and has both halves facing outward.",
  "options": [
    "Yes",
    "No"
  ],
  "answer": "Yes",
  "task": "general_comparison",
  "type": "multiple-choice",
  "context": null
}
```
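
To show how such a record might be consumed, here is a minimal Python sketch that splits the `question` at each `<image>` placeholder and interleaves the referenced image files in order. The file name `general_comparison.json` and the helper functions are illustrative assumptions, not part of the released files.

```python
import json

from PIL import Image  # pillow, used here only to open the referenced image files


def load_samples(path):
    """Load a list of MIBench-style records from a JSON file (the path is a placeholder)."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def build_prompt(sample):
    """Interleave question text and images at each <image> placeholder."""
    segments = sample["question"].split("<image>")
    images = [Image.open(p) for p in sample["image"]]
    # One image is expected per placeholder, so there is one more text segment than images.
    assert len(segments) == len(images) + 1

    interleaved = []
    for i, text in enumerate(segments):
        if text:
            interleaved.append(text)
        if i < len(images):
            interleaved.append(images[i])

    # Multiple-choice samples list their candidates in `options`;
    # in-context-learning samples additionally carry their demonstration shots in `context`.
    if sample.get("options"):
        interleaved.append("Options: " + " / ".join(sample["options"]))
    return interleaved


samples = load_samples("general_comparison.json")  # hypothetical file name
prompt_parts = build_prompt(samples[0])
```

For Multimodal In-Context Learning samples, the demonstrations stored in `context` would be assembled in the same interleaved fashion and prepended to the question.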

## Citation

If you find this dataset useful for your work, please consider citing our paper:

```bibtex
@article{liu2024mibench,
  title={MIBench: Evaluating Multimodal Large Language Models over Multiple Images},
  author={Liu, Haowei and Zhang, Xi and Xu, Haiyang and Shi, Yaya and Jiang, Chaoya and Yan, Ming and Zhang, Ji and Huang, Fei and Yuan, Chunfeng and Li, Bing and others},
  journal={arXiv preprint arXiv:2407.15272},
  year={2024}
}
```