roufaen committed on
Commit 42c5ba0
1 Parent(s): bfd3317

update data

Files changed (3)
  1. README.md +56 -3
  2. data.json +0 -0
  3. images.rar +3 -0
README.md CHANGED
@@ -1,3 +1,56 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - visual-question-answering
+ language:
+ - en
+ pretty_name: CODIS
+ size_categories:
+ - n<1K
+ ---
+ # CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models
+
+ [**🌐 Homepage**](https://thunlp-mt.github.io/CODIS) | [**📖 arXiv**](https://arxiv.org/abs/2402.13607) | [**GitHub**](https://github.com/THUNLP-MT/CODIS)
+
+ Dataset for the paper [CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models](https://arxiv.org/abs/2402.13607).
+
+ ## Introduction
+
+ In certain situations, images need to be interpreted within a broader context. We introduce a new benchmark, named **CODIS** (**CO**ntext-**D**ependent **I**mage di**S**ambiguation), designed to assess the ability of models to use context provided in free-form text to enhance visual comprehension.
+
+ - Each image in CODIS contains inherent ambiguity that can only be resolved with additional context.
+ - The questions are deliberately designed to highlight these ambiguities, requiring external context for accurate interpretation.
+ - For every image-question pair, we provide two contexts in free-form text.
+
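This card does not document the schema of `data.json`, so the following is only a minimal loading sketch; the field names `image`, `question`, `context_1`, and `context_2` are assumptions, not confirmed by the repository.

```python
import json

# Minimal sketch for pairing each image-question entry with its two contexts.
# NOTE: the field names below are assumptions -- inspect data.json for the
# actual schema before relying on them.
with open("data.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed to be a list of annotation records

for rec in records[:3]:
    print(rec.get("image"), "-", rec.get("question"))
    # Each image-question pair is described as coming with two free-form contexts.
    print("  context 1:", rec.get("context_1"))
    print("  context 2:", rec.get("context_2"))
```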
+ ## Leaderboard
+
+ We report the results of humans and MLLMs, based on human evaluation of the answers. A model scores on an image-question pair only if its answers under both contexts are correct (a small scoring sketch follows the table below).
+
+ | Model | Loc & Ori | Temporal | Cultural | Attributes | Relationships | Average |
+ |------------------|:---------:|:--------:|:--------:|:----------:|:-------------:|:-------:|
+ | Human | 85.2 | 90.9 | 72.8 | 87.2 | 89.6 | 86.2 |
+ | GPT4-V | 33.3 | 28.4 | 25.5 | 26.7 | 51.9 | 32.3 |
+ | Gemini | 21.4 | 29.5 | 21.3 | 24.0 | 34.6 | 26.1 |
+ | LLaVA-1.5-13B | 6.0 | 4.2 | 10.6 | 14.7 | 13.5 | 9.1 |
+ | BLIP-2-11B | 6.0 | 8.4 | 4.3 | 6.7 | 11.5 | 7.4 |
+ | InstructBLIP-13B | 6.0 | 2.1 | 4.3 | 4.0 | 7.7 | 4.5 |
+ | mPLUG-Owl-2-7B | 13.1 | 9.5 | 6.4 | 12.0 | 19.2 | 11.9 |
+ | MiniGPT4-7B | 10.7 | 3.2 | 0.0 | 12.0 | 13.5 | 7.9 |
+ | LLaVA-1.5-7B | 11.9 | 5.3 | 4.3 | 9.3 | 7.7 | 7.9 |
+ | InstructBLIP-7B | 1.2 | 7.4 | 0.0 | 4.0 | 11.5 | 4.8 |
+ | Otter-7B | 2.4 | 5.3 | 4.3 | 0.0 | 5.8 | 3.4 |
+ | LLaVA-7B | 2.4 | 6.3 | 0.0 | 1.3 | 5.8 | 3.4 |
+ | Qwen-VL-Chat | 3.6 | 3.2 | 0.0 | 1.3 | 9.6 | 3.4 |
+ | OpenFlamingo-7B | 2.4 | 2.1 | 0.0 | 5.3 | 5.8 | 3.1 |
+ | BLIP-2-6.7B | 0.0 | 1.1 | 2.1 | 2.7 | 7.7 | 2.3 |
+
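As a clarification of the pairwise scoring rule, here is a small sketch of how such an accuracy could be computed once per-answer correctness judgments are available; this is an illustrative reimplementation, not the authors' evaluation code.

```python
def pairwise_accuracy(judgments):
    """Score a model under the pairing rule: an image-question pair counts
    only if the answers under both of its contexts are judged correct.

    judgments: list of (correct_under_context_1, correct_under_context_2)
    booleans, one tuple per image-question pair.
    Returns accuracy as a percentage.
    """
    if not judgments:
        return 0.0
    both_correct = sum(1 for a, b in judgments if a and b)
    return 100.0 * both_correct / len(judgments)

# Example: of three pairs, only the first has both answers correct -> 33.3%.
print(round(pairwise_accuracy([(True, True), (True, False), (False, True)]), 1))
```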
+ ## Citation
+
+ ```bibtex
+ @article{luo2024codis,
+   title={CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models},
+   author={Fuwen Luo and Chi Chen and Zihao Wan and Zhaolu Kang and Qidong Yan and Yingjie Li and Xiaolong Wang and Siyu Wang and Ziyue Wang and Xiaoyue Mi and Peng Li and Ning Ma and Maosong Sun and Yang Liu},
+   journal={arXiv preprint arXiv:2402.13607},
+   year={2024}
+ }
+ ```
data.json ADDED
The diff for this file is too large to render. See raw diff
 
images.rar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40fbec0e83d824cdc6bf39ca00c7e68afd43c5e5622dbb654c713fe1c0e33244
+ size 75069560
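
Since `images.rar` is tracked with Git LFS, the repository itself stores only the pointer above. One way to fetch the actual archive and the annotation file is through `huggingface_hub`; the repository id below is a placeholder, as this page does not spell it out.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the actual dataset repository name.
REPO_ID = "<namespace>/CODIS"

# Resolves the LFS pointer and downloads the ~75 MB archive plus the
# annotation file into the local Hugging Face cache, returning local paths.
images_path = hf_hub_download(repo_id=REPO_ID, filename="images.rar", repo_type="dataset")
data_path = hf_hub_download(repo_id=REPO_ID, filename="data.json", repo_type="dataset")
print(images_path)
print(data_path)
```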