Commit 4989b16 (parent 5ef1a2d) by rayguan: Update README.md

Files changed (1): README.md (+132, -2)
# HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination & Visual Illusion in Large Vision-Language Models

You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models

[Tianrui Guan*](https://tianruiguan.phd), [Fuxiao Liu*](https://fuxiaoliu.github.io/), Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou

🔥🔥🔥
## We welcome everyone to contribute failure cases of Large Multimodal Models (e.g., GPT-4V) to our community!
🔥🔥🔥

Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvements to image reasoning tasks, as shown by the recently released GPT-4V(ision), LLaVA-1.5, and others. However, the strong language prior in these SOTA LVLMs can be a double-edged sword: they may ignore the image context and rely solely on the (even contradictory) language prior for reasoning. In contrast, the vision modules in VLMs are weaker than the LLMs and may produce misleading visual representations, which the LLMs then translate into confident mistakes. To study these two types of VLM mistakes, i.e., language hallucination and visual illusion, we curated HallusionBench, an image-context reasoning benchmark that remains challenging even for GPT-4V and LLaVA-1.5. We provide a detailed analysis of the examples in HallusionBench, which sheds new light on the illusions and hallucinations of VLMs and how to improve them in the future.

If you find our paper useful, please cite it:
```bibtex
@misc{guan2023hallusionbench,
      title={HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination & Visual Illusion in Large Vision-Language Models},
      author={Tianrui Guan and Fuxiao Liu and Xiyang Wu and Ruiqi Xian and Zongxia Li and Xiaoyu Liu and Xijun Wang and Lichang Chen and Furong Huang and Yaser Yacoob and Dinesh Manocha and Tianyi Zhou},
      year={2023},
      eprint={2310.14566},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{liu2023mitigating,
      title={Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning},
      author={Fuxiao Liu and Kevin Lin and Linjie Li and Jianfeng Wang and Yaser Yacoob and Lijuan Wang},
      year={2023},
      eprint={2306.14565},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Updates
- [11/28] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2310.14566). The dataset is expanded and the leaderboard is updated.
- [11/13] 🔥 Evaluation results on LLaVA-1.5 are updated. More model results to come!
- [10/27] 🔥 The [leaderboard](https://paperswithcode.com/sota/visual-question-answering-vqa-on-3) and evaluation code are released! **Welcome to update your model on our leaderboard!**
- [10/24] 🔥 The early report with case analysis and insights is available [here](https://arxiv.org/abs/2310.14566).
- [10/23] 🔥 Please check out our previous work on mitigating hallucinations of LMMs: ["Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning"](https://github.com/FuxiaoLiu/LRV-Instruction).

## Dataset Download

To keep evaluation simple, we provide the questions only in the form of yes/no questions.

| Updated on | Questions and Annotations | Figures | Question Count | Figure Count |
| ----------- | :----: | :----: | :----: | :----: |
| Oct 27, 2023 | [HallusionBench.json](./HallusionBench.json) | [hallusion_bench.zip](https://drive.google.com/file/d/1eeO1i0G9BSZTE1yd5XeFwmrbe1hwyf_0/view?usp=sharing) | 254 | 69 |
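
If you prefer to script the download, the following is a minimal sketch using the third-party `gdown` package (an assumption; any Google Drive client works) together with Python's `zipfile`. The file id comes from the Drive link in the table above.

```
import zipfile

import gdown  # assumed helper for Google Drive downloads: pip install gdown

# File id taken from the hallusion_bench.zip link in the table above.
url = "https://drive.google.com/uc?id=1eeO1i0G9BSZTE1yd5XeFwmrbe1hwyf_0"
gdown.download(url, "hallusion_bench.zip", quiet=False)

# Unzip next to HallusionBench.json so the relative paths in the annotations resolve.
with zipfile.ZipFile("hallusion_bench.zip") as z:
    z.extractall(".")
```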

### Evaluation

1. Clone the repo.
```
git clone https://github.com/tianyi-lab/HallusionBench.git
cd ./HallusionBench
```

2. Download the images [hallusion_bench.zip](https://drive.google.com/file/d/1eeO1i0G9BSZTE1yd5XeFwmrbe1hwyf_0/view?usp=sharing) and unzip the folder in the same directory.

3. The questions and image locations are saved in `./HallusionBench.json`. A data sample looks like this:
```
{'category': 'VD', 'subcategory': 'illusion', 'visual_input': '1', 'set_id': '0', 'figure_id': '0', 'sample_note': 'circle', 'question_id': '0', 'question': 'Is the right orange circle the same size as the left orange circle?', 'gt_answer_details': 'The right orange circle is the same size as the left orange circle.', 'gt_answer': '1', 'filename': './hallusion_bench/VD/illusion/0_0.png'}
```
The key `visual_input` indicates whether the question requires visual input such as an image. If `visual_input=1`, the question needs visual input; if `visual_input=0`, it does not and is a text-only question.

4. Run your model on `./HallusionBench.json` and save the output file as `./HallusionBench_result.json`. Add the output of your model under the key `'model_prediction'`. We provide a sample result [here](./HallusionBench_result_sample.json). A minimal end-to-end sketch is shown after these steps.
5. Finally, run the following command for evaluation:
```
python evaluation.py
```

You can use your own API key for the GPT-4 evaluation by editing the code [here](./utils.py#L10).
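
As a concrete illustration of steps 3 and 4, the sketch below loads the questions, queries a model, and writes `./HallusionBench_result.json`. It assumes `./HallusionBench.json` holds a list of records in the format shown above; `predict()` is a placeholder for your own inference code.

```
import json

def predict(question, image_path=None):
    # Placeholder for your model's inference call; replace with real code.
    raise NotImplementedError

# Step 3: load the questions and annotations (a list of records like the sample above).
with open("./HallusionBench.json", "r") as f:
    data = json.load(f)

for sample in data:
    # visual_input == "0" marks text-only questions; otherwise pass the image path.
    image = None if sample["visual_input"] == "0" else sample["filename"]
    # Step 4: store the model's raw answer under the key read by evaluation.py.
    sample["model_prediction"] = predict(sample["question"], image)

# Save the predictions alongside the original annotations.
with open("./HallusionBench_result.json", "w") as f:
    json.dump(data, f, indent=4)
```

The resulting `./HallusionBench_result.json` can then be scored with `python evaluation.py` as in step 5.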

## Leaderboard

### Definition
* **Visual Dependent (VD) Questions**: questions that do not have an affirmative answer without the visual context.
    * **Easy**: Original images obtained from the Internet.
    * **Hard**: Images edited from the original images.
* **Visual Supplement (VS) Questions**: questions that can be answered without the visual input; the visual component merely provides supplemental information.
    * **Easy**: No visual input. An uncertain answer without hallucination is also considered a correct response.
    * **Hard**: With visual input. The answer must follow the provided figure and visual context.

### Metric

* **Accuracy per Figure (Consistency Test)**: Accuracy computed per figure. To make sure the model truly understands the image, we ask several variants of a question based on the same knowledge about the same figure, and count the figure as correct only if the model answers all of its questions correctly. For example, the model should not give inconsistent responses to the questions "Is A bigger than B?" and "Is B smaller than A?".
* **Accuracy per Question**: Accuracy over all questions, including both easy and hard questions.
* **Accuracy per Question Pair**: We ask the same question on similar images (or with and without an image). We consider the same question text on different visual contexts a **question pair** (usually an *easy* question and a corresponding *hard* question). This metric calculates the accuracy over all question pairs.
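
For intuition, the following sketch shows how these three metrics could be aggregated from per-question correctness. It is illustrative only (the reported numbers come from `evaluation.py`); it assumes that `category`, `subcategory`, `set_id`, `figure_id`, and `question_id` from the annotation format above identify figures and question pairs, and that a figure or a pair counts as correct only when every question in it is answered correctly.

```
from collections import defaultdict

def aggregate(results):
    # `results`: records in the HallusionBench.json format plus a boolean "correct" flag.
    per_figure = defaultdict(list)  # all questions asked about one figure
    per_pair = defaultdict(list)    # the same question across visual contexts
    for r in results:
        base = (r["category"], r["subcategory"], r["set_id"])
        per_figure[base + (r["figure_id"],)].append(r["correct"])
        per_pair[base + (r["question_id"],)].append(r["correct"])

    question_acc = sum(r["correct"] for r in results) / len(results)
    figure_acc = sum(all(v) for v in per_figure.values()) / len(per_figure)  # consistency test
    pair_acc = sum(all(v) for v in per_pair.values()) / len(per_pair)
    return {"question_acc": question_acc, "figure_acc": figure_acc, "pair_acc": pair_acc}
```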

| Model | Question Pair Acc | Figure Acc | Easy Question Acc | Hard Question Acc | Question Acc | JSON |
| ----- | :----: | :----: | :----: | :----: | :----: | :----: |
| **GPT4V** <br />Sep 25, 2023 Version <br />(Human Eval) | 31.42 | 44.22 | 79.56 | 38.37 | 67.58 | [VD](), [VS]() |
| **GPT4V** <br />Sep 25, 2023 Version <br />(GPT Eval) | 28.79 | 39.88 | 75.60 | 37.67 | 65.28 | [VD](), [VS]() |
| **LLaVA-1.5** <br />(Human Eval) | 9.45 | 25.43 | 50.77 | 29.07 | 47.12 | [VD](), [VS]() |
| **LLaVA-1.5** <br />(GPT Eval) | 10.55 | 24.86 | 49.67 | 29.77 | 46.94 | [VD](), [VS]() |
| **BLIP2-T5** <br />(GPT Eval) | 15.16 | 20.52 | 45.49 | 43.49 | 48.09 | [VD](), [VS]() |
| **InstructBLIP** <br />(GPT Eval) | 9.45 | 10.11 | 35.60 | 45.12 | 45.26 | [VD](), [VS]() |
| **Qwen-VL** <br />(GPT Eval) | 5.93 | 6.65 | 31.43 | 24.88 | 39.15 | [VD](), [VS]() |
| **Open-Flamingo** <br />(GPT Eval) | 6.37 | 11.27 | 39.56 | 27.21 | 38.44 | [VD](), [VS]() |
| **MiniGPT5** <br />(GPT Eval) | 10.55 | 9.83 | 36.04 | 28.37 | 40.30 | [VD](), [VS]() |
| **MiniGPT4** <br />(GPT Eval) | 8.79 | 10.12 | 31.87 | 27.67 | 35.78 | [VD](), [VS]() |
| **mPLUG_Owl-v2** <br />(GPT Eval) | 13.85 | 19.94 | 44.84 | 39.07 | 47.30 | [VD](), [VS]() |
| **mPLUG_Owl-v1** <br />(GPT Eval) | 9.45 | 10.40 | 39.34 | 29.77 | 43.93 | [VD](), [VS]() |
| **GiT** <br />(GPT Eval) | 5.27 | 6.36 | 26.81 | 31.86 | 34.37 | [VD](), [VS]() |

### Reproduce GPT-4V results on the leaderboard

1. We saved the output of GPT-4V together with our annotations. Put `HallusionBench.tsv` in the root directory of this repo, or set `input_file_name` in [gpt4v_benchmark.py](./gpt4v_benchmark.py) to the location of the [HallusionBench.tsv](https://drive.google.com/file/d/1q8db7-7IlA4WLZ_5Jt-TpLDyAWg8Ybx4/view?usp=sharing) file.

2. (Optional) If you don't have access to the GPT API, you don't need to run the evaluation, since we have saved the evaluation results. They can be downloaded for [Visual Dependent]() and [Visual Supplement](). Put the JSON files in the root directory of this repo, or set `save_json_path_vd` and `save_json_path_vs` in [gpt4v_benchmark.py](./gpt4v_benchmark.py) to their respective locations.

3. Run `python gpt4v_benchmark.py`.
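
For reference, steps 1 and 2 amount to pointing a few variables in `gpt4v_benchmark.py` at local paths. A sketch of what that edit might look like (the variable names are the ones referenced above; the paths are placeholders and the actual layout of the script may differ):

```
# Inside gpt4v_benchmark.py (placeholder paths; adjust to where you saved the files).
input_file_name = "./HallusionBench.tsv"      # step 1: GPT-4V output with annotations
save_json_path_vd = "./gpt4v_vd_output.json"  # step 2: saved Visual Dependent results (placeholder name)
save_json_path_vs = "./gpt4v_vs_output.json"  # step 2: saved Visual Supplement results (placeholder name)
```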

## Examples and Analysis
<p align="center">
  <img src="./examples/f-01.png" alt="Example 1" class="center" width="800"/>
  <img src="./examples/f-02.png" alt="Example 2" class="center" width="800"/>
  <img src="./examples/f-04.png" alt="Example 3" class="center" width="800"/>
  <img src="./examples/f-05.png" alt="Example 4" class="center" width="800"/>
  <img src="./examples/f-08.png" alt="Example 5" class="center" width="800"/>
  <img src="./examples/f-15.png" alt="Example 6" class="center" width="800"/>
  <img src="./examples/f-10.png" alt="Example 7" class="center" width="800"/>
  <img src="./examples/f-12.png" alt="Example 8" class="center" width="800"/>
  <img src="./examples/f-17.png" alt="Example 9" class="center" width="800"/>
</p>

---
license: bsd-3-clause
---