Title: Robust Multimodal Large Language Models Against Modality Conflict

URL Source: https://arxiv.org/html/2507.07151

Markdown Content:
###### Abstract

Despite the impressive capabilities of multimodal large language models (MLLMs) in vision-language tasks, they are prone to hallucinations in real-world scenarios. This paper investigates the hallucination phenomenon in MLLMs from the perspective of modality conflict. Unlike existing works focusing on the conflicts between model responses and inputs, we study the inherent conflicts in inputs from different modalities that place MLLMs in a dilemma and directly lead to hallucinations. We formally define the modality conflict and construct a dataset named Multimodal Modality Conflict (MMMC) to simulate this phenomenon in vision-language tasks. Three methods based on prompt engineering, supervised fine-tuning, and reinforcement learning are proposed to alleviate the hallucination caused by modality conflict. Extensive experiments are conducted on the MMMC dataset to analyze the merits and demerits of these methods. Our results show that the reinforcement learning method achieves the best performance in mitigating the hallucination under modality conflict, while the supervised fine-tuning method shows promising and stable performance. Our work sheds light on the unnoticed modality conflict that leads to hallucinations and provides more insights into the robustness of MLLMs. The code and dataset are available at [https://github.com/zmzhang2000/MMMC](https://github.com/zmzhang2000/MMMC).

Multimodal Large Language Models, Modality Conflict, Hallucinations, Reinforcement Learning, Robustness



Figure 1: An example of modality conflict in vision-language tasks. Given an image depicting a dog surfing on the sea, the user may ask the question “What color is the ball?”. The model may hallucinate a response “The ball in the image is green”, while there is no ball in the image. We expect the model to recognize the conflict between the visual input and the textual input and give a response like “The image does not contain a ball”.
| 15 |
+
|
| 16 |
+
1 Introduction
|
| 17 |
+
--------------
|
| 18 |
+
|
| 19 |
+
The recent success of multimodal large language models (MLLMs) has advanced the development of artificial intelligence in vision-language tasks(Dai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib5); Liu et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib19), [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20); Bai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib2); Wang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib30)). These models enable the joint reasoning over visual and textual inputs, and have achieved state-of-the-art performance in various vision-language tasks that require multimodal reasoning(Fu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib6); Yue et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib37); Yu et al., [2024c](https://arxiv.org/html/2507.07151v1#bib.bib36); Lu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib25); Liu et al., [2025](https://arxiv.org/html/2507.07151v1#bib.bib23)). The powerful capabilities of these MLLMs are typically achieved by pretraining separate language and vision models on large-scale datasets, and then aligning their features to enable multimodal reasoning(Liu et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib19), [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20)).
|
| 20 |
+
|
| 21 |
+
Despite the impressive performance of MLLMs, they are prone to hallucinations in real-world scenarios(Huang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib11); Yu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib35)). Hallucinations refer to the phenomenon where MLLMs generate incorrect or misleading information not supported by the input data(Ji et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib12)). Existing works have proposed various methods to alleviate hallucinations in MLLMs, such as improving the quality of training data(Liu et al., [2024a](https://arxiv.org/html/2507.07151v1#bib.bib18); Yu et al., [2024a](https://arxiv.org/html/2507.07151v1#bib.bib34)), adjusting the decoding strategies(Leng et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib15); Huang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib11)), and align the model with human preference(Zhao et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib39); Yu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib35)). These methods mainly target more precise alignment between the features of different modalities to reduce hallucinations.
|
| 22 |
+
|
| 23 |
+
However, existing works on alleviating hallucinations in MLLMs mainly focus on the conflicts between the model responses and the inputs, neglecting a possible source of hallucinations: the conflicts between the inputs from different modalities, which we call modality conflict. For instance, as shown in[Figure 1](https://arxiv.org/html/2507.07151v1#S0.F1 "In Robust Multimodal Large Language Models Against Modality Conflict"), given an image describing a dog surfing on the sea, the user may ask the question “What color is the ball?”. In this case, the question supposes a ball exists in the image, and the model may hallucinate a response “The ball in the image is green”, while there is no ball in the image. We expect the model to recognize the conflict between the visual input and the textual input and give a response like “The image does not contain a ball”. Even with the capability of perfectly aligning features of different modalities, MLLMs may still fall into a dilemma when facing such intrinsically conflicted information between inputs. To this end, we aim to investigate such hallucination phenomenon in MLLMs from the perspective of modality conflict.
|
| 24 |
+
|
| 25 |
+
In this paper, we first give a formal definition of modality conflict in vision-language tasks in terms of objects, attributes, and relationships in the visual and textual inputs. Based on the definition, we construct a dataset named MultiModal Modality Conflict (MMMC) to simulate the modality conflict in vision-language tasks. We evaluate various prevalent MLLMs(Dai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib5); Liu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20); Bai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib2); Wang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib30)) on the MMMC dataset and find that most of them lack the ability to recognize the modality conflict and are prone to hallucinations.
|
To alleviate the hallucination caused by the modality conflict and work towards more robust MLLMs, we investigate the effectiveness of three methods: prompt engineering, supervised fine-tuning, and reinforcement learning. We conduct extensive experiments on the MMMC dataset to analyze the merits and demerits of these methods. Our results show that the reinforcement learning method achieves the best performance in mitigating the hallucination under modality conflict, while the supervised fine-tuning method shows promising and stable performance. Our work sheds light on the unnoticed modality conflict that causes hallucinations and provides more insights into the robustness of MLLMs.

To summarize, the contributions of this paper are as follows:

*   This paper reveals an unnoticed source of hallucinations in MLLMs: modality conflict. A formal definition of modality conflict is presented at the level of objects, attributes, and relationships.

*   We construct a dataset called Multimodal Modality Conflict (MMMC) to simulate the modality conflict in vision-language tasks and evaluate various prevalent MLLMs on the dataset. Results show that most MLLMs are prone to hallucinations under modality conflict.

*   We propose three methods, prompt engineering, supervised fine-tuning, and reinforcement learning, to alleviate the hallucination caused by the modality conflict. Extensive experiments are conducted to analyze the merits and demerits of these methods.



Figure 2: The pipeline of the data construction process and the proposed methods. The data construction process mainly consists of key components detection, components substitution, and answer generation. Prompt engineering, supervised fine-tuning, and reinforcement learning are proposed to alleviate the hallucination caused by the modality conflict. The snowflake icon denotes that the MLLM is frozen, while the flame icon indicates it is fine-tuned.
2 Problem Formulation
---------------------

In this section, we formally define modality conflict in vision-language tasks and detail the data construction process of MMMC. The pipeline of the data construction process and the proposed methods is illustrated in [Figure 2](https://arxiv.org/html/2507.07151v1#S1.F2 "In 1 Introduction ‣ Robust Multimodal Large Language Models Against Modality Conflict").

### 2.1 Modality Conflict

##### General Form

Given a vision-language task consisting of a visual input $\mathcal{V}$ and a textual input $\mathcal{T}$, the task is to predict an answer $\mathcal{A}$. We define modality conflict as the situation where the information contained in $\mathcal{V}$ and $\mathcal{T}$ is inconsistent, leaving the model in a dilemma when predicting the answer $\mathcal{A}$. We define the general form of modality conflict as
$$\text{Info}(\mathcal{V}) \neq \text{Info}(\mathcal{T}). \qquad (1)$$

Concretely, we instantiate the $\text{Info}(\cdot)$ function over objects, attributes, and relationships in the visual and textual inputs, following Shu et al. ([2025](https://arxiv.org/html/2507.07151v1#bib.bib28)). We define these three types of modality conflict as follows.

##### Object Conflict

The object conflict occurs when the textual input involves objects not present in the visual input. For example, the textual input supposes a cat in the image, while the image only contains a dog rather than a cat. We define the object conflict in $\langle\mathcal{V},\mathcal{T}\rangle$ as

$$\text{Obj}(\mathcal{T}) \not\subseteq \text{Obj}(\mathcal{V}), \qquad (2)$$

where $\text{Obj}(\cdot)$ denotes the set of objects in the input.
##### Attribute Conflict

Sometimes the visual and textual inputs may describe the same objects but with different attributes. For example, the textual input describes a red apple, while the image shows a green apple. We deem that attribute conflict arises in $\langle\mathcal{V},\mathcal{T}\rangle$ if

$$\left\{\begin{aligned} \text{Obj}(\mathcal{T}) &\subseteq \text{Obj}(\mathcal{V}) \\ \{\mathcal{O}_{i}\}_{i=1}^{m} &= \text{Obj}(\mathcal{T}) \cap \text{Obj}(\mathcal{V}) \\ \text{Attr}(\mathcal{O}_{i}^{\mathcal{T}}) &\neq \text{Attr}(\mathcal{O}_{i}^{\mathcal{V}}), \quad i = 1, 2, \ldots, m \end{aligned}\right. \qquad (3)$$

where $\{\mathcal{O}_{i}\}_{i=1}^{m}$ is the set of objects contained in both the image and text inputs, $\mathcal{O}_{i}^{\mathcal{V}}$ and $\mathcal{O}_{i}^{\mathcal{T}}$ denote the corresponding objects in the image and text inputs, respectively, and $\text{Attr}(\cdot)$ denotes the attributes of an object.

##### Relationship Conflict

The relationship conflict occurs when the visual and textual inputs describe the same objects with different relationships. For example, the textual input describes a cat on the table, while the image shows a cat on the floor. We formulate the relationship conflict in $\langle\mathcal{V},\mathcal{T}\rangle$ as a situation where

$$\left\{\begin{aligned} \text{Obj}(\mathcal{T}) &\subseteq \text{Obj}(\mathcal{V}) \\ \{\mathcal{O}_{i}\}_{i=1}^{m} &= \text{Obj}(\mathcal{T}) \cap \text{Obj}(\mathcal{V}) \\ \text{Rel}(\mathcal{O}_{i}^{\mathcal{T}}, \mathcal{O}_{j}^{\mathcal{T}}) &\neq \text{Rel}(\mathcal{O}_{i}^{\mathcal{V}}, \mathcal{O}_{j}^{\mathcal{V}}), \quad i, j = 1, 2, \ldots, m \end{aligned}\right. \qquad (4)$$

where $\text{Rel}(\cdot, \cdot)$ denotes the relationship between two objects.
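The three definitions above reduce to simple set comparisons once the components are extracted. The following is a minimal, hypothetical sketch (the paper extracts components with a large language model rather than assuming exact object, attribute, and relation sets are available):

```python
def classify_conflict(text_objs, img_objs, text_attrs=None, img_attrs=None,
                      text_rels=None, img_rels=None):
    """Classify the modality-conflict type for a <V, T> pair.

    Objects are strings; *_attrs map object -> attribute; *_rels map an
    (object, object) tuple -> relationship. Returns 'object', 'attribute',
    'relationship', or 'none'.
    """
    text_objs, img_objs = set(text_objs), set(img_objs)
    # Object conflict (Eq. 2): Obj(T) is not a subset of Obj(V).
    if not text_objs <= img_objs:
        return "object"
    shared = text_objs & img_objs
    # Attribute conflict (Eq. 3): a shared object carries differing attributes.
    for obj in shared:
        t_attr = (text_attrs or {}).get(obj)
        v_attr = (img_attrs or {}).get(obj)
        if t_attr is not None and v_attr is not None and t_attr != v_attr:
            return "attribute"
    # Relationship conflict (Eq. 4): a shared object pair carries differing relations.
    for pair, t_rel in (text_rels or {}).items():
        v_rel = (img_rels or {}).get(pair)
        if set(pair) <= shared and v_rel is not None and t_rel != v_rel:
            return "relationship"
    return "none"
```

On the Figure 1 example, the question's object set {ball} is not a subset of the image's objects, so the pair falls under the object conflict case.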
### 2.2 Data Construction

To simulate the modality conflict in vision-language tasks, we construct a dataset, Multimodal Modality Conflict (MMMC), that contains all three types of conflict discussed above. Specifically, we collect images from the widely used vision-language dataset Visual Genome (Krishna et al., [2017](https://arxiv.org/html/2507.07151v1#bib.bib14)), and construct natural language questions conflicting with the image content, together with corresponding answers. Given the clear definition of modality conflict, we resort to a large language model (we use GPT-4o-mini, a powerful and fast model) to construct the dataset for modality conflict. The construction process is elaborated as follows.

##### Base Question Sampling

To align the format and style of the questions with the original dataset, we adopt a substitution framework to simulate the modality conflict, inspired by Longpre et al. ([2021](https://arxiv.org/html/2507.07151v1#bib.bib24)). We first randomly sample a base question $\mathcal{T}$ from the original dataset for each image $\mathcal{V}$. Key information in the base question is then substituted with conflicting information to construct a new question, as discussed in the following.

##### Key Components Detection

Questions in vision-language tasks usually involve a series of components, including objects, attributes, and relationships. These components should appear in the image to ensure the question is answerable. However, in the modality conflict scenario, the components in the question may not be present in the image. We adopt the large language model to detect the objects in the image and extract the attributes and relationships of the objects.

##### Components Substitution

We substitute the objects, attributes, and relationships in the base question with information conflicting with what is detected from the image. The substitution is conducted by directly prompting a large language model to generate a counterfactual question according to the original question and the key components to be substituted. Additionally, we provide the model with all the objects, attributes, and relationships in the image, drawn from the annotations of the original dataset, to ensure the conflict between the question and the image content.

##### Answer Generation

After obtaining the conflicting question $\mathcal{T}'$, we generate a paired answer $\mathcal{A}'$ for the question. Unlike existing works that generate multiple-choice answers (Zhu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib42)), we collect model responses directly to improve model robustness in free-form generation. It is worth noting that, to avoid the impact of hallucinations in the widely used large vision-language models, we do not directly generate the answer by inputting the image $\mathcal{V}$ and the question $\mathcal{T}'$ to them. Instead, we inform the large language model that the substituted components in the question are not present in the image, and require it to generate the answer $\mathcal{A}'$ based on the conflicting information. The large language model demonstrates the capability of generating answers based solely on textual information, as shown in [Figure 2](https://arxiv.org/html/2507.07151v1#S1.F2 "In 1 Introduction ‣ Robust Multimodal Large Language Models Against Modality Conflict").

##### Postprocessing

The generated questions and answers are then verified by human annotators to ensure the quality of the dataset. The language fluency, the conflict between the question and the image, and the correctness of the answer are all considered in the verification process. Finally, we obtain 20K image-question-answer triples in the MMMC dataset and randomly split them into 18K training samples and 2K testing samples. Visualizations of the statistics of MMMC are provided in [Appendix A](https://arxiv.org/html/2507.07151v1#A1 "Appendix A Visualization of MMMC ‣ Robust Multimodal Large Language Models Against Modality Conflict").
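The substitution framework can be sketched for the object-conflict case. This is a simplified, hypothetical stand-in for the actual pipeline, which prompts GPT-4o-mini with Visual Genome annotations to write a fluent counterfactual question; the key property is that swapping a question object for one absent from the image guarantees $\text{Obj}(\mathcal{T}') \not\subseteq \text{Obj}(\mathcal{V})$.

```python
import random

def make_object_conflict(base_question, question_objects, image_objects,
                         vocabulary, seed=0):
    """Substitute one object in the base question with an object absent
    from the image, producing (conflicting_question, substituted_object),
    or None if no substitution is possible."""
    rng = random.Random(seed)
    absent = sorted(set(vocabulary) - set(image_objects))
    present = [o for o in question_objects if o in base_question]
    if not absent or not present:
        return None
    old = rng.choice(present)       # object to replace in the question
    new = rng.choice(absent)        # object guaranteed not to be in the image
    return base_question.replace(old, new), new

# Example: the image shows a dog surfing; "ball" and "cat" are not in the image.
result = make_object_conflict(
    "What color is the dog?", ["dog"], ["dog", "sea", "surfboard"],
    vocabulary=["dog", "ball", "cat"])
```

The vocabulary and the detected object lists are assumptions here; in the real pipeline they come from the dataset annotations and the key-components-detection step.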
3 Method
--------

We propose three methods, _i.e._, prompt engineering, supervised fine-tuning, and reinforcement learning, to alleviate the hallucination caused by the modality conflict. We first formulate the vision-language task as a conditional generation problem:

$$\mathcal{A} \sim \pi_{\theta}(\mathcal{A} \mid \mathcal{V}, \mathcal{T}) = \prod_{t=1}^{T} \pi_{\theta}(a_{t} \mid \mathcal{V}, \mathcal{T}, a_{<t}), \qquad (5)$$

where the model $\pi_{\theta}$ is required to sequentially generate the answer $\mathcal{A}$ given the visual input $\mathcal{V}$ and the textual input $\mathcal{T}$, and $T$ is the length of the answer $\mathcal{A}$. We then introduce the three methods to improve the robustness of the model against modality conflict.
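Equation (5) says the answer is produced token by token, each token conditioned on the inputs and the tokens generated so far. A toy sketch with a hand-written conditional distribution and greedy decoding (purely illustrative; real MLLMs parameterize $\pi_\theta$ with a neural network and typically sample rather than take the argmax):

```python
def generate(cond_dist, inputs, max_len=10, eos="<eos>"):
    """Greedy decoding: pick a_t = argmax pi(a_t | inputs, a_<t) at each step."""
    answer = []
    for _ in range(max_len):
        probs = cond_dist(inputs, tuple(answer))  # pi(. | V, T, a_<t)
        token = max(probs, key=probs.get)
        if token == eos:
            break
        answer.append(token)
    return answer

def toy_pi(inputs, prefix):
    """A hand-written conditional distribution for the Figure 1 example."""
    table = {
        (): {"The": 0.9, "A": 0.1},
        ("The",): {"image": 0.8, "ball": 0.2},
        ("The", "image"): {"contains": 0.3, "does": 0.6, "<eos>": 0.1},
        ("The", "image", "does"): {"not": 0.9, "<eos>": 0.1},
        ("The", "image", "does", "not"): {"contain": 0.8, "<eos>": 0.2},
        ("The", "image", "does", "not", "contain"): {"a": 0.7, "<eos>": 0.3},
        ("The", "image", "does", "not", "contain", "a"): {"ball": 0.9, "<eos>": 0.1},
    }
    return table.get(prefix, {"<eos>": 1.0})
```

Running `generate(toy_pi, ("<image>", "What color is the ball?"))` walks the table and yields the desired non-hallucinated answer tokens.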
### 3.1 Prompt Engineering

Instruction following (Dai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib5); Liu et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib19), [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20)) is a fundamental capability of MLLMs. Questions are directly input to the model to guide the generation of the answer in [Equation 5](https://arxiv.org/html/2507.07151v1#S3.E5 "In 3 Method ‣ Robust Multimodal Large Language Models Against Modality Conflict"). We propose to instruct the model to check whether the objects, attributes, and relationships in the question are present in the image before generating the answer, using a simple but effective prompt template $p(\mathcal{T})$:

> Please check if the image contains mentioned information and answer the question: $\mathcal{T}$

The prompt engineering method is easy to implement and requires no additional data or computational resources. It is formulated as

$$\mathcal{A} \sim \pi_{\theta}(\mathcal{A} \mid \mathcal{V}, p(\mathcal{T})). \qquad (6)$$
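Concretely, the template $p(\mathcal{T})$ amounts to a plain string wrapper around the user question:

```python
def p(question):
    """Wrap the question with the conflict-checking instruction of Eq. (6)."""
    return ("Please check if the image contains mentioned information "
            "and answer the question: " + question)
```

The wrapped string, together with the image, is then fed to the frozen MLLM in place of the raw question.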
However, the prompt engineering method may not be effective in all cases. Its performance is heavily dependent on the foundation model and the quality of the prompt. Besides, the potential of the training data is not exploited by prompt engineering. Therefore, we explore methods with additional training to fully leverage the data and improve the robustness of the model against modality conflict.

### 3.2 Supervised Fine-Tuning

Supervised fine-tuning aims to learn a mapping from the input to the output by minimizing the discrepancy between the model predictions and the ground-truth labels. Existing works have shown the superiority of supervised fine-tuning in resolving knowledge conflicts in LLMs (Longpre et al., [2021](https://arxiv.org/html/2507.07151v1#bib.bib24)).

We propose to fine-tune the model on the MMMC dataset with the language modeling objective, formulated as

$$\pi_{\theta}^{*} = \arg\min_{\theta}\; \mathbb{E}_{\langle\mathcal{V},\mathcal{T},\mathcal{A}\rangle \sim \mathcal{D}}\left[-\log \pi_{\theta}(\mathcal{A} \mid \mathcal{V}, \mathcal{T})\right], \qquad (7)$$

where $\langle\mathcal{V},\mathcal{T},\mathcal{A}\rangle$ is a triplet of image, question, and answer in the MMMC dataset $\mathcal{D}$. With this objective, the model is optimized by gradient descent to align its predictions with the ground-truth labels, which is expected to improve its robustness against modality conflict.
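By the factorization in Equation (5), the per-sample loss in Equation (7) is the sum of per-token negative log-probabilities of the ground-truth answer. A stdlib-only sketch (actual training computes these probabilities with the MLLM and backpropagates through them):

```python
import math

def nll_loss(token_probs):
    """-log pi(A | V, T) for one <V, T, A> triple, given the probability the
    model assigns to each ground-truth answer token a_t."""
    return -sum(math.log(p) for p in token_probs)
```

A model that assigns probability 1 to every ground-truth token incurs zero loss, and the loss grows as the model becomes less confident in the reference answer.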
Despite its effectiveness, supervised fine-tuning mainly emphasizes adapting the style of the model to the target domain (Zhou et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib41)), while the performance improvement on unseen data may be limited.
| 139 |
+
|
| 140 |
+
### 3.3 Reinforcement Learning
|
| 141 |
+
|
| 142 |
+
Inspired by the success of reinforcement learning in alignment with human preference(Ouyang et al., [2022](https://arxiv.org/html/2507.07151v1#bib.bib26); Stiennon et al., [2020](https://arxiv.org/html/2507.07151v1#bib.bib29); Yu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib35)) and improving the robustness of large language models(Zhang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib38)), we resort to reinforcement learning to further improve the robustness of the model against modality conflict. Specifically, the conditional generation problem in[Equation 5](https://arxiv.org/html/2507.07151v1#S3.E5 "In 3 Method ‣ Robust Multimodal Large Language Models Against Modality Conflict") can be formulated as a Markov Decision Process (MDP):
|
| 143 |
+
|
| 144 |
+
𝒜∼π θ(𝒜|𝒱,𝒯)⇔⟨S,A,r,P,ρ 0,γ⟩,⇔similar-to 𝒜 subscript 𝜋 𝜃 conditional 𝒜 𝒱 𝒯 𝑆 𝐴 𝑟 𝑃 subscript 𝜌 0 𝛾\mathcal{A}\sim\pi_{\theta}(\mathcal{A}|\mathcal{V},\mathcal{T})% \Leftrightarrow\langle S,A,r,P,\rho_{0},\gamma\rangle,caligraphic_A ∼ italic_π start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT ( caligraphic_A | caligraphic_V , caligraphic_T ) ⇔ ⟨ italic_S , italic_A , italic_r , italic_P , italic_ρ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_γ ⟩ ,(8)
|
| 145 |
+
|
| 146 |
+
with the state $s_{t}=(\mathcal{V},\mathcal{T},a_{<t})$, the action $a_{t}$, the reward $r_{t}$, the transition probability $P(s_{t+1}|s_{t},a_{t})$, the initial state distribution $\rho_{0}(s_{0}):\langle\mathcal{V},\mathcal{T}\rangle\sim\mathcal{D}$, and the discount factor $\gamma$. We propose to optimize the model with reinforcement learning by maximizing the expected reward, formulated as

$$\pi_{\theta}^{*}=\arg\max_{\theta}\,\mathbb{E}_{s_{0}\sim\rho_{0}}\,\mathbb{E}_{a_{t}\sim\pi_{\theta}(a_{t}|s_{t})}\left[\sum_{t=1}^{T}\gamma^{t}r_{t}\right].\tag{9}$$

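As a concrete illustration of the objective above, the discounted return inside the expectation can be computed from a single sampled trajectory. The sketch below is illustrative only; `rewards` is a hypothetical per-step reward sequence, indexed from $t=1$ as in the equation:

```python
def discounted_return(rewards, gamma=1.0):
    """Compute sum_{t=1}^{T} gamma^t * r_t for one sampled trajectory."""
    return sum(gamma ** t * r for t, r in enumerate(rewards, start=1))

# With a terminal-only reward (all zeros except the last step),
# the return reduces to gamma^T times the final reward.
rewards = [0.0, 0.0, 0.0, 1.0]  # hypothetical 4-step episode
print(discounted_return(rewards, gamma=0.99))
```

With the terminal-only reward defined in the paper, maximizing this return is equivalent to maximizing the expected judgment on the final answer.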
To alleviate the hallucination caused by modality conflict, we assign a reward function that encourages the model to generate answers semantically consistent with the reference answers in the MMMC dataset and penalizes hallucinated responses. The reward function is defined as

$$r_{t}=\begin{cases}+1, & \text{if } t=T \land a_{\leq t} \text{ is consistent with } \mathcal{A}\\ -1, & \text{if } t=T \land a_{\leq t} \text{ is not consistent with } \mathcal{A}\\ 0, & \text{otherwise,}\end{cases}\tag{10}$$

where $a_{\leq t}$ denotes the answer generated up to time step $t$ and $\mathcal{A}$ is the ground-truth answer in the MMMC dataset. We prompt a pretrained large language model to judge the semantic consistency between the generated and ground-truth answers and assign the reward based on its judgment. Detailed prompts are listed in [Appendix B](https://arxiv.org/html/2507.07151v1#A2 "Appendix B Prompts ‣ Robust Multimodal Large Language Models Against Modality Conflict").

With these base components, we can optimize the model with arbitrary reinforcement learning algorithms, such as Proximal Policy Optimization (PPO) (Schulman et al., [2017](https://arxiv.org/html/2507.07151v1#bib.bib27)) and REINFORCE (Williams, [1992](https://arxiv.org/html/2507.07151v1#bib.bib31)). We adopt an optimized version of the REINFORCE algorithm, REINFORCE++ (Hu, [2025](https://arxiv.org/html/2507.07151v1#bib.bib9)), for its light computational cost and good performance.

In reinforcement learning, the model is optimized by interacting with the environment and receiving rewards based on the quality of the generated answers. Because the training data are sampled from the model itself, reinforcement learning is expected to produce more diverse and robust answers that share the semantics of the ground-truth answers.

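The terminal reward of Eq. (10) can be sketched as a small function. This is a minimal sketch, not the paper's implementation: `judge_consistent` is a hypothetical stand-in for the LLM judge that decides semantic consistency, and the toy judge below replaces the LLM call with exact string matching:

```python
def terminal_reward(t, T, answer, reference, judge_consistent):
    """Reward of Eq. (10): +1/-1 at the final step T, 0 at every earlier step.

    `judge_consistent` stands in for the LLM judge that decides whether the
    generated answer is semantically consistent with the reference answer.
    """
    if t < T:
        return 0
    return 1 if judge_consistent(answer, reference) else -1

# Toy judge: case-insensitive exact match instead of an LLM call.
toy_judge = lambda a, ref: a.strip().lower() == ref.strip().lower()
print(terminal_reward(5, 5, "A red car", "a red car", toy_judge))  # → 1
```

In practice the judge is a prompted language model, so the boundary between "consistent" and "not consistent" is as reliable as the judge itself.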
4 Experiments
-------------

### 4.1 Setup

##### Models

We evaluate several prevalent MLLMs, the InstructBLIP (Dai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib5)), LLaVA-v1.5 (Liu et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib19)), LLaVA-NeXT (Liu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20)), and Qwen2-VL-Instruct (Wang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib30)) series, on the MMMC dataset. InstructBLIP adopts a Q-Former (Li et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib16)) to compress the image into 32 tokens and bridge the vision and language features, while LLaVA-v1.5, LLaVA-NeXT, and Qwen2-VL-Instruct encode the image and text separately with transformer architectures and conduct multimodal reasoning with additional adapter modules. We use the 7B version of each model in the evaluation. To investigate the impact of model size, we also evaluate the 2B version of Qwen2-VL-Instruct. Additionally, we include the widely used GPT-4o as a baseline.

##### Implementation Details

We implement all proposed methods using Hugging Face Transformers and the OpenRLHF library (Hu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib10)). For supervised fine-tuning, we use the Adam optimizer with a learning rate of $5\times 10^{-6}$ and a batch size of 8, and train the model for one epoch on the MMMC dataset with 10,000 training samples, except in the ablation study. For reinforcement learning, we use the Adam optimizer with a learning rate of $9.65\times 10^{-6}$ and a batch size of 8, and train on only 1,000 MMMC training samples, since longer reinforcement learning causes model collapse. We set the KL coefficient to 0.01 and the maximum response length to 128. Both the supervised fine-tuning and reinforcement learning methods are trained with LoRA (Hu et al., [2021](https://arxiv.org/html/2507.07151v1#bib.bib8)). We use Llama-3.3-70B-Instruct as the reward model.

Table 1: Explanation of each level of the overall quality score in the LLM-as-a-Judge evaluation.

| Score | Quality | Detailed Description |
|---|---|---|
| 0 | Not Valid | Unnatural, incoherent, or unreadable |
| 1 | Terrible | Irrelevant to the question asked |
| 2 | Wrong | Different from the reference answer, but still relevant to the question |
| 3 | Right | Has the same meaning as the reference, but may be phrased differently |
| 4 | Excellent | Same as the reference, or phrased more naturally |

##### Evaluation Protocol
Given the reference responses in the MMMC dataset, we adopt the widely used ROUGE-L (Lin, [2004](https://arxiv.org/html/2507.07151v1#bib.bib17)) F-measure to evaluate the longest-common-subsequence overlap between the model responses and the reference responses. However, this traditional metric does not precisely capture semantic similarity.

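To make the metric concrete, ROUGE-L can be sketched with a standard dynamic-programming LCS over token lists. This is a minimal sketch of the F-measure (with equal weight on precision and recall, i.e. $\beta=1$), not the reference ROUGE implementation:

```python
def rouge_l_f(candidate, reference):
    """ROUGE-L F-measure from the longest common subsequence of token lists."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table: dp[i][j] = LCS length of c[:i] and r[:j].
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c, 1):
        for j, rt in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ct == rt else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

print(rouge_l_f("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.833
```

Because only the surface token overlap is measured, a paraphrase with the same meaning but different wording scores poorly, which motivates the LLM-based evaluation below.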
To present a more intuitive evaluation, we adopt LLM-as-a-Judge (Zheng et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib40)), a large language model pretrained on a large-scale dataset and fine-tuned on human preference data, to evaluate the quality of the model responses. Concretely, to evaluate the robustness of the model against modality conflict, we calculate the hallucination rate (Hallu-Rate), defined as the percentage of hallucinated responses among the model responses. A response is considered hallucinated if it erroneously assumes the existence of objects, attributes, or relationships that are not present in the image, presenting plausible but incorrect information.

Additionally, we ask the LLM judge to evaluate the overall quality of the model responses with respect to fluency, relevance, and correctness, represented by a score ranging from 0 to 4. We list the criteria for these scores in [Table 1](https://arxiv.org/html/2507.07151v1#S4.T1 "In Implementation Details ‣ 4.1 Setup ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"). The average scores of the LLM judge are reported as LLM-Judge. To obtain more robust results, we adopt the strong closed-source GPT-4o series and the open-source Llama-3.3-70B to perform the Hallu-Rate and LLM-Judge evaluations. All prompts used in the evaluation are listed in [Appendix B](https://arxiv.org/html/2507.07151v1#A2 "Appendix B Prompts ‣ Robust Multimodal Large Language Models Against Modality Conflict").

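Given the judge's per-response verdicts, the two reported metrics reduce to simple averages. The sketch below assumes a hypothetical parsed judge output of (hallucinated flag, quality score) pairs; it is an illustration of the aggregation, not the paper's evaluation code:

```python
def aggregate_judgments(judgments):
    """Aggregate per-response LLM-judge outputs into the two reported metrics.

    `judgments` is a list of (hallucinated: bool, quality: int in 0..4) pairs,
    a hypothetical format for the judge's parsed output.
    Returns (Hallu-Rate in percent, mean LLM-Judge score).
    """
    n = len(judgments)
    hallu_rate = 100.0 * sum(h for h, _ in judgments) / n
    llm_judge = sum(q for _, q in judgments) / n
    return hallu_rate, llm_judge

print(aggregate_judgments([(True, 1), (False, 3), (False, 4), (True, 2)]))
# → (50.0, 2.5)
```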
### 4.2 Main Results
Table 2: Performance comparison of different methods on the MMMC dataset. We conduct experiments on Prompt Engineering (PE), Supervised Fine-Tuning (SFT), and Reinforcement Learning (RL). The performance of GPT-4o is also reported for comparison. The results of SFT and RL are averaged across three runs with different seeds, and the standard deviations are reported in parentheses. ↑ denotes higher is better, while ↓ denotes lower is better. The best performance for each model is highlighted in bold.

| Model | Method | ROUGE-L (%) ↑ | Hallu-Rate (%) ↓ (Llama) | Hallu-Rate (%) ↓ (GPT) | LLM-Judge ↑ (Llama) | LLM-Judge ↑ (GPT) |
|---|---|---|---|---|---|---|
| GPT-4o | Base | 23.76 | **59.40** | 57.00 | 2.12 | 2.39 |
| | PE | **23.98** | 60.10 | **56.95** | **2.13** | **2.42** |
| InstructBLIP-7B | Base | **13.89** | 82.10 | 70.55 | **1.81** | 1.85 |
| | PE | **13.89** | 82.30 | 69.50 | 1.79 | **1.86** |
| | SFT | 8.86 (0.43) | 85.48 (0.37) | 70.33 (0.74) | **1.81** (0.01) | 1.76 (0.05) |
| | RL | 5.65 (3.08) | **57.62** (18.58) | **57.18** (18.07) | 1.01 (0.69) | 1.48 (0.96) |
| LLaVA-v1.5-7B | Base | **28.54** | 93.25 | 83.60 | 1.73 | 1.81 |
| | PE | 25.83 | 86.95 | 84.70 | 1.94 | 1.93 |
| | SFT | 16.90 (0.52) | 59.37 (1.02) | 52.28 (0.92) | 2.27 (0.02) | 2.27 (0.02) |
| | RL | 23.53 (3.49) | **33.87** (2.53) | **29.78** (2.04) | **2.58** (0.04) | **2.74** (0.04) |
| LLaVA-NeXT-7B | Base | 18.08 | 69.65 | 67.00 | 1.92 | 2.24 |
| | PE | 20.91 | 50.50 | 50.00 | 2.43 | 2.69 |
| | SFT | 22.25 (0.07) | 45.93 (0.47) | 42.83 (0.78) | 2.48 (0.01) | 2.44 (0.04) |
| | RL | **25.52** (1.66) | **33.83** (1.99) | **31.27** (2.03) | **2.65** (0.02) | **2.86** (0.05) |
| Qwen2-VL-Instruct-2B | Base | 25.20 | 46.55 | 40.55 | 2.07 | 2.26 |
| | PE | **30.12** | 62.10 | 59.95 | 2.26 | 2.40 |
| | SFT | 29.32 (0.25) | 26.85 (0.60) | 32.78 (0.74) | 2.71 (0.02) | 2.76 (0.02) |
| | RL | 22.65 (1.65) | **18.00** (5.19) | **16.78** (4.30) | **2.73** (0.06) | **2.97** (0.08) |
| Qwen2-VL-Instruct-7B | Base | 24.73 | 52.35 | 47.95 | 2.25 | 2.47 |
| | PE | **28.65** | 40.10 | 37.35 | 2.52 | 2.80 |
| | SFT | 28.60 (0.10) | 28.58 (0.34) | 32.02 (0.69) | **2.71** (0.01) | 2.74 (0.02) |
| | RL | 18.89 (0.82) | **23.52** (5.63) | **20.45** (5.09) | 2.66 (0.07) | **2.86** (0.10) |


|
| 219 |
+
|
| 220 |
+
Figure 3: Visualization of the alignment tax for supervised fine-tuning (SFT) and reinforcement learning (RL). We plot the performance of the base model (Base) with blue dashed regular polygon, and the performance of the SFT and RL models with orange and green solid polygons, respectively. All scores are normalized to the base model for intuitive comparison.
##### Robustness of Prevalent Foundation Models Against Modality Conflict

As the “Base” results in [Table 2](https://arxiv.org/html/2507.07151v1#S4.T2 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict") show, the prevalent MLLMs, InstructBLIP-7B, LLaVA-v1.5-7B, LLaVA-NeXT-7B, Qwen2-VL-Instruct-7B, and even GPT-4o, perform poorly on the MMMC dataset. All models exhibit a Hallu-Rate above 40%, indicating that they are prone to hallucinations under modality conflict. The LLM-Judge scores are below 2.5, showing that most of their responses are judged as wrong or of low quality. The ROUGE-L scores are also relatively low, suggesting that the model responses are not well aligned with the reference responses.

##### Performance Improvements with Proposed Methods
We then evaluate the effectiveness of the proposed methods, prompt engineering, supervised fine-tuning, and reinforcement learning, on the MMMC dataset. As shown in [Table 2](https://arxiv.org/html/2507.07151v1#S4.T2 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"), all three methods significantly improve the robustness of the prevalent MLLMs on the MMMC dataset. The Hallu-Rate is reduced by 10% to 50%, and the LLM-Judge scores improve by 0.4 to 0.9, indicating that the overall quality of the model responses is enhanced. The consistent conclusions from the LLM judges based on GPT-4o and Llama-3.3-70B further validate the reliability of our evaluation. The ROUGE-L scores also improve with several methods, showing that the model responses are better aligned with the reference responses. We further analyze the merits and demerits of these methods in the following section.

### 4.3 Analysis
##### Further Analysis on Proposed Methods
Prompt engineering is a basic method that may improve the robustness of the model against modality conflict, reducing the Hallu-Rate and improving LLM-Judge scores in most cases. It is easy to implement and requires no additional data or computational resources. However, as shown in [Table 2](https://arxiv.org/html/2507.07151v1#S4.T2 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"), the improvement from prompt engineering depends heavily on the foundation model, and its performance can be unstable. Prompt engineering brings significant improvement to Qwen2-VL-Instruct-7B and the LLaVA series, but increases the Hallu-Rate of the smaller Qwen2-VL-Instruct-2B model, and the performance on InstructBLIP is nearly the same as the base model. Inspecting the generated responses of InstructBLIP, we find that the model tends to generate short and simple responses regardless of the prompt, which may explain the limited improvement from prompt engineering. Its effectiveness is also limited on GPT-4o because the model is over-robust: it tends to generate similar responses for different phrasings of the same instruction.


|
| 237 |
+
|
| 238 |
+
(a)SFT
|
| 239 |
+
|
| 240 |
+

|
| 241 |
+
|
| 242 |
+
(b)RL
|
| 243 |
+
|
| 244 |
+
Figure 4: Training curves of supervised fine-tuning (SFT) and reinforcement learning (RL) on the MMMC dataset. The training loss of SFT, reward, response length and mean KL divergence of RL are plotted. We plot the average training curves over three runs with different seeds. The solid lines and shaded areas represent the mean values and standard deviations, respectively. All curves are smoothed with exponential moving average for better visualization.
Supervised fine-tuning (SFT) is a more advanced method that can further improve the robustness of the model responses. It requires additional data and computational resources, but achieves better performance than prompt engineering. As shown in [Table 2](https://arxiv.org/html/2507.07151v1#S4.T2 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"), SFT reduces the Hallu-Rate and improves the LLM-Judge scores of the LLaVA-v1.5, LLaVA-NeXT, and Qwen2-VL-Instruct series. However, InstructBLIP suffers a performance loss after SFT. We speculate that the pre-training of InstructBLIP does not inject the capability of recognizing conflicts between modalities, and the fine-tuning data in MMMC is not sufficient to teach this new skill.

Besides, SFT restricts the model behavior to the fine-tuning data, which may lead to overfitting and limited generalization. Reinforcement learning (RL) samples the responses from the model itself and provides more diverse and informative data for training. It requires more computational resources but may achieve better performance than SFT. As shown in[Table 2](https://arxiv.org/html/2507.07151v1#S4.T2 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"), RL dramatically reduces the Hallu-Rate and improves the LLM-Judge scores of all models, especially on Qwen2-VL-Instruct series. The main reason for the best performance of RL is that it explores more diverse responses in the training process than SFT, which helps the model to recognize the conflicts between modalities and enhance its robustness.
##### Performance Breakdowns for Different Conflict Types
To gain a deeper understanding of how each approach handles various conflict types, we provide a detailed performance analysis for each category of conflict in [Appendix C](https://arxiv.org/html/2507.07151v1#A3 "Appendix C Performance on Separate Conflict Types ‣ Robust Multimodal Large Language Models Against Modality Conflict"). The analyses indicate that the conclusions drawn from individual subsets of conflict types align closely with those derived from the entire dataset. Notably, MLLMs perform best on object-conflict types, attribute conflicts present a moderate level of difficulty, and relationship conflicts pose a significant challenge: performance on them is markedly poorer than on object and attribute conflicts. This drop can be attributed to intricate relational dynamics that the models struggle to interpret accurately. These observations suggest that while substantial progress has been made on simpler conflict types such as object conflicts, handling relationship-focused conflicts still needs considerable improvement.

##### Alignment Tax
Both the SFT and RL methods require parameter updates to align the model with the training data, and thus introduce the alignment tax, which is defined as the performance loss of the model on the original task after the fine-tuning(Ouyang et al., [2022](https://arxiv.org/html/2507.07151v1#bib.bib26)). To analyze the alignment tax of our methods, we test the performance changes of the models on a wide range of vision-language tasks after the fine-tuning, including HallusionBench(Guan et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib7)), MMBench(Liu et al., [2025](https://arxiv.org/html/2507.07151v1#bib.bib23)), MMStar(Chen et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib4)), MMMU(Yue et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib37)), MathVista(Lu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib25)), OCRBench(Liu et al., [2024d](https://arxiv.org/html/2507.07151v1#bib.bib22)), AI2D(Kembhavi et al., [2016](https://arxiv.org/html/2507.07151v1#bib.bib13)), MMVet(Yu et al., [2024c](https://arxiv.org/html/2507.07151v1#bib.bib36)) and MME(Fu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib6)).
We visualize the performance change of the SFT and RL methods in [Figure 3](https://arxiv.org/html/2507.07151v1#S4.F3 "In 4.2 Main Results ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"). As the figure shows, the alignment tax also depends heavily on the foundation model. For example, InstructBLIP suffers greatly, losing half of its performance on MMBench with SFT and three-quarters of its performance on AI2D and MMVet with RL. By contrast, Qwen2-VL-Instruct-7B shows negligible performance change on most tasks after fine-tuning, indicating that it is more robust to the alignment tax. Surprisingly, the performance of LLaVA-NeXT on HallusionBench is even improved after SFT and RL, and Qwen2-VL-Instruct-2B shows a similar improvement on MMVet with SFT. LLaVA-v1.5-7B is the most stable model, demonstrating consistent performance across multiple benchmarks. These results are also strong evidence that the capability of recognizing conflicts between modalities helps the model reduce other types of hallucinations.

##### Training Stability
We further analyze the training stability of the SFT and RL methods on the MMMC dataset. As shown in[Figure 4](https://arxiv.org/html/2507.07151v1#S4.F4 "In Further Analysis on Proposed Methods ‣ 4.3 Analysis ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"), in general, the training loss of SFT is relatively stable, while the reward of RL fluctuates a lot. The response length of RL is also unstable, indicating that the model may generate responses with different lengths during the training process. This clearly shows that although RL can potentially reach higher performance, it is less stable than SFT.
The most noticeable phenomenon is that InstructBLIP-7B's response length jumps to the maximum length we set at around 300 episodes, which is also when the reward of RL reaches its lowest point and the mean Kullback-Leibler (KL) divergence peaks. To investigate the cause, we analyze the generated responses of InstructBLIP-7B and find that, at this point, the model begins to generate longer responses filled with tedious, repeated, and irrelevant information. We deem that the model falls into a local optimum and has completely collapsed at that time. Some responses generated by the model are shown in [Appendix D](https://arxiv.org/html/2507.07151v1#A4 "Appendix D Examples ‣ Robust Multimodal Large Language Models Against Modality Conflict") for a better understanding of this phenomenon. The RL training curves of the other models are more stable, showing that they are well trained with the reward signal. The fluctuation of the reward and response length may be caused by the model's exploration during training, which helps it recognize conflicts between modalities and enhances its robustness.

Table 3: Hallu-Rate and LLM-Judge of the LLaVA-NeXT-7B model with different numbers of training episodes on the MMMC dataset. Both Hallu-Rate and LLM-Judge are evaluated by the GPT-4o series.

| Training Episodes | Hallu-Rate ↓ | LLM-Judge ↑ |
|---|---|---|
| 2000 | 63.90 | 2.31 |
| 5000 | 40.30 | 2.71 |
| 10000 | 28.50 | 2.86 |
| 20000 | 30.95 | 2.78 |
| 100000 | 25.55 | 2.81 |

##### Impact of the Number of Training Episodes on RL

As the training of RL is unstable and prone to model collapse, we examine a fundamental aspect of RL training: the impact of the number of training episodes on model performance. We conduct experiments on the LLaVA-NeXT-7B model with different numbers of training episodes and report the Hallu-Rate and LLM-Judge scores in [Table 3](https://arxiv.org/html/2507.07151v1#S4.T3 "In Training Stability ‣ 4.3 Analysis ‣ 4 Experiments ‣ Robust Multimodal Large Language Models Against Modality Conflict"). As the table shows, the Hallu-Rate drops by more than 35 percentage points as the training episodes increase from 2000 to 10000, and the LLM-Judge score improves by about 0.5. However, performance does not further improve with more training episodes, indicating that the model may fall into a local optimum after 10000 episodes. We speculate that the model needs more diverse and informative training data to further improve its robustness against modality conflict.

5 Related Work
--------------

### 5.1 Multimodal Large Language Models
With the significant progress of large language models, multimodal large language models (MLLMs) have been developed based on the language capabilities of large language models and the visual understanding of large vision models. Given the pretrained language and vision models, training of most MLLMs involves a pretraining stage to align the features of different modalities(Bai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib2); Li et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib16); Alayrac et al., [2022](https://arxiv.org/html/2507.07151v1#bib.bib1)), and a fine-tuning stage to inject the instruction following abilities into the model(Dai et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib5); Liu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib20); Guan et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib7); Liu et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib19)). Despite their success in various vision-language tasks, MLLMs are prone to hallucinations(Ji et al., [2023](https://arxiv.org/html/2507.07151v1#bib.bib12)), where the model generates content contradicting the input.
### 5.2 Hallucinations in MLLMs
Numerous works have been proposed to alleviate hallucinations in MLLMs from the perspectives of training data (Liu et al., [2024a](https://arxiv.org/html/2507.07151v1#bib.bib18); Yu et al., [2024a](https://arxiv.org/html/2507.07151v1#bib.bib34)), decoding strategies (Leng et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib15); Huang et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib11)), and human preference alignment (Zhao et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib39); Yu et al., [2024b](https://arxiv.org/html/2507.07151v1#bib.bib35)). However, these efforts mainly focus on conflicts between the model responses and the inputs, neglecting another possible source of hallucinations: conflicts between the inputs from different modalities. Even with the capability of perfectly aligning features, MLLMs fall into a dilemma when facing intrinsically conflicting information. This paper investigates the hallucination phenomenon in MLLMs from the perspective of modality conflict.

### 5.3 Knowledge Conflict
Knowledge conflict (Xu et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib33); Longpre et al., [2021](https://arxiv.org/html/2507.07151v1#bib.bib24); Chen et al., [2022](https://arxiv.org/html/2507.07151v1#bib.bib3); Xie et al., [2024](https://arxiv.org/html/2507.07151v1#bib.bib32)) is a long-discussed topic in the area of large language models. Longpre et al. ([2021](https://arxiv.org/html/2507.07151v1#bib.bib24)) formalize the problem of knowledge conflicts between contextual and learned information. Chen et al. ([2022](https://arxiv.org/html/2507.07151v1#bib.bib3)) extend the problem to multi-source context scenarios and propose a calibration model to detect the phenomenon. Analogously, conflicts emerge when inconsistent information is presented in multimodal tasks, leading to hallucinations in most MLLMs. Zhu et al. ([2024](https://arxiv.org/html/2507.07151v1#bib.bib42)) define the problem of cross-modality parametric knowledge conflict, detect it with multiple-choice question answering, and propose a dynamic contrastive decoding method to mitigate its impact. Liu et al. ([2024c](https://arxiv.org/html/2507.07151v1#bib.bib21)) specify the contradiction between visual information and commonsense knowledge in the language model. These efforts neglect the impact of intrinsic conflicts between modalities that lead MLLMs to hallucinate. By contrast, we formalize the concept of modality conflict, collect a dataset to simulate this situation, and evaluate the robustness of prevalent MLLMs against it.

6 Conclusion
------------

In this paper, we investigate the hallucination phenomenon in multimodal large language models (MLLMs) from the perspective of modality conflict. We first give a formal definition of knowledge conflicts in vision-language tasks and construct a dataset named MMMC. We then propose three methods, _i.e._, prompt engineering, supervised fine-tuning, and reinforcement learning, to alleviate the hallucination caused by modality conflict. We evaluate the proposed methods on the MMMC dataset and analyze the results in terms of overlap with the reference responses, hallucination rate, and overall response quality. The results show that the proposed methods significantly improve the robustness of prevalent MLLMs on the MMMC dataset. We further analyze the merits and demerits of these methods and provide insights for future research.

Acknowledgements
----------------

This work was supported by National Key R&D Program of China under Contract 2022ZD0119802, National Natural Science Foundation of China under Contract 623B2097, and the Youth Innovation Promotion Association CAS. It was also supported by the GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC.
Impact Statement
----------------

This paper presents work with a goal to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
References
----------

* Alayrac et al. (2022) Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., Ring, R., Rutherford, E., Cabi, S., Han, T., Gong, Z., Samangooei, S., Monteiro, M., Menick, J., Borgeaud, S., Brock, A., Nematzadeh, A., Sharifzadeh, S., Binkowski, M., Barreira, R., Vinyals, O., Zisserman, A., and Simonyan, K. Flamingo: a visual language model for few-shot learning. In _Proceedings of the Advances in Neural Information Processing Systems_, 2022.
* Bai et al. (2023) Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., and Zhou, J. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. _arXiv:2308.12966_, 2023.
* Chen et al. (2022) Chen, H.-T., Zhang, M., and Choi, E. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2022.
* Chen et al. (2024) Chen, L., Li, J., Dong, X., Zhang, P., Zang, Y., Chen, Z., Duan, H., Wang, J., Qiao, Y., Lin, D., and Zhao, F. Are we on the right way for evaluating large vision-language models? In _Proceedings of the Advances in Neural Information Processing Systems_, 2024.
* Dai et al. (2023) Dai, W., Li, J., Li, D., Tiong, A. M. H., Zhao, J., Wang, W., Li, B., Fung, P., and Hoi, S. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. In _Proceedings of the Advances in Neural Information Processing Systems_, 2023.
* Fu et al. (2024) Fu, C., Chen, P., Shen, Y., Qin, Y., Zhang, M., Lin, X., Yang, J., Zheng, X., Li, K., Sun, X., Wu, Y., and Ji, R. MME: A comprehensive evaluation benchmark for multimodal large language models. _arXiv:2306.13394_, 2024.
* Guan et al. (2024) Guan, T., Liu, F., Wu, X., Xian, R., Li, Z., Liu, X., Wang, X., Chen, L., Huang, F., Yacoob, Y., Manocha, D., and Zhou, T. HallusionBench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. IEEE, 2024.
* Hu et al. (2021) Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. _arXiv:2106.09685_, 2021.
* Hu (2025) Hu, J. REINFORCE++: A simple and efficient approach for aligning large language models. _arXiv:2501.03262_, 2025.
* Hu et al. (2024) Hu, J., Wu, X., Zhu, Z., Xianyu, Wang, W., Zhang, D., and Cao, Y. OpenRLHF: An easy-to-use, scalable and high-performance RLHF framework. _arXiv:2405.11143_, 2024.
* Huang et al. (2024) Huang, Q., Dong, X., Zhang, P., Wang, B., He, C., Wang, J., Lin, D., Zhang, W., and Yu, N. OPERA: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2024.
* Ji et al. (2023) Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., and Fung, P. Survey of hallucination in natural language generation. _ACM Computing Surveys_, 55(12):1–38, 2023.
* Kembhavi et al. (2016) Kembhavi, A., Salvato, M., Kolve, E., Seo, M., Hajishirzi, H., and Farhadi, A. A diagram is worth a dozen images. In _Proceedings of the European Conference on Computer Vision_. Springer International Publishing, 2016.
* Krishna et al. (2017) Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., Bernstein, M. S., and Fei-Fei, L. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. _International Journal of Computer Vision_, 123(1):32–73, 2017.
* Leng et al. (2024) Leng, S., Zhang, H., Chen, G., Li, X., Lu, S., Miao, C., and Bing, L. Mitigating object hallucinations in large vision-language models through visual contrastive decoding. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. IEEE, 2024.
* Li et al. (2023) Li, J., Li, D., Savarese, S., and Hoi, S. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In _Proceedings of the International Conference on Machine Learning_. PMLR, 2023.
* Lin (2004) Lin, C.-Y. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_. Association for Computational Linguistics, 2004.
* Liu et al. (2024a) Liu, F., Lin, K., Li, L., Wang, J., Yacoob, Y., and Wang, L. Mitigating hallucination in large multi-modal models via robust instruction tuning. In _Proceedings of the International Conference on Learning Representations_, 2024a.
* Liu et al. (2023) Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. In _Proceedings of the Advances in Neural Information Processing Systems_, 2023.
* Liu et al. (2024b) Liu, H., Li, C., Li, Y., and Lee, Y. J. Improved baselines with visual instruction tuning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2024b.
|
| 332 |
+
* Liu et al. (2024c) Liu, X., Wang, W., Yuan, Y., Huang, J.-t., Liu, Q., He, P., and Tu, Z. Insight over sight? exploring the vision-knowledge conflicts in multimodal llms. _arXiv:2410.08145_, 2024c.
|
| 333 |
+
* Liu et al. (2024d) Liu, Y., Li, Z., Huang, M., Yang, B., Yu, W., Li, C., Yin, X.-C., Liu, C.-L., Jin, L., and Bai, X. OCRBench: on the hidden mystery of ocr in large multimodal models. _Science China Information Sciences_, 67(12):220102, 2024d.
|
| 334 |
+
* Liu et al. (2025) Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C., Liu, Z., Chen, K., and Lin, D. MMBench: Is your multi-modal model an all-around player? In _Proceedings of the European Conference on Computer Vision_. Springer Nature Switzerland, 2025.
|
| 335 |
+
* Longpre et al. (2021) Longpre, S., Perisetla, K., Chen, A., Ramesh, N., DuBois, C., and Singh, S. Entity-based knowledge conflicts in question answering. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2021.
|
| 336 |
+
* Lu et al. (2024) Lu, P., Bansal, H., Xia, T., Liu, J., Li, C., Hajishirzi, H., Cheng, H., Chang, K.-W., Galley, M., and Gao, J. MATHVISTA: Evaluating mathematical reason- ing of foundation models in visual contexts. In _Proceedings of the International Conference on Learning Representations_, 2024.
|
| 337 |
+
* Ouyang et al. (2022) Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C.L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. In _Proceedings of the Advances in Neural Information Processing Systems_, 2022.
|
| 338 |
+
* Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. _arXiv:1707.06347_, 2017.
|
| 339 |
+
* Shu et al. (2025) Shu, D., Zhao, H., Hu, J., Liu, W., Cheng, L., and Du, M. Large vision-language model alignment and misalignment: A survey through the lens of explainability. _arXiv:2501.01346_, 2025.
|
| 340 |
+
* Stiennon et al. (2020) Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P.F. Learning to summarize with human feedback. In _Proceedings of the Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2020.
|
| 341 |
+
* Wang et al. (2024) Wang, P., Bai, S., Tan, S., Wang, S., Fan, Z., Bai, J., Chen, K., Liu, X., Wang, J., Ge, W., Fan, Y., Dang, K., Du, M., Ren, X., Men, R., Liu, D., Zhou, C., Zhou, J., and Lin, J. Qwen2-VL: Enhancing vision-language model’s perception of the world at any resolution. _arXiv:2409.12191_, 2024.
|
| 342 |
+
* Williams (1992) Williams, R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_, 8:229–256, 1992.
|
| 343 |
+
* Xie et al. (2024) Xie, J., Zhang, K., Chen, J., Lou, R., and Su, Y. Adaptive chameleon or stubborn sloth: Revealing the behavior of large language models in knowledge conflicts. In _Proceedings of the International Conference on Learning Representations_, 2024.
|
| 344 |
+
* Xu et al. (2024) Xu, R., Qi, Z., Guo, Z., Wang, C., Wang, H., Zhang, Y., and Xu, W. Knowledge conflicts for LLMs: A survey. In _Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2024.
|
| 345 |
+
* Yu et al. (2024a) Yu, Q., Li, J., Wei, L., Pang, L., Ye, W., Qin, B., Tang, S., Tian, Q., and Zhuang, Y. HalluciDoctor: Mitigating hallucinatory toxicity in visual instruction data. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. IEEE, 2024a.
|
| 346 |
+
* Yu et al. (2024b) Yu, T., Yao, Y., Zhang, H., He, T., Han, Y., Cui, G., Hu, J., Liu, Z., Zheng, H.-T., and Sun, M. RLHF-V: Towards trustworthy MLLMs via behavior alignment from fine-grained correctional human feedback. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. IEEE, 2024b.
|
| 347 |
+
* Yu et al. (2024c) Yu, W., Yang, Z., Li, L., Wang, J., Lin, K., Liu, Z., Wang, X., and Wang, L. MM-Vet: Evaluating large multimodal models for integrated capabilities. In _Proceedings of the International Conference on Machine Learning_, 2024c.
|
| 348 |
+
* Yue et al. (2024) Yue, X., Ni, Y., Zheng, T., Zhang, K., Liu, R., Zhang, G., Stevens, S., Jiang, D., Ren, W., Sun, Y., Wei, C., Yu, B., Yuan, R., Sun, R., Yin, M., Zheng, B., Yang, Z., Liu, Y., Huang, W., Sun, H., Su, Y., and Chen, W. MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. IEEE, 2024.
|
| 349 |
+
* Zhang et al. (2024) Zhang, Z., Shi, Y., Zhu, J., Zhou, W., Qi, X., Zhang, P., and Li, H. Trustworthy alignment of retrieval-augmented large language models via reinforcement learning. In _Proceedings of the International Conference on Machine Learning_. PMLR, 2024.
|
| 350 |
+
* Zhao et al. (2024) Zhao, Z., Wang, B., Ouyang, L., Dong, X., Wang, J., and He, C. Beyond hallucinations: Enhancing lvlms through hallucination-aware direct preference optimization. _arXiv:2311.16839_, 2024.
|
| 351 |
+
* Zheng et al. (2023) Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E.P., Zhang, H., Gonzalez, J.E., and Stoica, I. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In _Advances in Neural Information Processing Systems Track on Datasets and Benchmarks_. Curran Associates, Inc., 2023.
|
| 352 |
+
* Zhou et al. (2023) Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L., Zhang, S., Ghosh, G., Lewis, M., Zettlemoyer, L., and Levy, O. LIMA: Less is more for alignment. In _Proceedings of the Advances in Neural Information Processing Systems_, 2023.
|
| 353 |
+
* Zhu et al. (2024) Zhu, T., Liu, Q., Wang, F., Tu, Z., and Chen, M. Unraveling cross-modality knowledge conflicts in large vision-language models. _arXiv:2410.03659_, 2024.
Appendix A Visualization of MMMC
--------------------------------

![Figure 5](https://arxiv.org/html/2507.07151v1/x1.png)

Figure 5: Distribution of conflict types.

![Figure 6](https://arxiv.org/html/2507.07151v1/x6.png)

Figure 6: Word cloud visualization of MMMC. We separately visualize the distribution of words in questions and answers for each conflict type.
Appendix B Prompts
------------------

### B.1 Prompts for Data Construction

#### B.1.1 Key Component Detection

#### B.1.2 Component Substitution

#### B.1.3 Answer Generation

### B.2 Prompts for Method

#### B.2.1 Prompt for Prompt Engineering

#### B.2.2 Prompt for the Reward Model in RL

### B.3 Prompts for Evaluation

#### B.3.1 Hallucination Rate

#### B.3.2 LLM Judge
Appendix C Performance on Separate Conflict Types
-------------------------------------------------

Table 4: Performance comparison of different methods on the object-conflict subset of the MMMC dataset.

| Model | Method | ROUGE-L (%) ↑ | Hallu-Rate (%) ↓ (Llama) | Hallu-Rate (%) ↓ (GPT) | LLM-Judge ↑ (Llama) | LLM-Judge ↑ (GPT) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | Base | 22.98 | 51.03 | 49.89 | 2.19 | 2.49 |
| | PE | 23.17 | 52.40 | 49.50 | 2.19 | 2.50 |
| InstructBLIP-7B | Base | 12.64 | 79.86 | 66.06 | 1.82 | 1.86 |
| | PE | 12.64 | 79.02 | 65.37 | 1.82 | 1.86 |
| | SFT | 8.41 (0.38) | 83.17 (0.10) | 66.72 (0.80) | 1.82 (0.01) | 1.75 (0.06) |
| | RL | 4.98 (2.82) | 53.57 (21.38) | 53.39 (20.77) | 1.01 (0.69) | 1.50 (0.97) |
| LLaVA-v1.5-7B | Base | 26.38 | 91.38 | 81.16 | 1.72 | 1.80 |
| | PE | 23.64 | 83.30 | 81.31 | 1.95 | 1.93 |
| | SFT | 19.13 (0.59) | 50.88 (0.88) | 44.37 (1.34) | 2.40 (0.02) | 2.38 (0.02) |
| | RL | 20.76 (4.11) | 24.54 (2.33) | 21.56 (1.75) | 2.69 (0.05) | 2.85 (0.04) |
| LLaVA-NeXT-7B | Base | 17.14 | 62.62 | 60.87 | 1.98 | 2.33 |
| | PE | 20.27 | 41.27 | 41.72 | 2.51 | 2.81 |
| | SFT | 24.99 (0.06) | 36.59 (0.41) | 34.02 (0.53) | 2.62 (0.01) | 2.56 (0.05) |
| | RL | 23.88 (2.01) | 23.90 (1.53) | 22.20 (1.80) | 2.79 (0.01) | 3.02 (0.06) |
| Qwen2-VL-Instruct-2B | Base | 22.78 | 39.59 | 34.40 | 2.15 | 2.35 |
| | PE | 27.52 | 54.69 | 52.71 | 2.33 | 2.50 |
| | SFT | 29.87 (0.17) | 21.66 (0.72) | 26.49 (0.79) | 2.81 (0.02) | 2.84 (0.02) |
| | RL | 21.45 (1.12) | 11.87 (4.09) | 11.14 (3.24) | 2.81 (0.05) | 3.08 (0.07) |
| Qwen2-VL-Instruct-7B | Base | 22.73 | 45.31 | 42.26 | 2.31 | 2.54 |
| | PE | 26.77 | 33.49 | 31.50 | 2.58 | 2.89 |
| | SFT | 29.68 (0.09) | 23.37 (0.47) | 25.04 (0.47) | 2.80 (0.01) | 2.81 (0.02) |
| | RL | 19.68 (0.68) | 15.94 (4.16) | 13.63 (3.91) | 2.76 (0.05) | 3.00 (0.08) |
Table 5: Performance comparison of different methods on the attribute-conflict subset of the MMMC dataset.

| Model | Method | ROUGE-L (%) ↑ | Hallu-Rate (%) ↓ (Llama) | Hallu-Rate (%) ↓ (GPT) | LLM-Judge ↑ (Llama) | LLM-Judge ↑ (GPT) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | Base | 24.96 | 69.50 | 66.04 | 2.02 | 2.24 |
| | PE | 24.87 | 70.13 | 66.98 | 2.04 | 2.31 |
| InstructBLIP-7B | Base | 12.96 | 85.22 | 77.04 | 1.77 | 1.81 |
| | PE | 12.96 | 84.91 | 76.73 | 1.71 | 1.82 |
| | SFT | 10.26 (0.49) | 89.20 (0.30) | 76.94 (1.55) | 1.77 (0.01) | 1.73 (0.04) |
| | RL | 6.28 (3.37) | 59.22 (16.62) | 59.01 (18.12) | 0.94 (0.67) | 1.41 (0.95) |
| LLaVA-v1.5-7B | Base | 33.07 | 97.17 | 88.05 | 1.75 | 1.80 |
| | PE | 28.81 | 93.08 | 91.51 | 1.92 | 1.90 |
| | SFT | 16.14 (0.74) | 69.92 (1.71) | 62.79 (0.90) | 2.12 (0.01) | 2.11 (0.01) |
| | RL | 26.76 (3.79) | 39.10 (4.77) | 33.44 (2.98) | 2.53 (0.08) | 2.69 (0.09) |
| LLaVA-NeXT-7B | Base | 20.03 | 78.62 | 76.10 | 1.80 | 2.14 |
| | PE | 21.53 | 59.43 | 55.03 | 2.34 | 2.58 |
| | SFT | 20.56 (0.12) | 58.07 (0.90) | 53.88 (1.65) | 2.31 (0.01) | 2.27 (0.03) |
| | RL | 28.55 (1.70) | 44.13 (2.53) | 39.73 (2.75) | 2.55 (0.01) | 2.69 (0.07) |
| Qwen2-VL-Instruct-2B | Base | 29.89 | 55.66 | 49.37 | 2.00 | 2.20 |
| | PE | 34.83 | 72.01 | 68.87 | 2.16 | 2.28 |
| | SFT | 31.06 (0.09) | 28.09 (0.15) | 33.12 (2.24) | 2.75 (0.02) | 2.77 (0.01) |
| | RL | 25.50 (2.09) | 21.91 (5.10) | 20.55 (4.52) | 2.66 (0.09) | 2.88 (0.07) |
| Qwen2-VL-Instruct-7B | Base | 28.61 | 59.43 | 54.09 | 2.26 | 2.48 |
| | PE | 31.47 | 45.60 | 44.34 | 2.51 | 2.76 |
| | SFT | 30.25 (0.05) | 27.04 (0.77) | 31.13 (0.68) | 2.74 (0.01) | 2.77 (0.00) |
| | RL | 18.96 (1.42) | 31.03 (7.55) | 26.31 (6.46) | 2.56 (0.11) | 2.70 (0.16) |
Table 6: Performance comparison of different methods on the relationship-conflict subset of the MMMC dataset.

| Model | Method | ROUGE-L (%) ↑ | Hallu-Rate (%) ↓ (Llama) | Hallu-Rate (%) ↓ (GPT) | LLM-Judge ↑ (Llama) | LLM-Judge ↑ (GPT) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | Base | 25.46 | 80.32 | 74.39 | 1.98 | 2.19 |
| | PE | 26.11 | 78.71 | 74.66 | 2.01 | 2.23 |
| InstructBLIP-7B | Base | 19.10 | 87.33 | 80.86 | 1.79 | 1.85 |
| | PE | 19.10 | 91.64 | 77.90 | 1.78 | 1.87 |
| | SFT | 9.27 (0.61) | 90.48 (1.47) | 77.45 (0.55) | 1.81 (0.02) | 1.83 (0.03) |
| | RL | 7.47 (3.77) | 70.53 (12.39) | 69.00 (10.27) | 1.05 (0.69) | 1.46 (0.92) |
| LLaVA-v1.5-7B | Base | 32.28 | 96.50 | 88.41 | 1.74 | 1.83 |
| | PE | 31.01 | 94.61 | 90.84 | 1.89 | 1.92 |
| | SFT | 9.68 (0.11) | 80.32 (1.01) | 71.25 (1.59) | 1.96 (0.02) | 2.02 (0.00) |
| | RL | 30.57 (1.70) | 62.35 (1.78) | 55.71 (2.45) | 2.24 (0.03) | 2.39 (0.01) |
| LLaVA-NeXT-7B | Base | 19.74 | 86.79 | 80.86 | 1.81 | 2.03 |
| | PE | 22.65 | 75.47 | 74.93 | 2.20 | 2.36 |
| | SFT | 14.01 (0.37) | 68.55 (1.04) | 64.51 (0.99) | 2.12 (0.01) | 2.14 (0.05) |
| | RL | 28.71 (0.68) | 60.11 (3.32) | 56.06 (2.38) | 2.27 (0.04) | 2.45 (0.05) |
| Qwen2-VL-Instruct-2B | Base | 29.75 | 63.34 | 54.72 | 1.85 | 1.99 |
| | PE | 35.25 | 79.78 | 77.90 | 2.10 | 2.12 |
| | SFT | 25.89 (0.66) | 44.12 (0.71) | 54.72 (1.01) | 2.35 (0.04) | 2.49 (0.04) |
| | RL | 24.44 (3.26) | 36.30 (9.21) | 33.51 (8.02) | 2.49 (0.09) | 2.66 (0.15) |
| Qwen2-VL-Instruct-7B | Base | 28.50 | 71.16 | 62.80 | 2.05 | 2.21 |
| | PE | 32.90 | 58.76 | 52.02 | 2.33 | 2.52 |
| | SFT | 23.37 (0.23) | 48.34 (0.25) | 57.41 (3.24) | 2.36 (0.01) | 2.47 (0.03) |
| | RL | 16.06 (0.96) | 43.85 (9.27) | 39.53 (8.14) | 2.39 (0.09) | 2.52 (0.14) |
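As a rough illustration of how the per-subset numbers above could be aggregated, the sketch below computes an LCS-based ROUGE-L F-measure, a hallucination rate, and a mean judge score from per-sample records. This is a minimal sketch under stated assumptions: the paper's actual evaluation relies on the judge prompts of Appendix B, and the function names (`rouge_l`, `aggregate`) and record fields are illustrative, not the authors' code.

```python
def rouge_l(pred: str, ref: str) -> float:
    """ROUGE-L F-measure via longest common subsequence over whitespace tokens."""
    p, r = pred.lower().split(), ref.lower().split()
    # LCS length by dynamic programming
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i, tp in enumerate(p):
        for j, tr in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if tp == tr else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(p)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec)


def aggregate(samples):
    """samples: list of dicts with 'pred', 'ref', 'hallucinated' (bool),
    and 'judge' (numeric score assigned by the judge model)."""
    n = len(samples)
    return {
        "rouge_l_pct": 100 * sum(rouge_l(s["pred"], s["ref"]) for s in samples) / n,
        "hallu_rate_pct": 100 * sum(s["hallucinated"] for s in samples) / n,
        "llm_judge": sum(s["judge"] for s in samples) / n,
    }
```

With this convention, each (Model, Method) row of a table corresponds to one `aggregate` call over that method's outputs on the relevant conflict subset.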
Appendix D Examples
-------------------

### D.1 Object Conflict

![Image 8: [Uncaptioned image]](https://arxiv.org/html/2507.07151v1/extracted/6608778/figures/example_object1.png)

**Question:** How many fins does the fish have?

**Answer:** The image does not contain any fish or fins.

**GPT-4o, Base:** The image does not show a fish. It shows a hummingbird feeding on some flowers.

**GPT-4o, PE:** The image shows a hummingbird, not a fish. Therefore, it does not have any fins. Birds have wings, which are visible in the image.

**InstructBLIP-7B, Base:** 0

**InstructBLIP-7B, PE:** 0

**InstructBLIP-7B, SFT:** 0

**InstructBLIP-7B, RL:** was: Picture of a Bird and a Fish This photo does not show a fish, so there are no fish fins in this photo. This photo only shows a bird, and it does not show any fish. This photo does not show any fish, so there are no fish fins in this photo. This photo does not show any fish, so there are no fish fins in this photo. This photo does not show any fish, so there are no fish fins in this photo. This photo does not show any fish, so there are no fish fins in this photo. This photo does not show any fish, so there are no fish

**LLaVA-NeXT-7B, Base:** The image you’ve provided appears to show a hummingbird feeding on a flower. Hummingbirds do not have fins like fish do. They have a unique set of wings that allow them to hover and fly in a way that is distinct from other birds.

**LLaVA-NeXT-7B, PE:** The image you’ve provided appears to show a hummingbird feeding on a flower. Hummingbirds do not have fins like fish do. They are small birds known for their ability to hover in mid-air and fly backward.

**LLaVA-NeXT-7B, SFT:** The image does not contain a fish.

**LLaVA-NeXT-7B, RL:** No fish.

**Qwen2-VL-Instruct-2B, Base:** There is no existence of a fish in the image description.

**Qwen2-VL-Instruct-2B, PE:** There is no existence of a fish in the image, so asking about the number of fins for a fish is misleading.

**Qwen2-VL-Instruct-2B, SFT:** The image does not contain a fish.

**Qwen2-VL-Instruct-2B, RL:** There is no existence of a fish or fins in the image description.

**Qwen2-VL-Instruct-7B, Base:** There is no existence of a fish in the image description.

**Qwen2-VL-Instruct-7B, PE:** There is no existence of a fish in the image.

**Qwen2-VL-Instruct-7B, SFT:** The image does not contain a fish.

**Qwen2-VL-Instruct-7B, RL:** There is no existence of a fish in the image description.
### D.2 Attribute Conflict

![Image 9: [Uncaptioned image]](https://arxiv.org/html/2507.07151v1/extracted/6608778/figures/example_attribute1.png)

**Question:** What is behind the blue trolley?

**Answer:** The image does not contain a blue trolley.

**GPT-4o, Base:** Behind the blue trolley, there are several cyclists riding on the street. Additionally, there is an orange trolley, some trees, street signs, and the facades of various buildings.

**GPT-4o, PE:** Behind the blue trolley, there is a building with a distinctive sloped roof located across the intersection, and a few cyclists riding on the street. Additionally, tram tracks and cyclists are visible on the road, and some signage indicating a ”PARK” can be seen on the left side of the image.

**InstructBLIP-7B, Base:** a bus

**InstructBLIP-7B, PE:** a bus

**InstructBLIP-7B, SFT:** A building.

**InstructBLIP-7B, RL:** was: Picture of People Biking and Trolley This photo does not show what is behind the blue trolley. This photo only shows what is in front of the blue trolley. There is no information about what is behind the blue trolley. This photo does not show what is behind the blue trolley. This photo only shows what is in front of the blue trolley. There is no information about what is behind the blue trolley. This photo does not show what is behind the blue trolley. This photo only shows what is in front of the blue trolley.

**LLaVA-NeXT-7B, Base:** Behind the blue trolley, there is a group of cyclists riding on the street. The cyclists are wearing helmets and appear to be in motion, following the trolley. The scene suggests a busy urban environment with various modes of transportation sharing the road.

**LLaVA-NeXT-7B, PE:** The image shows a group of cyclists riding on a street with a blue trolley in the background. The trolley is not directly behind the cyclists; it is further down the street, and the cyclists are closer to the camera. The cyclists are wearing helmets and appear to be in motion, suggesting they are actively riding their bicycles. The street has various markings and signs, indicating it is a busy urban area with traffic regulations.

**LLaVA-NeXT-7B, SFT:** Buildings.

**LLaVA-NeXT-7B, RL:** There is no blue trolley in the image.

**Qwen2-VL-Instruct-2B, Base:** behind the blue trolley.(143,297),(448,530)

**Qwen2-VL-Instruct-2B, PE:** There is a bus behind the blue trolley in the image.

**Qwen2-VL-Instruct-2B, SFT:** The image does not contain a blue trolley.

**Qwen2-VL-Instruct-2B, RL:** There is no existence of a blue trolley in the image description.

**Qwen2-VL-Instruct-7B, Base:** There is no existence of a blue trolley in the image.

**Qwen2-VL-Instruct-7B, PE:** There is no existence of a blue trolley in the image.

**Qwen2-VL-Instruct-7B, SFT:** The image does not contain a blue trolley.

**Qwen2-VL-Instruct-7B, RL:** There is no existence of a blue trolley in the image description.
### D.3 Relationship Conflict

![Image 10: [Uncaptioned image]](https://arxiv.org/html/2507.07151v1/extracted/6608778/figures/example_relation1.png)

**Question:** What is growing inside of the shed?

**Answer:** The image does not show anything growing inside of the shed.

**GPT-4o, Base:** I’m sorry, but I can’t determine what’s growing inside the shed from this image.

**GPT-4o, PE:** I’m sorry, but I can’t determine what is growing inside the shed from the image provided.

**InstructBLIP-7B, Base:** There is no shed in the picture.

**InstructBLIP-7B, PE:** There is no shed in the picture.

**InstructBLIP-7B, SFT:** Nothing.

**InstructBLIP-7B, RL:** was: Picture of a Shed with a Fire Hydrant Nearby This picture does not show any growth inside of the shed. There is no growth inside of the shed. This picture only shows the shed and the fire hydrant. There is no growth inside of the shed. This picture does not show any growth inside of the shed. This picture only shows the shed and the fire hydrant. There is no growth inside of the shed. This picture does not show any growth inside of the shed. This picture only shows the shed and the fire hydrant. There is no growth inside of the shed. This picture does not

**LLaVA-NeXT-7B, Base:** Inside the shed, there appears to be a plant with long, slender leaves that could be a type of grass or sedge. The plant is growing in the corner of the shed, where the wall meets the floor. It’s not clear from the image whether the plant is indigenous to the area or if it was intentionally planted there.

**LLaVA-NeXT-7B, PE:** The image shows a shed with a corrugated metal roof and a wooden structure. There is a fire hydrant in the foreground, and the shed appears to be in a natural setting with vegetation around it. There is no visible growth inside the shed, as it is an inanimate structure.

**LLaVA-NeXT-7B, SFT:** Buildings.

**LLaVA-NeXT-7B, RL:** There doesn’t appear to be anything growing inside the shed. It looks like a typical shed with no plants or vegetation inside.

**Qwen2-VL-Instruct-2B, Base:** There is no existence of any plants or vegetation inside the shed in the image description.

**Qwen2-VL-Instruct-2B, PE:** There is no existence of any plants or vegetation growing inside the shed in the image.

**Qwen2-VL-Instruct-2B, SFT:** The image does not contain any information about what is growing inside the shed.

**Qwen2-VL-Instruct-2B, RL:** There is no existence of any plants or growth inside the shed in the image description.

**Qwen2-VL-Instruct-7B, Base:** It is ambiguous what is growing inside of the shed. It could be grass, weeds, or plants.

**Qwen2-VL-Instruct-7B, PE:** There is no existence of any plants or growth inside the shed in the image.

**Qwen2-VL-Instruct-7B, SFT:** Weeds.

**Qwen2-VL-Instruct-7B, RL:** There is no existence of any plants or growth inside the shed in the given image information.
|