Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data
Abstract
Large multimodal models (LMMs) have shown impressive capabilities in a wide range of visual tasks. However, they often struggle with fine-grained visual reasoning, failing to identify domain-specific objectives and provide justifiable explanations for their predictions. To address this, we propose a novel visual rejection sampling framework to improve the cognition and explainability of LMMs using self-synthesized data. Specifically, visual fine-tuning requires images, queries, and target answers. Our approach begins by synthesizing interpretable answers that include human-verifiable visual features. These features are grounded in expert-defined concepts, selected according to their alignment with the image content. After each round of fine-tuning, we apply a reward model-free filtering mechanism to select the highest-quality interpretable answers for the next round of tuning. This iterative process of data synthesis and fine-tuning progressively improves the model's ability to generate accurate and reasonable explanations. Experimental results demonstrate the effectiveness of our method in improving both accuracy and explainability on specialized visual classification tasks.
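The abstract describes an iterative synthesize-filter-finetune loop. Below is a minimal Python sketch of that loop under stated assumptions: the helper names (`generate_answers`, `score_answer`, `finetune`) and the round/candidate counts are illustrative placeholders, not the authors' actual implementation.

```python
# Hypothetical sketch of the iterative data-synthesis + rejection-sampling loop.
# All callables here are placeholders for the paper's components, not a real API.
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Example:
    image_path: str
    query: str
    answer: str  # synthesized interpretable answer citing visual features


def iterative_refinement(
    images: Sequence[str],
    query: str,
    generate_answers: Callable[[str, str, int], List[str]],  # (image, query, k) -> k candidate answers
    score_answer: Callable[[str, str], float],               # reward-model-free quality score (assumed)
    finetune: Callable[[List[Example]], None],               # one round of visual fine-tuning
    rounds: int = 3,
    candidates_per_image: int = 8,
) -> None:
    """Alternate between synthesizing interpretable answers and fine-tuning on the
    highest-scoring ones, so each round's model produces better data for the next."""
    for _ in range(rounds):
        batch: List[Example] = []
        for img in images:
            # Sample several candidate explanations, then keep only the best one
            # according to the (model-free) scoring function.
            candidates = generate_answers(img, query, candidates_per_image)
            best = max(candidates, key=lambda ans: score_answer(img, ans))
            batch.append(Example(image_path=img, query=query, answer=best))
        # The fine-tuned model is then used to synthesize data in the next round.
        finetune(batch)
```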
Community
🏔️ The Challenge:
Modern AI systems excel at general tasks but often fall short when applied to specialized fields such as medical imaging, plant disease detection, or fine-grained species classification. Training with only image-label pairs tends to compromise the model’s ability to follow instructions and explain its decisions—an issue that becomes critical when precision and accountability are required.
đź“Ł Our Solution:
We introduce a novel framework where the model self-generates interpretable visual explanations through an iterative fine-tuning process. By leveraging self-synthesized data, our approach automatically extracts key visual features and produces expert-level, image-specific explanations. This method overcomes the limitations of conventional labeling (which often lacks detailed interpretability) while preserving the model’s general instruction-following capabilities.
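As a toy illustration of the "select features aligned with the image" idea, one plausible reading is to score each expert-defined concept against an image embedding and keep only the best matches. The `cosine` helper, the `concept_bank` structure, and the similarity threshold below are assumptions for the sketch, not the paper's specified procedure.

```python
# Illustrative concept-selection sketch (assumed mechanism, not the authors' method).
import math
from typing import Dict, List, Sequence


def cosine(u: Sequence[float], v: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)


def select_aligned_concepts(
    image_embedding: Sequence[float],
    concept_bank: Dict[str, Sequence[float]],  # expert-defined concept -> text embedding
    top_k: int = 5,
    min_similarity: float = 0.25,               # assumed cutoff; would need tuning per domain
) -> List[str]:
    """Keep the expert concepts whose embeddings best match the image, so the
    synthesized explanation only cites features that are plausibly visible."""
    scored = [(name, cosine(image_embedding, emb)) for name, emb in concept_bank.items()]
    scored = [(name, s) for name, s in scored if s >= min_similarity]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:top_k]]
```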
🔥 Why It Matters:
Understanding not only what the model predicts but also why it makes those predictions is essential for building trust in AI systems, especially in high-stakes domains. Our work bridges the gap between performance and interpretability, enabling models to serve as true domain experts with transparent decision-making—paving the way for safer and more reliable AI applications.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild (2025)
- Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models (2025)
- AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO (2025)
- SeFAR: Semi-supervised Fine-grained Action Recognition with Temporal Perturbation and Learning Stabilization (2025)
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment (2024)
- Dynamic Knowledge Integration for Enhanced Vision-Language Reasoning (2025)
- TaskGalaxy: Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types (2025)
Models citing this paper: 6
Datasets citing this paper: 0
Spaces citing this paper: 0