arxiv:2503.12303

Towards Self-Improving Systematic Cognition for Next-Generation Foundation MLLMs

Published on Mar 16 · Submitted by PengDa02 on Mar 19
Abstract

Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) face challenges with fine-grained perception and complex reasoning. Prevalent multimodal pre-training approaches focus on enhancing perception by training on high-quality image captions due to the extremely high cost of collecting chain-of-thought (CoT) reasoning data for improving reasoning. While leveraging advanced MLLMs for caption generation enhances scalability, the outputs often lack comprehensiveness and accuracy. In this paper, we introduce Self-Improving cognition (SIcog), a self-learning framework designed to construct next-generation foundation MLLMs by enhancing their systematic cognitive capabilities through multimodal pre-training with self-generated data. Specifically, we propose Chain-of-Description, an approach that improves an MLLM's systematic perception by enabling step-by-step visual understanding, ensuring greater comprehensiveness and accuracy. Additionally, we adopt a structured CoT reasoning technique to enable MLLMs to integrate in-depth multimodal reasoning. To construct a next-generation foundation MLLM with self-improved cognition, SIcog first equips an MLLM with systematic perception and reasoning abilities using minimal external annotations. The enhanced models then generate detailed captions and CoT reasoning data, which are further curated through self-consistency. This curated data is ultimately used for multimodal pre-training to develop next-generation foundation models. Extensive experiments on both low- and high-resolution MLLMs across diverse benchmarks demonstrate that, with merely 213K self-generated pre-training samples, SIcog produces next-generation foundation MLLMs with significantly improved cognition, achieving benchmark-leading performance compared to prevalent pre-training approaches.
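To make the pipeline described above concrete, here is a minimal Python sketch of the self-learning loop, assuming a generic model interface; the method names (`finetune`, `generate`, `pretrain`), the number of samples, and the agreement threshold are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def self_consistency_filter(candidates, min_agreement=0.5):
    """Keep a self-generated sample only if enough independent generations agree."""
    if not candidates:
        return None
    best, freq = Counter(candidates).most_common(1)[0]
    return best if freq / len(candidates) >= min_agreement else None

def sicog_round(seed_model, images, questions, annotated_seed_data, pretrain_corpus):
    # Step 1: equip the seed MLLM with systematic perception (Chain-of-Description)
    # and structured CoT reasoning using a small amount of external annotations.
    perceiver = seed_model.finetune(annotated_seed_data["chain_of_description"])
    reasoner = seed_model.finetune(annotated_seed_data["structured_cot"])

    # Step 2: self-generate candidate captions and CoT rationales,
    # then curate them via self-consistency (majority agreement).
    curated = []
    for img in images:
        captions = [perceiver.generate(img, mode="chain_of_description") for _ in range(5)]
        caption = self_consistency_filter(captions)
        if caption is not None:
            curated.append({"image": img, "caption": caption})
    for img, question in questions:
        rationales = [reasoner.generate((img, question), mode="structured_cot") for _ in range(5)]
        rationale = self_consistency_filter(rationales)
        if rationale is not None:
            curated.append({"image": img, "question": question, "cot": rationale})

    # Step 3: use the curated self-generated data (about 213K samples in the paper)
    # for multimodal pre-training of the next-generation foundation MLLM.
    return seed_model.pretrain(pretrain_corpus + curated)
```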

Community

Paper author Paper submitter

[Figure: SIcog framework overview (framework.png)]
🎯 New Release | SIcog 🔥
We’re thrilled to introduce SIcog, a multimodal self-learning framework that rethinks multimodal pretraining by leveraging the model’s own self-generated Chain-of-Thought and Chain-of-Description data! By integrating reasoning capabilities into multimodal pretraining, SIcog systematically enhances MLLMs’ cognitive abilities, offering new insights into data synthesis and autonomous model improvement.
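As a rough illustration of what Chain-of-Description data generation could look like in practice, the sketch below shows a step-by-step description prompt and sampling loop; the step wording and the `mllm.generate` client interface are assumptions for illustration, not the paper's exact template.

```python
# Hypothetical Chain-of-Description prompt: the model is asked to describe
# the image in ordered steps before producing a final comprehensive caption.
COD_PROMPT = (
    "Describe the image step by step:\n"
    "1. Salient content: what stands out first?\n"
    "2. Fine-grained details: objects, text, attributes, counts.\n"
    "3. Relations: spatial and semantic relations among elements.\n"
    "4. Peripheral content: background and surrounding context.\n"
    "5. Summary: one comprehensive, accurate caption."
)

def generate_chain_of_description(mllm, image, n_samples=5):
    """Sample several step-by-step descriptions from a (hypothetical) MLLM client."""
    return [mllm.generate(image=image, prompt=COD_PROMPT) for _ in range(n_samples)]
```

Sampling several such descriptions per image is what enables the self-consistency curation step before the curated captions are folded back into pretraining.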

📂 Resources
Paper: https://arxiv.org/pdf/2503.12303
Code: https://github.com/thunlp/SICOG

Join us in advancing the frontier of autonomous multimodal intelligence with SIcog! Let’s redefine how machines perceive, reason, and evolve! 💥
