---
license: apache-2.0
task_categories:
  - reinforcement-learning
language:
  - en
---

# [NeurIPS 2025] Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling

A simple, general sampling method for RLVR on multiple-choice datasets that mitigates the unfaithful-reasoning phenomenon!

## SCS Resources

📖 Paper | 🤗 Dataset | 💻 Code

## 🔔 News

- 🔥 [2025-11-9] Release the eval code! 🚀
- 🔥 [2025-10-13] Release the dataset and the code! 🚀
- 🔥 [2025-9-17] Our SCS paper was accepted by NeurIPS 2025! 🚀

## To-do

- Release the eval code

## 📖 Introduction

Self-Consistency Sampling (SCS) improves outcome-reward reinforcement learning for multimodal large language models (MLLMs). In multiple-choice reasoning tasks, models often reach the correct answer through faulty reasoning and receive unmerited rewards. SCS mitigates this by introducing visual perturbations and repeatedly resampling reasoning trajectories, rewarding only consistent reasoning paths. Integrated into methods such as RLOO, GRPO, and REINFORCE++, SCS boosts accuracy by up to 7.7% on six multimodal benchmarks with minimal extra cost, and generalizes across models including Qwen2.5-VL and InternVL3.

*Overview figure.*
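
For intuition, below is a minimal, runnable sketch of the consistency check described above. It is not the released implementation: the resample count, the hard consistency threshold, and the rule of zeroing out inconsistent rewards are illustrative assumptions; see the paper and the code repo for the actual reward rule.

```python
# Illustrative sketch only: keep an outcome reward when the answer survives
# resampling under visual perturbation, otherwise treat it as unfaithful.
# The threshold and the zero-out rule are assumptions, not the exact SCS rule.

def scs_adjusted_reward(base_reward: float,
                        original_answer: str,
                        resampled_answers: list[str],
                        consistency_threshold: float = 0.75) -> float:
    """Return the outcome reward only if enough perturbed resamples agree."""
    if base_reward <= 0 or not resampled_answers:
        return base_reward  # only "correct" rollouts need a consistency check
    agree = sum(ans == original_answer for ans in resampled_answers)
    consistency = agree / len(resampled_answers)
    return base_reward if consistency >= consistency_threshold else 0.0


# Toy example: the rollout answered "B", but perturbed resamples disagree,
# so the (possibly unfaithful) reward is withheld.
print(scs_adjusted_reward(1.0, "B", ["B", "C", "B", "D"]))  # -> 0.0
print(scs_adjusted_reward(1.0, "B", ["B", "B", "B", "C"]))  # -> 1.0
```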

## Training

Please refer to the code repo for more details.
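
As a quick start before diving into the training scripts, the data itself can be inspected with the 🤗 datasets library. The repo ID below is inferred from this card's path, and the split name is an assumption; check the Files tab for the actual layout.

```python
from datasets import load_dataset

# Repo ID inferred from this card's path; split and field names may differ.
ds = load_dataset("GenuineWWD/SCS_data", split="train")
print(ds[0])  # inspect one multiple-choice training example
```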

## Evaluation

Please refer to the code repo for more details.
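
For reference, multiple-choice benchmarks of this kind are usually scored by extracting the predicted option letter and comparing it with the gold label. The snippet below is a generic sketch of that scoring step, not the released eval code.

```python
import re

def extract_choice(response: str) -> str | None:
    """Pull the first standalone option letter (A-E) from a model response."""
    match = re.search(r"\b([A-E])\b", response.strip())
    return match.group(1) if match else None

def accuracy(responses: list[str], gold: list[str]) -> float:
    """Fraction of responses whose extracted option matches the gold letter."""
    correct = sum(extract_choice(r) == g for r, g in zip(responses, gold))
    return correct / len(gold)

print(accuracy(["The answer is B.", "C", "I pick (A)."], ["B", "C", "D"]))  # -> 0.666...
```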

## Contact

## Citation

BibTeX:

@article{wang2025enhancing,
  title={Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling},
  author={Wang, Jiahao and Xu, Weiye and Yang, Aijun and Zhou, Wengang and Lu, Lewei and Li, Houqiang and Wang, Xiaohua and Zhu, Jinguo},
  journal={arXiv preprint arXiv:2511.10648},
  year={2025}
}