# [NeurIPS 2025] Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling

**A simple, general sampling method for RLVR on multiple-choice datasets that mitigates the unfaithful reasoning phenomenon!**

## SCS Resources

[**📖 Paper**]() | [**🤗 Dataset**](https://huggingface.co/datasets/GenuineWWD/SCS_data) | [**💻 Code**](https://github.com/GenuineWWD/SCS)


## 🔔 News
- **🔥[2025-11-9] Released the evaluation code! 🚀**
- **🔥[2025-10-13] Released the dataset and the code! 🚀**
- **🔥[2025-9-17] Our SCS paper was accepted to NeurIPS 2025! 🚀**

## To-do
- [x] Release the evaluation code

## 📖 Introduction
**Self‑Consistency Sampling (SCS)** improves outcome‑reward reinforcement learning for multimodal large language models (MLLMs). In multiple‑choice reasoning tasks, models often reach the correct answer through faulty reasoning and receive unmerited rewards. SCS mitigates this by introducing visual perturbations and repeatedly resampling reasoning trajectories, rewarding only consistent reasoning paths. Integrated into methods such as RLOO, GRPO, and REINFORCE++, SCS improves accuracy by up to **7.7%** across six multimodal benchmarks with minimal extra cost, and generalizes across models including **Qwen2.5‑VL** and **InternVL3**.
![Overview](assets/overview2.png)
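The consistency idea above can be sketched as a simple reward rule. This is a minimal illustration only, not the released implementation: the function and parameter names (`scs_reward`, `min_agree`) are hypothetical, and the real method operates on full reasoning trajectories under visual perturbations.

```python
def scs_reward(answer: str, gold: str, resampled_answers: list[str],
               min_agree: float = 0.5) -> float:
    """Hypothetical sketch of a self-consistency reward.

    Grant the outcome reward only when the answer is correct AND a
    sufficient fraction of trajectories resampled under perturbation
    reproduce the same answer; otherwise withhold it.
    """
    if answer != gold:
        return 0.0  # wrong answer: no reward, as in plain outcome RL
    # Fraction of perturbed resamples that agree with the original answer.
    agree = sum(a == answer for a in resampled_answers) / len(resampled_answers)
    # Reward only consistent reasoning; lucky guesses tend to be inconsistent.
    return 1.0 if agree >= min_agree else 0.0
```

Under this rule, a correct answer that flips under perturbation earns nothing, which is the mechanism that filters out unfaithful reasoning paths.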

## Training
Please refer to the [code repo](https://github.com/GenuineWWD/SCS) for more details.

## Evaluation
Please refer to the [code repo](https://github.com/GenuineWWD/SCS) for more details.

## Contact
- Jiahao Wang: wjhwdscience@stu.xjtu.edu.cn
- Weiye Xu: ustcxwy0271@mail.ustc.edu.cn

## Citation

**BibTeX:**
```bibtex
@inproceedings{wang2025scs,
  title={Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling},
  author={Wang, Jiahao and Xu, Weiye and Yang, Aijun and Zhou, Wengang and Lu, Lewei and Li, Houqiang and Wang, Xiaohua and Zhu, Jinguo},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
  year={2025}
}
```