DUMP: Automated Distribution-Level Curriculum Learning for RL-based LLM Post-training
Abstract
Recent advances in reinforcement learning (RL)-based post-training have led to notable improvements in large language models (LLMs), particularly in enhancing their reasoning capabilities to handle complex tasks. However, most existing methods treat the training data as a unified whole, overlooking the fact that modern LLM training often involves a mixture of data from diverse distributions, varying in both source and difficulty. This heterogeneity introduces a key challenge: how to adaptively schedule training across distributions to optimize learning efficiency. In this paper, we present a principled curriculum learning framework grounded in the notion of distribution-level learnability. Our core insight is that the magnitude of policy advantages reflects how much a model can still benefit from further training on a given distribution. Based on this, we propose a distribution-level curriculum learning framework for RL-based LLM post-training, which leverages the Upper Confidence Bound (UCB) principle to dynamically adjust sampling probabilities for different distributions. This approach prioritizes distributions with either high average advantage (exploitation) or low sample count (exploration), yielding an adaptive and theoretically grounded training schedule. We instantiate our curriculum learning framework with GRPO as the underlying RL algorithm and demonstrate its effectiveness on logic reasoning datasets with multiple difficulties and sources. Our experiments show that our framework significantly improves convergence speed and final performance, highlighting the value of distribution-aware curriculum strategies in LLM post-training. Code: https://github.com/ZhentingWang/DUMP.
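To make the scheduling idea concrete, here is a minimal Python sketch of a UCB-style scheduler over data distributions, based only on the description in the abstract. The class name, the use of mean absolute advantage as the learnability statistic, the exploration weight `c`, and the softmax temperature are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import math
import random
from collections import defaultdict


class UCBDistributionScheduler:
    """Sketch of a UCB-style curriculum over data distributions (assumption,
    not the official DUMP implementation).

    Each distribution keeps a running mean of the magnitude of policy
    advantages observed on its samples (exploitation) and a visit count
    (exploration). UCB scores are turned into sampling probabilities with a
    softmax, so distributions with high learnability or few visits are drawn
    more often.
    """

    def __init__(self, distributions, c=1.0, temperature=1.0):
        self.distributions = list(distributions)
        self.c = c                      # exploration weight (assumed hyperparameter)
        self.temperature = temperature  # softmax temperature (assumed hyperparameter)
        self.counts = defaultdict(int)
        self.adv_sums = defaultdict(float)

    def _ucb_score(self, d, total):
        if self.counts[d] == 0:
            return float("inf")  # force at least one draw per distribution
        mean_adv = self.adv_sums[d] / self.counts[d]
        bonus = self.c * math.sqrt(math.log(total) / self.counts[d])
        return mean_adv + bonus

    def sampling_probs(self):
        total = max(sum(self.counts.values()), 1)
        scores = [self._ucb_score(d, total) for d in self.distributions]
        if any(math.isinf(s) for s in scores):
            # Sample uniformly among distributions that have never been visited.
            unvisited = [1.0 if math.isinf(s) else 0.0 for s in scores]
            z = sum(unvisited)
            return [u / z for u in unvisited]
        exps = [math.exp(s / self.temperature) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def select(self):
        # Draw the distribution for the next training batch.
        return random.choices(self.distributions, weights=self.sampling_probs(), k=1)[0]

    def update(self, d, batch_advantages):
        # batch_advantages: per-sample advantages from the RL step (e.g. GRPO).
        self.counts[d] += 1
        self.adv_sums[d] += sum(abs(a) for a in batch_advantages) / len(batch_advantages)
```

In a training loop, `select()` would pick the distribution for the next GRPO batch and `update()` would feed back that batch's advantage magnitudes, so the schedule adapts as distributions are exhausted or remain under-explored.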
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Efficient Reinforcement Finetuning via Adaptive Curriculum Learning (2025)
- Online Difficulty Filtering for Reasoning Oriented Reinforcement Learning (2025)
- LLM Post-Training: A Deep Dive into Reasoning Large Language Models (2025)
- Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering (2025)
- A Unified Pairwise Framework for RLHF: Bridging Generative Reward Modeling and Policy Optimization (2025)
- OThink-MR1: Stimulating multimodal generalized reasoning capabilities via dynamic reinforcement learning (2025)
- Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning (2025)