Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF Paper • 2401.16335 • Published Jan 29 • 1
Towards Efficient and Exact Optimization of Language Model Alignment Paper • 2402.00856 • Published Feb 1
Preference-free Alignment Learning with Regularized Relevance Reward Paper • 2402.03469 • Published Feb 2
Teaching Large Language Models to Reason with Reinforcement Learning Paper • 2403.04642 • Published Mar 7 • 43
RewardBench: Evaluating Reward Models for Language Modeling Paper • 2403.13787 • Published Mar 20 • 18
PERL: Parameter Efficient Reinforcement Learning from Human Feedback Paper • 2403.10704 • Published Mar 15 • 55
Stop Regressing: Training Value Functions via Classification for Scalable Deep RL Paper • 2403.03950 • Published Mar 6 • 11
In deep reinforcement learning, a pruned network is a good network Paper • 2402.12479 • Published Feb 19 • 16
Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences Paper • 2404.03715 • Published Apr 4 • 58
Offline Regularised Reinforcement Learning for Large Language Models Alignment Paper • 2405.19107 • Published May 29 • 7