Understanding the performance gap between online and offline alignment algorithms • arXiv:2405.08448 • Published May 14
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment • arXiv:2405.19332 • Published 24 days ago
Offline Regularised Reinforcement Learning for Large Language Models Alignment • arXiv:2405.19107 • Published 24 days ago
Show, Don't Tell: Aligning Language Models with Demonstrated Feedback • arXiv:2406.00888 • Published 20 days ago
Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms • arXiv:2406.02900 • Published 18 days ago
BPO: Supercharging Online Preference Learning by Adhering to the Proximity of Behavior LLM • arXiv:2406.12168 • Published 5 days ago
Deep Bayesian Active Learning for Preference Modeling in Large Language Models • arXiv:2406.10023 • Published 9 days ago
Bootstrapping Language Models with DPO Implicit Rewards • arXiv:2406.09760 • Published 9 days ago