Secrets of RLHF in Large Language Models Part II: Reward Modeling Paper • 2401.06080 • Published Jan 11, 2024
Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms Paper • 2406.02900 • Published Jun 2024
AgentGym: Evolving Large Language Model-based Agents across Diverse Environments Paper • 2406.04151 • Published Jun 2024