LLM Alignment

WARM: On the Benefits of Weight Averaged Reward Models • arXiv:2401.12187 • Published Jan 22, 2024
Self-Rewarding Language Models • arXiv:2401.10020 • Published Jan 18, 2024
Secrets of RLHF in Large Language Models Part II: Reward Modeling • arXiv:2401.06080 • Published Jan 11, 2024
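For reference, the weight averaging named in the WARM title is plain element-wise averaging of the parameters of several reward models fine-tuned from a shared initialization. A minimal sketch, assuming PyTorch state dicts with identical keys; `average_state_dicts` is a hypothetical helper, not code from the paper:

```python
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of parameter tensors across models.

    Assumes every state dict comes from the same architecture,
    fine-tuned from a common initialization (the setting the
    WARM title describes).
    """
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Usage sketch: load N fine-tuned reward models, average them,
# and load the result back into a fresh copy of the model.
# avg = average_state_dicts([m.state_dict() for m in reward_models])
# merged_model.load_state_dict(avg)
```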