- Suppressing Pink Elephants with Direct Principle Feedback
  Paper • 2402.07896 • Published • 9
- Policy Improvement using Language Feedback Models
  Paper • 2402.07876 • Published • 5
- Direct Language Model Alignment from Online AI Feedback
  Paper • 2402.04792 • Published • 27
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
  Paper • 2401.01335 • Published • 64
Collections including paper arxiv:2406.18629
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 70
- Learning From Mistakes Makes LLM Better Reasoner
  Paper • 2310.20689 • Published • 28
- Let's Verify Step by Step
  Paper • 2305.20050 • Published • 9
- SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
  Paper • 2308.00436 • Published • 21

- Trusted Source Alignment in Large Language Models
  Paper • 2311.06697 • Published • 10
- Diffusion Model Alignment Using Direct Preference Optimization
  Paper • 2311.12908 • Published • 47
- SuperHF: Supervised Iterative Learning from Human Feedback
  Paper • 2310.16763 • Published • 1
- Enhancing Diffusion Models with Text-Encoder Reinforcement Learning
  Paper • 2311.15657 • Published • 2