Collections including paper arxiv:2309.06657

- Moral Foundations of Large Language Models
  Paper • 2310.15337 • Published • 1
- Specific versus General Principles for Constitutional AI
  Paper • 2310.13798 • Published • 2
- Contrastive Preference Learning: Learning from Human Feedback without RL
  Paper • 2310.13639 • Published • 24
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
  Paper • 2309.00267 • Published • 47

- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 47
- Towards Efficient and Exact Optimization of Language Model Alignment
  Paper • 2402.00856 • Published
- A General Theoretical Paradigm to Understand Learning from Human Preferences
  Paper • 2310.12036 • Published • 14
- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13

- Training language models to follow instructions with human feedback
  Paper • 2203.02155 • Published • 15
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 47
- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13
- SimPO: Simple Preference Optimization with a Reference-Free Reward
  Paper • 2405.14734 • Published • 10

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
  Paper • 2305.13245 • Published • 5
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 241

- Efficient RLHF: Reducing the Memory Usage of PPO
  Paper • 2309.00754 • Published • 13
- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13
- Aligning Large Multimodal Models with Factually Augmented RLHF
  Paper • 2309.14525 • Published • 29
- Stabilizing RLHF through Advantage Model and Selective Rehearsal
  Paper • 2309.10202 • Published • 9

- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 82
- Baichuan 2: Open Large-scale Language Models
  Paper • 2309.10305 • Published • 19
- Chain-of-Verification Reduces Hallucination in Large Language Models
  Paper • 2309.11495 • Published • 38
- LMDX: Language Model-based Document Information Extraction and Localization
  Paper • 2309.10952 • Published • 65

- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13
- In-Context Learning Creates Task Vectors
  Paper • 2310.15916 • Published • 41
- Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length
  Paper • 2404.08801 • Published • 63
- Make Your LLM Fully Utilize the Context
  Paper • 2404.16811 • Published • 52

- Efficient RLHF: Reducing the Memory Usage of PPO
  Paper • 2309.00754 • Published • 13
- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13
- Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
  Paper • 2309.07462 • Published • 3
- Stabilizing RLHF through Advantage Model and Selective Rehearsal
  Paper • 2309.10202 • Published • 9

- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 13
- Efficient Monotonic Multihead Attention
  Paper • 2312.04515 • Published • 6
- Layerwise Recurrent Router for Mixture-of-Experts
  Paper • 2408.06793 • Published • 30
- Scaling Up Diffusion and Flow-based XGBoost Models
  Paper • 2408.16046 • Published • 8