- LoRA+: Efficient Low Rank Adaptation of Large Models
  Paper • 2402.12354 • Published • 6
- The FinBen: An Holistic Financial Benchmark for Large Language Models
  Paper • 2402.12659 • Published • 22
- TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization
  Paper • 2402.13249 • Published • 13
- TrustLLM: Trustworthiness in Large Language Models
  Paper • 2401.05561 • Published • 70
Collections including paper arxiv:2404.09656
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 88
- Aligning Teacher with Student Preferences for Tailored Training Data Generation
  Paper • 2406.19227 • Published • 26
- Self-Play Preference Optimization for Language Model Alignment
  Paper • 2405.00675 • Published • 28
- CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues
  Paper • 2404.03820 • Published • 27

- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 39
- Pandora: Towards General World Model with Natural Language Actions and Video States
  Paper • 2406.09455 • Published • 15
- WPO: Enhancing RLHF with Weighted Preference Optimization
  Paper • 2406.11827 • Published • 15
- In-Context Editing: Learning Knowledge from Self-Induced Distributions
  Paper • 2406.11194 • Published • 15

- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 128
- Multi-Head Mixture-of-Experts
  Paper • 2404.15045 • Published • 61
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 257
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 88

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 94
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 20
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 27
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 27

- UltraFeedback: Boosting Language Models with High-quality Feedback
  Paper • 2310.01377 • Published • 5
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 88
- Natural Language Reinforcement Learning
  Paper • 2411.14251 • Published • 31
- Group Robust Preference Optimization in Reward-free RLHF
  Paper • 2405.20304 • Published • 1

- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 42
- Advancing LLM Reasoning Generalists with Preference Trees
  Paper • 2404.02078 • Published • 47
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 88
- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 39