RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response Paper • 2412.14922 • Published Dec 19, 2024 • 85
Law of the Weakest Link: Cross Capabilities of Large Language Models Paper • 2409.19951 • Published Sep 30, 2024 • 54
Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale Paper • 2409.17115 • Published Sep 25, 2024 • 61
Instruction Following without Instruction Tuning Paper • 2409.14254 • Published Sep 21, 2024 • 28
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning Paper • 2409.12183 • Published Sep 18, 2024 • 37
Fine-tuning LLMs to 1.58bit: extreme quantization made easy Article • Published Sep 18, 2024 • 216
Welcome FalconMamba: The first strong attention-free 7B model Article • Published Aug 12, 2024 • 108