Space: The Ultra-Scale Playbook 🌌, the ultimate guide to training LLMs on large GPU clusters
Paper: FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (arXiv:2205.14135, published May 27, 2022)
Dataset: open-llm-leaderboard/Qwen__Qwen2.5-Math-7B-Instruct-details (updated 29 days ago)
Article: Preference Tuning LLMs with Direct Preference Optimization Methods (Jan 18, 2024)