Article: Efficient LLM Pretraining: Packed Sequences and Masked Attention • By sirluk • Oct 7, 2024
Paper: BASE TTS: Lessons from building a billion-parameter Text-to-Speech model on 100K hours of data • 2402.08093 • Published Feb 12, 2024
Space: MTEB Leaderboard 🥇 • Select benchmarks and languages for text embeddings evaluation