LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid Paper • 2502.07563 • Published Feb 11, 2025 • 24
You Only Scan Once: Efficient Multi-dimension Sequential Modeling with LightNet Paper • 2405.21022 • Published May 31, 2024
Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme Paper • 2504.02587 • Published Apr 2025 • 30
CO2: Efficient Distributed Training with Full Communication-Computation Overlap Paper • 2401.16265 • Published Jan 29, 2024 • 1
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention Paper • 2405.17381 • Published May 27, 2024
Scaling Laws for Linear Complexity Language Models Paper • 2406.16690 • Published Jun 24, 2024 • 23
MiniMax-01: Scaling Foundation Models with Lightning Attention Paper • 2501.08313 • Published Jan 14, 2025 • 286