Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference • arXiv:2401.08383 • Published Jan 16, 2024
The Case for Co-Designing Model Architectures with Hardware • arXiv:2401.14489 • Published Jan 25, 2024
Continual Pre-Training of Large Language Models: How to (re)warm your model? • arXiv:2308.04014 • Published Aug 8, 2023
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling • arXiv:2304.01373 • Published Apr 3, 2023
GPT-NeoX-20B: An Open-Source Autoregressive Language Model • arXiv:2204.06745 • Published Apr 14, 2022
BlackMamba: Mixture of Experts for State-Space Models • arXiv:2402.01771 • Published Feb 1, 2024