Collections including paper arxiv:2401.04088

- Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
  Paper • 1701.06538 • Published • 4
- Sparse Networks from Scratch: Faster Training without Losing Performance
  Paper • 1907.04840 • Published • 3
- ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
  Paper • 1910.02054 • Published • 3
- A Mixture of h-1 Heads is Better than h Heads
  Paper • 2005.06537 • Published • 2

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 31
- Efficient Estimation of Word Representations in Vector Space
  Paper • 1301.3781 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- Attention Is All You Need
  Paper • 1706.03762 • Published • 34

- ReAct: Synergizing Reasoning and Acting in Language Models
  Paper • 2210.03629 • Published • 12
- Attention Is All You Need
  Paper • 1706.03762 • Published • 34
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- Jamba: A Hybrid Transformer-Mamba Language Model
  Paper • 2403.19887 • Published • 99

- RARR: Researching and Revising What Language Models Say, Using Language Models
  Paper • 2210.08726 • Published • 1
- Hypothesis Search: Inductive Reasoning with Language Models
  Paper • 2309.05660 • Published • 1
- In-context Learning and Induction Heads
  Paper • 2209.11895 • Published • 2
- ReAct: Synergizing Reasoning and Acting in Language Models
  Paper • 2210.03629 • Published • 12

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 9
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 1

- Mixtral of Experts
  Paper • 2401.04088 • Published • 152
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 46
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
  Paper • 2401.04081 • Published • 68
- EdgeMoE: Fast On-Device Inference of MoE-based Large Language Models
  Paper • 2308.14352 • Published

- Nemotron-4 15B Technical Report
  Paper • 2402.16819 • Published • 40
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 49
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 10
- Reformer: The Efficient Transformer
  Paper • 2001.04451 • Published

- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 90
- How to Train Data-Efficient LLMs
  Paper • 2402.09668 • Published • 33
- BitDelta: Your Fine-Tune May Only Be Worth One Bit
  Paper • 2402.10193 • Published • 17
- A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
  Paper • 2402.09727 • Published • 35

- BlackMamba: Mixture of Experts for State-Space Models
  Paper • 2402.01771 • Published • 22
- OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
  Paper • 2402.01739 • Published • 26
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 46
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
  Paper • 2401.06066 • Published • 35