# Mixtral of Experts

arXiv:2401.04088v1 [cs.LG] 8 Jan 2024

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed

Abstract

We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combine their outputs. Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters, but only uses 13B active parameters during inference. Mixtral was trained with a context size of 32k tokens and it outperforms or matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks. We also provide a model fine-tuned to follow instructions, Mixtral 8x7B – Instruct, that surpasses GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and the Llama 2 70B – chat model on human benchmarks. Both the base and instruct models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/mixtral-of-experts/

# 1 Introduction

In this paper, we present Mixtral 8x7B, a sparse mixture of experts model (SMoE) with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B and GPT-3.5 on most benchmarks. As it only uses a subset of its parameters for every token, Mixtral allows faster inference speed at low batch sizes, and higher throughput at large batch sizes.

Mixtral is a sparse mixture-of-experts network. It is a decoder-only model where the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the
"experts") to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5 over several benchmarks.

Figure 1: Mixture of Experts Layer. Each input vector is assigned to 2 of the 8 experts by a router. The layer's output is the weighted sum of the outputs of the two selected experts. In Mixtral, an expert is a standard feedforward block as in a vanilla transformer architecture.

In particular, Mixtral demonstrates superior capabilities in mathematics, code generation, and tasks that require multilingual understanding, significantly outperforming Llama 2 70B in these domains. Experiments show that Mixtral is able to successfully retrieve information from its context window of 32k tokens, regardless of the sequence length and the location of the information in the sequence.

We also present Mixtral 8x7B –
Instruct, a chat model fine-tuned to follow instructions using supervised fine-tuning and Direct Preference Optimization [25]. Its performance notably surpasses that of GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and the Llama 2 70B – chat model on human evaluation benchmarks. Mixtral – Instruct also demonstrates reduced biases and a more balanced sentiment profile in benchmarks such as BBQ and BOLD.

We release both Mixtral 8x7B and Mixtral 8x7B – Instruct under the Apache 2.0 license¹, free for academic and commercial usage, ensuring broad accessibility and potential for diverse applications. To enable the community to run Mixtral with a fully open-source stack, we submitted changes to the vLLM project, which integrates Megablocks CUDA kernels for efficient inference. Skypilot also allows the deployment of vLLM endpoints on any instance in the cloud.

¹https://mistral.ai/news/mixtral-of-experts/
# 2 Architectural details

Mixtral is based on a transformer architecture [31] and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward blocks are replaced by Mixture-of-Expert layers (Section 2.1). The model architecture parameters are summarized in Table 1.

Parameter      Value
dim            4096
n_layers       32
head_dim       128
hidden_dim     14336
n_heads        32
n_kv_heads     8
context_len    32768
vocab_size     32000
num_experts    8
top_k_experts  2

Table 1: Model architecture.

# 2.1 Sparse Mixture of Experts

We present a brief overview of the Mixture of Experts layer (Figure 1). For a more in-depth overview, see [12]. The output of the MoE module for a given input x is determined by the weighted sum of the outputs of the expert networks, where the weights are given by the gating network's output. I.e., given n expert networks {E_0, E_1, ..., E_{n−1}}, the output of the expert layer is given by:

    Σ_{i=0}^{n−1} G(x)_i · E_i(x).

Here, G(x)_i denotes the i-th coordinate of the n-dimensional output of the gating network, and E_i(x) is the output of the i-th expert network. If the gating vector is sparse, we can avoid computing the outputs of experts whose gates are zero. There are multiple alternative ways of implementing G(x) [6, 15, 35], but a simple and performant one is implemented by taking the softmax over the Top-K logits of a linear layer [28]. We use

    G(x) := Softmax(TopK(x · W_g)),

where (TopK(ℓ))_i := ℓ_i if ℓ_i is among the top-K coordinates of logits ℓ ∈ ℝⁿ and (TopK(ℓ))_i := −∞ otherwise. The value of K – the number of experts used per token – is a hyper-parameter that modulates the amount of compute used to process each token. If one increases n while keeping K fixed, one can increase the model's parameter count while keeping its computational cost effectively constant. This motivates a distinction between the model's total parameter count (commonly referenced as the sparse parameter count), which grows with n, and the number of parameters used for processing an individual token (called the active parameter count), which grows with K up to n.

MoE layers can be run efficiently on single GPUs with high performance specialized kernels. For example, Megablocks [13] casts the feed-forward network (FFN) operations of the MoE layer as large sparse matrix multiplications, significantly enhancing the execution speed and naturally handling cases where different experts get a variable number of tokens assigned to them. Moreover, the MoE layer can be distributed to multiple GPUs through standard Model Parallelism techniques, and through a particular kind of partitioning strategy called Expert Parallelism (EP) [28]. During the MoE layer's execution, tokens meant to be processed by a specific expert are routed to the corresponding GPU for processing, and the expert's output is returned to the original token location. Note that EP introduces challenges in load balancing, as it is essential to distribute the workload evenly across the GPUs to prevent overloading individual GPUs or hitting computational bottlenecks.

In a Transformer model, the MoE layer is applied independently per token and replaces the feed-forward (FFN) sub-block of the transformer block. For Mixtral we use the same SwiGLU architecture as the expert function E_i(x) and set K = 2. This means each token is routed to two SwiGLU sub-blocks with different sets of weights. Taking this all together, the output y for an input token x is computed as:

    y = Σ_{i=0}^{n−1} Softmax(Top2(x · W_g))_i · SwiGLU_i(x).
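As a concrete illustration of the gating and expert equations above, here is a dependency-free sketch of a top-2 MoE forward pass for one token. It is pure Python rather than the PyTorch used in practice, and the expert callables merely stand in for SwiGLU feed-forward blocks:

```python
import math

def top2_gate(logits):
    """Softmax over the two largest router logits, as in Softmax(Top2(x · Wg)):
    all other logits are effectively set to -inf, so their softmax weights are
    exactly zero and those experts are never evaluated."""
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i] - logits[idx[0]]) for i in idx]  # stable softmax
    total = sum(exps)
    return idx, [e / total for e in exps]

def moe_forward(x, gate_logits, experts):
    """y = sum_i G(x)_i * E_i(x), computing only the two selected experts.
    `experts` is a list of callables standing in for SwiGLU blocks."""
    idx, weights = top2_gate(gate_logits)
    y = [0.0] * len(x)
    for i, w in zip(idx, weights):
        out = experts[i](x)                       # run the selected expert
        y = [acc + w * o for acc, o in zip(y, out)]  # weighted accumulation
    return y
```

With n = 8 and K = 2, this is the per-token computation of one Mixtral MoE layer: two of the eight experts run, and their outputs are combined with the router's softmax weights.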
This formulation is similar to the GShard architecture [21], with the exceptions that we replace all FFN sub-blocks by MoE layers while GShard replaces every other block, and that GShard uses a more elaborate gating strategy for the second expert assigned to each token.

# 3 Results
We compare Mixtral to Llama, and re-run all benchmarks with our own evaluation pipeline for fair comparison. We measure performance on a wide variety of tasks categorized as follows:

• Commonsense Reasoning (0-shot): Hellaswag [32], Winogrande [26], PIQA [3], SIQA [27], OpenbookQA [22], ARC-Easy, ARC-Challenge [8], CommonsenseQA [30]
• World Knowledge (5-shot): NaturalQuestions [20], TriviaQA [19]
• Reading Comprehension (0-shot): BoolQ [7], QuAC [5]
• Math: GSM8K [9] (8-shot) with maj@8 and MATH [17] (4-shot) with maj@4
• Code: Humaneval [4] (0-shot) and MBPP [1] (3-shot)
• Popular aggregated results: MMLU [16] (5-shot), BBH [29] (3-shot), and AGI Eval [34] (3-5-shot, English multiple-choice questions only)

Figure 2: Performance of Mixtral and different Llama models on a wide range of benchmarks. All models were re-evaluated on all metrics with our evaluation pipeline for accurate comparison. Mixtral outperforms or matches Llama 2 70B on all benchmarks. In particular, it is vastly superior in mathematics and code generation.
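The maj@8 and maj@4 protocols used above for GSM8K and MATH grade the most frequent of k sampled final answers; a minimal sketch (the sampling itself is out of scope here, and the tie-breaking convention is one common choice rather than the paper's stated one):

```python
from collections import Counter

def maj_at_k(sampled_answers):
    """maj@k: sample k answers per problem and keep the most frequent one.
    Counter.most_common breaks ties by first occurrence."""
    return Counter(sampled_answers).most_common(1)[0][0]

# e.g. 8 sampled final answers for one GSM8K problem
final = maj_at_k(["18", "18", "20", "18", "17", "18", "20", "18"])  # "18"
```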
Model         Active Params  MMLU   HellaS  WinoG  PIQA   Arc-e  Arc-c  NQ     TriQA  HumanE  MBPP   Math   GSM8K
LLaMA 2 7B    7B             44.4%  77.1%   69.5%  77.9%  68.7%  43.2%  17.5%  56.6%  11.6%   26.1%  3.9%   16.0%
LLaMA 2 13B   13B            55.6%  80.7%   72.9%  80.8%  75.2%  48.8%  16.7%  64.0%  18.9%   35.4%  6.0%   34.3%
LLaMA 1 34B   33B            56.8%  83.7%   76.2%  82.2%  79.6%  54.4%  24.1%  68.5%  25.0%   40.9%  8.4%   44.1%
LLaMA 2 70B   70B            69.9%  85.4%   80.4%  82.6%  79.9%  56.5%  25.4%  73.0%  29.3%   49.8%  13.8%  69.6%
Mistral 7B    7B             62.5%  81.0%   74.2%  82.2%  80.5%  54.9%  23.2%  62.5%  26.2%   50.2%  12.7%  50.0%
Mixtral 8x7B  13B            70.6%  84.4%   77.2%  83.6%  83.1%  59.7%  30.6%  71.5%  40.2%   60.7%  28.4%  74.4%

Table 2: Comparison of Mixtral with Llama. Mixtral outperforms or matches Llama 2 70B performance on almost all popular benchmarks while using 5x fewer active parameters during inference.
Figure 3: Results on MMLU, commonsense reasoning, world knowledge and reading comprehension, math and code for Mistral (7B/8x7B) vs Llama 2 (7B/13B/70B). Mixtral largely outperforms Llama 2 70B on all benchmarks, except on reading comprehension benchmarks, while using 5x lower active parameters. It is also vastly superior to Llama 2 70B on code and math.

Detailed results for Mixtral, Mistral 7B and Llama 2 7B/13B/70B and Llama 1 34B² are reported in Table 2. Figure 2 compares the performance of Mixtral with the Llama models in different categories. Mixtral surpasses Llama 2 70B across most metrics. In particular, Mixtral displays a superior performance in code and mathematics benchmarks.

Size and Efficiency. We compare our performance to the Llama 2 family, aiming to understand Mixtral models' efficiency in the cost-performance spectrum (see Figure 3). As a sparse Mixture-of-Experts model, Mixtral only uses 13B active parameters for each token. With 5x lower active parameters, Mixtral is able to outperform Llama 2 70B across most categories.

Note that this analysis focuses on the active parameter count (see Section 2.1), which is directly proportional to the inference compute cost, but does not consider the memory costs and hardware utilization.
The memory costs for serving Mixtral are proportional to its sparse parameter count, 47B, which is still smaller than Llama 2 70B. As for device utilization, we note that the SMoE layer introduces additional overhead due to the routing mechanism and due to the increased memory loads when running more than one expert per device. SMoE layers are thus more suitable for batched workloads where one can reach a good degree of arithmetic intensity.

Comparison with Llama 2 70B and GPT-3.5. In Table 3, we report the performance of Mixtral 8x7B compared to Llama 2 70B and GPT-3.5. We observe that Mixtral performs similarly or above the two other models. On MMLU, Mixtral obtains a better performance, despite its significantly smaller capacity (47B parameters compared to 70B). For MT Bench, we report the performance of the latest GPT-3.5-Turbo model available, gpt-3.5-turbo-1106.

²Since Llama 2 34B was not open-sourced, we report results for Llama 1 34B.
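As a back-of-the-envelope check, the 47B sparse / 13B active counts can be reproduced from the Table 1 dimensions under standard Llama-style shapes (grouped-query attention, three SwiGLU projections per expert, separate input embedding and output head); these shapes are an assumption, since the paper does not give a per-matrix breakdown:

```python
# Table 1 dimensions
dim, n_layers, hidden = 4096, 32, 14336
n_heads, n_kv_heads, head_dim = 32, 8, 128
vocab, num_experts, top_k = 32000, 8, 2

# Per-layer attention with grouped-query K/V heads: Wq, Wk, Wv, Wo
attn = dim * n_heads * head_dim          # Wq
attn += 2 * dim * n_kv_heads * head_dim  # Wk and Wv
attn += n_heads * head_dim * dim         # Wo

expert = 3 * dim * hidden                # SwiGLU: gate, up and down projections
router = dim * num_experts               # gating linear layer
embed = 2 * vocab * dim                  # input embedding + output head

sparse = embed + n_layers * (attn + router + num_experts * expert)
active = embed + n_layers * (attn + router + top_k * expert)
print(f"sparse ≈ {sparse / 1e9:.1f}B, active ≈ {active / 1e9:.1f}B")
# → sparse ≈ 46.7B, active ≈ 12.9B
```

Norm and bias parameters are omitted, which is why the totals land slightly under the rounded 47B/13B figures quoted in the text.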
                                LLaMA 2 70B  GPT-3.5  Mixtral 8x7B
MMLU (MCQ in 57 subjects)       69.9%        70.0%    70.6%
HellaSwag (10-shot)             87.1%        85.5%    86.7%
ARC Challenge (25-shot)         85.1%        85.2%    85.8%
WinoGrande (5-shot)             83.2%        81.6%    81.2%
MBPP (pass@1)                   49.8%        52.2%    60.7%
GSM-8K (5-shot)                 53.6%        57.1%    58.4%
MT Bench (for Instruct Models)  6.86         8.32     8.30

Table 3: Comparison of Mixtral with Llama 2 70B and GPT-3.5. Mixtral outperforms or matches Llama 2 70B and GPT-3.5 performance on most metrics.

Evaluation Differences. On some benchmarks, there are some differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset; 2) on TriviaQA, we do not provide Wikipedia contexts.

# 3.1 Multilingual benchmarks

Compared to Mistral 7B, we significantly upsample the proportion of multilingual data during pretraining. The extra capacity allows Mixtral to perform well on multilingual benchmarks while maintaining a high accuracy in English. In particular, Mixtral significantly outperforms Llama 2 70B in French, German, Spanish, and Italian, as shown in Table 4.
                             French                 German                 Spanish                Italian
Model         Active Params  Arc-c  HellaS  MMLU    Arc-c  HellaS  MMLU    Arc-c  HellaS  MMLU    Arc-c  HellaS  MMLU
LLaMA 1 34B   33B            42.9%  65.4%   49.0%   39.3%  68.1%   49.9%   41.1%  63.3%   48.7%   45.7%  69.8%   52.3%
LLaMA 2 70B   70B            49.9%  72.5%   64.3%   49.4%  70.9%   65.1%   47.3%  68.7%   64.2%   50.5%  74.5%   66.0%
Mixtral 8x7B  13B            58.2%  77.4%   70.9%   54.3%  73.0%   71.5%   55.4%  77.6%   72.5%   52.8%  75.1%   70.9%

Table 4: Comparison of Mixtral with Llama on Multilingual Benchmarks. On ARC Challenge, Hellaswag, and MMLU, Mixtral outperforms Llama 2 70B on 4 languages: French, German, Spanish, and Italian.
# 3.2 Long range performance

To assess the capabilities of Mixtral to tackle long context, we evaluate it on the passkey retrieval task introduced in [23], a synthetic task designed to measure the ability of the model to retrieve a passkey inserted randomly in a long prompt. Results in Figure 4 (Left) show that Mixtral achieves a 100% retrieval accuracy regardless of the context length or the position of the passkey in the sequence. Figure 4 (Right) shows that the perplexity of Mixtral on a subset of the proof-pile dataset [2] decreases monotonically as the size of the context increases.

Figure 4: Long range performance of Mixtral. (Left) Mixtral has 100% retrieval accuracy on the Passkey task regardless of the location of the passkey and length of the input sequence. (Right) The perplexity of Mixtral on the proof-pile dataset decreases monotonically as the context length increases.

# 3.3 Bias Benchmarks

To identify possible flaws to be corrected by fine-tuning / preference modeling, we measure the base model performance on Bias Benchmark for QA (BBQ) [24] and Bias in Open-Ended Language Generation Dataset (BOLD) [10].
BBQ is a dataset of hand-written question sets that target attested social biases against nine different socially-relevant categories: age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, and sexual orientation. BOLD is a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains.

                                  Llama 2 70B     Mixtral 8x7B
BBQ accuracy                      51.5%           56.0%
BOLD sentiment score (avg ± std)
  gender                          0.293 ± 0.073   0.323 ± 0.045
  profession                      0.218 ± 0.073   0.243 ± 0.087
  religious_ideology              0.188 ± 0.133   0.144 ± 0.089
  political_ideology              0.149 ± 0.140   0.186 ± 0.146
  race                            0.232 ± 0.049   0.232 ± 0.052

Figure 5: Bias Benchmarks. Compared to Llama 2 70B, Mixtral presents less bias (higher accuracy on BBQ, lower std on BOLD) and displays more positive sentiment (higher avg on BOLD).

We benchmark Llama 2 and Mixtral on BBQ and BOLD with our evaluation framework and report the results in Figure 5. Compared to Llama 2, Mixtral presents less bias on the BBQ benchmark (56.0% vs 51.5%). For each group in BOLD, a higher average sentiment score means more positive sentiments and a lower standard deviation indicates less bias within the group. Overall, Mixtral displays more positive sentiments than Llama 2, with similar variances within each group.
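How the avg ± std entries in Figure 5 are read can be sketched in a few lines; the scores below are made-up illustrative numbers, and the exact subgroup aggregation follows [10] rather than this sketch:

```python
from statistics import mean, stdev

def bold_group_summary(sentiment_scores):
    """Summarize BOLD sentiment for one demographic group: the average
    (higher = more positive sentiment) and the standard deviation across
    the group (lower = less bias within the group)."""
    return mean(sentiment_scores), stdev(sentiment_scores)

# hypothetical per-prompt sentiment scores for one group
avg, std = bold_group_summary([0.30, 0.34, 0.33, 0.32, 0.31])
```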
# 4 Instruction Fine-tuning

We train Mixtral – Instruct using supervised fine-tuning (SFT) on an instruction dataset followed by Direct Preference Optimization (DPO) [25] on a paired feedback dataset. Mixtral – Instruct reaches a score of 8.30 on MT-Bench [33] (see Table 3), making it the best open-weights model as of December 2023. Independent human evaluation conducted by LMSys is reported in Figure 6³ and shows that Mixtral – Instruct outperforms GPT-3.5-Turbo, Gemini Pro, Claude-2.1, and Llama 2 70B chat.
Figure 6: LMSys Leaderboard. (Screenshot from Dec 22, 2023) Mixtral 8x7B Instruct v0.1 achieves an Arena Elo rating of 1121, outperforming Claude-2.1 (1117), all versions of GPT-3.5-Turbo (1117 best), Gemini Pro (1111), and Llama-2-70b-chat (1077). Mixtral is currently the best open-weights model by a large margin.

³https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard

# 5 Routing analysis

In this section, we perform a small analysis on the expert selection by the router. In particular, we are interested to see if, during training, some experts specialized to specific domains (e.g. mathematics, biology, philosophy, etc.).

To investigate this, we measure the distribution of selected experts on different subsets of The Pile validation dataset [14]. Results are presented in Figure 7, for layers 0, 15, and 31 (layers 0 and 31 respectively being the first and the last layers of the model). Surprisingly, we do not observe obvious patterns in the assignment of experts based on the topic. For instance, at all layers, the distribution of expert assignment is very similar for ArXiv papers (written in LaTeX), for biology (PubMed Abstracts), and for philosophy (PhilPapers) documents. Only for DM Mathematics do we note a marginally different distribution of experts. This divergence is likely a consequence of the dataset's synthetic nature and its limited coverage of the natural language spectrum, and is particularly noticeable at the first and last layers, where the hidden states are very correlated to the input and output embeddings respectively.

This suggests that the router does exhibit some structured syntactic behavior. Figure 8 shows examples of text from different domains (Python code, mathematics, and English), where each token is highlighted with a background color corresponding to its selected expert. The figure shows that words such as "self" in Python and "Question"
in English often get routed through the same expert even though they involve multiple tokens. Similarly, in code, the indentation tokens are always assigned to the same experts, particularly at the first and last layers where the hidden states are more correlated to the input and output of the model.

We also note from Figure 8 that consecutive tokens are often assigned the same experts. In fact, we observe some degree of positional locality in The Pile datasets. Table 5 shows the proportion of consecutive tokens that get the same expert assignments per domain and layer.

Figure 7: Proportion of tokens assigned to each expert on different domains from The Pile dataset for layers 0, 15, and 31. The gray dashed vertical line marks 1/8, i.e. the proportion expected with uniform sampling. Here, we consider experts that are either selected as a first or second choice by the router. A breakdown of the proportion of assignments done in each case can be seen in Figure 9 in the Appendix.
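The per-expert proportions plotted in Figure 7 can be recomputed from logged router choices with a routine like the following; `top2_per_token` is a hypothetical log of the router's two choices per token, not an artifact released with the paper:

```python
from collections import Counter

def selection_proportions(top2_per_token, num_experts=8):
    """Share of routing assignments going to each expert, counting both the
    first and the second choice (each token contributes two assignments).
    Under uniform routing every expert receives 1/8 of the assignments."""
    counts = Counter(e for pair in top2_per_token for e in pair)
    total = sum(counts.values())  # two assignments per token
    return [counts[e] / total for e in range(num_experts)]

# toy log: the router's (first, second) choices for three tokens
props = selection_proportions([(0, 1), (0, 2), (1, 3)])
```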
                  First choice                    First or second choice
                  Layer 0  Layer 15  Layer 31     Layer 0  Layer 15  Layer 31
ArXiv             14.0%    27.9%     22.7%        46.5%    62.3%     52.9%
DM Mathematics    14.1%    28.4%     19.7%        44.9%    67.0%     44.5%
Github            14.9%    28.1%     19.7%        49.9%    66.9%     49.2%
Gutenberg         13.9%    26.1%     26.3%        49.5%    63.1%     52.2%
PhilPapers        13.6%    25.3%     22.1%        46.9%    61.9%     51.3%
PubMed Abstracts  14.2%    24.6%     22.0%        48.6%    61.6%     51.8%
StackExchange     13.6%    27.2%     23.6%        48.2%    64.6%     53.6%
Wikipedia (en)    14.4%    23.6%     25.3%        49.8%    62.1%     51.8%

Table 5: Percentage of expert assignment repetitions. We evaluate the proportion of times the same expert is assigned to a token i and its following token i+1. We report whether the first chosen expert is the same, or whether the same expert is observed as first or second choice in consecutive tokens. For reference, the expected proportion of repetitions in the case of random assignments is 1/8 ≈ 12.5% for "First choice" and 1 − (6·5)/(8·7) ≈ 46% for "First and second choice".
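The ≈46% random baseline reported for "First and second choice" follows from simple counting, assuming consecutive tokens draw independent, uniformly random 2-subsets of the 8 experts:

```python
from math import comb

n, k = 8, 2  # experts per layer, experts selected per token
p_first = 1 / n                           # same first choice twice in a row
p_top2 = 1 - comb(n - k, k) / comb(n, k)  # the two top-2 sets intersect
print(f"first: {p_first:.1%}, first-or-second: {p_top2:.1%}")
# → first: 12.5%, first-or-second: 46.4%
```

The intersection probability is one minus the chance that the second token's two experts both avoid the first token's two: 1 − C(6,2)/C(8,2) = 13/28 ≈ 46.4%.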
Repetitions at the first layer are close to random, but are significantly higher at layers 15 and 31. The high number of repetitions shows that expert choice exhibits high temporal locality at these layers.

The proportion of repeated consecutive assignments is significantly higher than random for higher layers. This has implications in how one might optimize the model for fast training and inference. For example, cases with high locality are more likely to cause over-subscription of certain experts when doing Expert Parallelism. Conversely, this locality can be leveraged for caching, as is done in [11]. A more complete view of these same expert frequencies is provided for all layers and across datasets in Figure 10 in the Appendix.

# 6 Conclusion

In this paper, we introduced Mixtral 8x7B, the first mixture-of-experts network to reach state-of-the-art performance among open-source models. Mixtral 8x7B Instruct outperforms Claude-2.1, Gemini Pro, and GPT-3.5 Turbo on human evaluation benchmarks. Because it only uses two experts at each time step, Mixtral only uses 13B active parameters per token while outperforming the previous best model using 70B parameters per token (Llama 2 70B). We are making our trained and fine-tuned models publicly available under the Apache 2.0 license. By sharing our models, we aim to facilitate the development of new techniques and applications that can benefit a wide range of industries and domains.
Figure 8: Text samples where each token is colored with the first expert choice. The selection of experts appears to be more aligned with the syntax rather than the domain, especially at the initial and final layers.
# Acknowledgements

We thank the CoreWeave and Scaleway teams for technical support as we trained our models. We are grateful to NVIDIA for supporting us in integrating TensorRT-LLM and Triton and working alongside us to make a sparse mixture of experts compatible with TensorRT-LLM.

# References

[1] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

[2] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.

[3] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 7432–7439, 2020.

[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[5] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer.
Quac: Question answering in context. arXiv preprint arXiv:1808.07036, 2018.

[6] Aidan Clark, Diego De Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al. Unified scaling laws for routed language models. In International Conference on Machine Learning, pages 4057–4086. PMLR, 2022.

[7] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova.
Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.

[8] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457, 2018.

[9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[10] Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta.
Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 862–872, 2021. [11] Artyom Eliseev and Denis Mazur. Fast inference of mixture-of-experts language models with offloading. arXiv preprint arXiv:2312.17238, 2023. [12] William Fedus, Jeff Dean, and Barret Zoph. A review of sparse expert models in deep learning. arXiv preprint arXiv:2209.01667, 2022. [13] Trevor Gale, Deepak Narayanan, Cliff Young, and Matei Zaharia. Megablocks: Efficient sparse training with mixture-of-experts. arXiv preprint arXiv:2211.15841, 2022.
[14] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. [15] Hussein Hazimeh, Zhe Zhao, Aakanksha Chowdhery, Maheswaran Sathiamoorthy, Yihua Chen, Rahul Mazumder, Lichan Hong, and Ed Chi.
Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning. Advances in Neural Information Processing Systems, 34:29335–29347, 2021. [16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. [17] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. [18] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. [19] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer.
Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017. [20] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, pages 453–466, 2019. [21] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020. [22] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. [23] Amirkeivan Mohtashami and Martin Jaggi.
Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300, 2023. [24] Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. Bbq: A hand-built bias benchmark for question answering. arXiv preprint arXiv:2110.08193, 2021. [25] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.
[26] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, pages 99–106, 2021. [27] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019.
[28] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. [29] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. [30] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018. [31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.
Attention is all you need. Advances in neural information processing systems, 30, 2017. [32] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. [33] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
[34] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023. [35] Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M Dai, Quoc V Le, James Laudon, et al. Mixture-of-experts with expert choice routing. Advances in Neural Information Processing Systems, 35:7103–7114, 2022.
Figure 9: Proportion of tokens assigned to each expert on different subsets from The Pile dataset (ArXiv, DM Mathematics, Github, Gutenberg, PhilPapers, PubMed Abstracts, StackExchange, Wikipedia (en)), separated by whether the expert was selected as first or second choice, or either. The "Either choice" case is equivalent to Figure 7. The gray dashed vertical line marks 1/8, the proportion expected under uniform assignment.
Figure 10: Repeated consecutive assignments per MoE layer. Repeated assignments occur a lot more often than they would with uniform assignments (materialized by the dashed lines). Patterns are similar across datasets, with fewer repetitions for DM Mathematics.
arXiv:2312.17238v1 [cs.LG] 28 Dec 2023

# Fast Inference of Mixture-of-Experts Language Models with Offloading

Artyom Eliseev
Moscow Institute of Physics and Technology
Yandex School of Data Analysis
lavawolfiee@gmail.com

Denis Mazur
Moscow Institute of Physics and Technology
Yandex Researchcore
denismazur8@gmail.com

# Abstract
With the widespread adoption of Large Language Models (LLMs), many deep learning practitioners are looking for strategies for running these models more efficiently. One such strategy is to use sparse Mixture-of-Experts (MoE), a type of model architecture in which only a fraction of model layers are active for any given input. This property allows MoE-based language models to generate tokens faster than their "dense" counterparts, but it also increases model size due to having multiple "experts". Unfortunately, this makes state-of-the-art MoE language models difficult to run without high-end GPUs. In this work, we study the problem of running large MoE language models on consumer hardware with limited accelerator memory. We build upon parameter offloading algorithms and propose a novel strategy that accelerates offloading by taking advantage of innate properties of MoE LLMs. Using this strategy, we can run Mixtral-8x7B with mixed quantization on desktop hardware and free-tier Google Colab instances.

# Introduction

Many recent advances in natural language processing rely on large pre-trained language models, such as GPT-3 and 4 Brown et al. (2020); OpenAI (2023), Palm & Gemini Chowdhery et al. (2022); Team et al. (2023) and many others. However, the rapid scientific progress in this area would be impossible without open-access LLMs such as LLaMA 1 and 2 (Touvron et al., 2023), Falcon (TII UAE, 2023), BLOOM (Scao et al., 2022), OPT (Zhang et al., 2022), or NeoX/Pythia (Biderman et al., 2023). The key advantage of open-access LLMs is that researchers can deploy them locally and modify them in ways that would be impossible with proprietary APIs. Even though LLM parameters are openly available, it is still difficult to use these models due to their sheer size. State-of-the-art open-access language models require multiple high-end GPUs1 even for basic inference workloads.
To use these LLMs on more affordable hardware setups, one must either compress model parameters (Dettmers et al., 2022; Frantar et al., 2022) or offload parameters to cheaper storage, be it RAM or SSD (Pudipeddi et al., 2020; Sheng et al., 2023). Several recent works modify the transformer architecture by introducing sparse Mixture-of-Experts blocks (Jacobs et al., 1991; Shazeer et al., 2017). MoE blocks contain multiple "experts" (layers), as well as a "gating function" that selects which experts are used on a given input. As a result, the MoE block uses a small portion of all "experts"
for any single forward pass, allowing for more compute-efficient training Fedus et al. (2021); Du et al. (2022). Notably, MoEs are among the largest Fedus et al. (2021) and among the best Mixtral AI team (2023) of available LLMs. While Mixture-of-Experts models can be more efficient than their dense counterparts, many techniques for efficient LLM inference were not designed with MoE in mind and perform suboptimally on modern large language models that use mixture-of-experts layers. 1When deployed in 16-bit precision, Falcon-180B needs approximately 360GB, while LLaMA-2 70B requires 140GB of combined accelerator memory. In this work, we systematically develop techniques for running large MoE language models with limited GPU memory. Our main objective is inferencing (generating tokens) with Mixtral-8x7B- Instruct â a MoE-based chat assistant â on a desktop-grade hardware where only a fraction of experts fit into the accelerator memory. To that end: we observe how MoE language model accesses its experts between tokens, and find several regularities: i) some experts are reused between adjacent tokens and ii) the model hidden states of early layers already â knowâ which experts are to be used at subsequent layers. â ¢ we design a MoE-specific offloading strategy that takes advantage of these regularities: i) it uses LRU cache to significantly reduces GPU-RAM communication, leading to faster generation and ii) it guesses which experts are needed ahead of time to better overlap expert loading with computation.
â ¢ we consider the specific scenario of running Mixtral-8x7B-Instruct on a T4, RTX 3060 and RTX 3080 Mobile and develop a practical combination of mixed quantization and the proposed offloading algorithm to run this model interactively at 2-3 tokens per second depending on the hardware. The source code with our implementation is available online2 # 2 Background & Related Work # 2.1 Mixture-of-Experts The recent surge in MoE language models builds on a relatively old idea (Jacobs et al., 1991; Jordan & Jacobs, 1994) of training ensembles of specialized models (â expertsâ ) and a gating function to select the right expert for the task. To achieve specialization, Mixture-of-Experts learn by simultaneously i) training the gating function to choose the best experts and ii) training the experts themselves on samples assigned to them by the gating function. Since then, many different MoE variants emerged, including mixture of SVM models (Collobert et al., 2002), Dirichlet processes (Shahbaba & Neal, 2009) and various neural networks. Shazeer et al. (2017) builds on this idea to train a sparsely gated Mixture-of-Experts to serve as a language model. The full model consists of a recurrent neural network backbone and a MoE module with up to 131072 experts. When processing a given token, a linear gating function select 4 most suitable experts based on the latest hidden state. The resulting model (including the gating function and experts) is trained end-to-end to minimize cross-entropy, with an additional regularizer to promote equal expert utilization. Shazeer et al. (2017) observed that the MoE model not only improves perplexity, but also learns interpretable expert specializations: some experts would â specializeâ on prepositions, while others learn to express a particular concept (e.g. speed). Since then, several lines of work explore Mixture-of-Experts with Transformer-based language models for machine translation Lepikhin et al. 
(2020), masked language modeling Fedus et al. (2021), general-purpose LLMs Du et al. (2022) and others.
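The token-level sparse gating described above (a linear gate scores all experts, and only the top-k are evaluated per token) can be sketched in a few lines of NumPy. The dimensions, the ReLU expert MLPs, and the top-2 routing below are toy stand-ins for illustration, not any specific model's configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SparseMoE:
    """Toy sparse MoE feedforward block with a linear top-k gate."""
    def __init__(self, d_model=16, d_ff=32, n_experts=8, k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w_gate = rng.normal(size=(d_model, n_experts))
        self.w_in = rng.normal(size=(n_experts, d_model, d_ff)) * 0.1
        self.w_out = rng.normal(size=(n_experts, d_ff, d_model)) * 0.1
        self.k = k

    def forward(self, h):
        """Route one token's hidden state through its top-k experts."""
        logits = h @ self.w_gate
        top = np.argsort(logits)[-self.k:]   # indices of the k best-scoring experts
        gates = softmax(logits[top])         # renormalize gate weights over them
        out = np.zeros_like(h)
        for g, e in zip(gates, top):         # only k of n_experts ever run
            out += g * (np.maximum(h @ self.w_in[e], 0.0) @ self.w_out[e])
        return out, {int(e) for e in top}

moe = SparseMoE()
h = np.random.default_rng(1).normal(size=16)
out, used = moe.forward(h)
print(out.shape, used)   # only 2 of the 8 experts ran for this token
```

With k=2 of 8 experts, only a quarter of the expert parameters participate in each token's forward pass, which is the source of the compute savings discussed above.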
Most of these models follow the traditional (dense) Transformer architecture for embeddings and attention layers, and only use mixtures for the feedforward (MLP) blocks, with a linear token-level gating function. A common observation across most of these works is that MoE models are cheaper to train and inference Fedus et al. (2021); Lepikhin et al. (2020), but require more parameters than a dense model with equivalent perplexity. Pre-trained Mixture-of-Experts LLMs have been openly available for over a year3. However, these models seem to have gained less traction than equivalent dense models, arguably because their sheer model size (over a trillion parameters) makes them difficult to use. Most recently, Mistral AI released a family of sparse Mixture of Experts models called Mixtral-8x7B with near state-of-the-art performance Mixtral AI team (2023). This model has already inspired several follow-up works and practical applications, but it still requires a high-end GPU accelerator.

# 2.2 Post-training Quantization of LLMs

A natural way to circumvent this is to reduce the model size through quantization (Nagel et al., 2020; Gholami et al., 2021; Frantar et al., 2022), sparsification Frantar & Alistarh (2023a); Ma et al. (2023), 2https://github.com/dvmazur/mixtral-offloading 3https://huggingface.co/google/switch-c-2048, released on November 15th, 2022
factorization Hsu et al. (2022), or a combination thereof. These compression types are not specific to LLMs and are based on much older methods outside the scope of our work4. However, recent works found that there are unique challenges to quantizing very large transformer-based language models due to emergent outliers Dettmers et al. (2022); Lin et al. (2023); Dettmers et al. (2023). Generally speaking, the optimal compression rate for most LLMs is 4 bits per parameter Dettmers & Zettlemoyer (2022). While there are more extreme algorithms for 3- and even 2-bit compression Chee et al. (2023); Lin et al. (2023); Dettmers et al. (2023), they are typically inferior to choosing a smaller model and quantizing it to around 4 bits. Most recently, there have been several concurrent works for quantizing Mixture-of-Experts models (Kim et al., 2023; Frantar & Alistarh, 2023b).

# 2.3 Inference with Parameter Offloading

A recent line of work explores inferencing and training large models with limited accelerator memory by "offloading" their parameters to another, cheaper memory, such as system RAM or even SSD (Pudipeddi et al., 2020; Ren et al., 2021). This technique works by loading model parameters just-in-time when they are needed for computation. Since most deep learning models use layers in a fixed order, offloading can pre-dispatch the next layer's parameters in the background, ahead of time. This technique works particularly well when processing large batches of data, during training Pudipeddi et al. (2020); Ren et al. (2021) or large-batch non-interactive inference Aminabadi et al. (2022); Sheng et al. (2023), where each layer processes a lot of tokens each time the layer is loaded from RAM. In turn, when doing interactive inference (e.g. as a chat assistant), offloading works significantly slower than on-device inference. This is because interactive inference generates tokens autoregressively, from left to right.
This way, the inference system processes one or a few tokens at a time, and therefore spends most of the time waiting for the next layer's parameters to be loaded.

# 2.4 Hardware Setup

While our analysis is not specific to any hardware setup, we target the hardware specifications of cheap / free-tier cloud instances Google (2023) and the upper half of gaming computers Steam (2023): i) enough system memory to hold model parameters, ii) a GPU with 11-16GB VRAM and iii) host-to-device communication at 8-16GB/s (PCIe Gen.3). If we examine popular open-access MoE models (Mixtral-8x7B and switch-c-2048), we find that all non-experts fit into a fraction of available GPU memory. In turn, the experts, which constitute the vast majority of model parameters, do not fit even with quantization. Finally, even if we could fit the model parameters in memory, running generative inference requires additional memory for layer activations and past attention keys & values.
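A back-of-the-envelope estimate makes the problem concrete. Assuming Mixtral-8x7B-like shapes (32 layers, 2 active experts per layer, each expert consisting of three 4096×14336 projection matrices; these figures are assumptions used here for illustration), naive 16-bit offloading must move roughly 22 GB over PCIe for every generated token:

```python
layers, active_per_layer = 32, 2
d_model, d_ff = 4096, 14336
expert_params = 3 * d_model * d_ff          # three projection matrices per expert
bytes_fp16 = 2

bytes_per_token = layers * active_per_layer * expert_params * bytes_fp16
pcie_bytes_per_s = 12e9                     # mid-range of the 8-16 GB/s above

print(round(bytes_per_token / 1e9, 1))              # ~22.5 GB moved per token
print(round(bytes_per_token / pcie_bytes_per_s, 2))  # ~1.88 s of transfer alone
```

Compression and caching attack exactly this number: 2-3 bit experts shrink the bytes moved per token severalfold, and every cached expert that is reused avoids its transfer entirely.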
# 3 Method

In this work, we aim to systematically find the optimal way to inference modern Mixture-of-Experts LLMs on desktop or low-end cloud instances. More specifically, we focus on the task of generating tokens interactively, i.e. generating multiple tokens per second at batch size 1. The generative inference workload consists of two phases: 1) encoding the input prompt and 2) generating tokens conditioned on that prompt. The key difference between these two phases is that prompt tokens are encoded in parallel (layer-by-layer), whereas the generation runs sequentially (token-by-token and layer-by-layer). In general, phase 1 works relatively well with existing Mixture-of-Experts algorithms, since each layer only needs to be loaded once for the entire prompt. In turn, when generating tokens, one must load each layer once per generated token. In practice, this means that inference speed is limited by how fast one can fetch parameters from system memory. Below, we look for patterns in how the MoE model loads its experts and propose ways to exploit these patterns to speed up inference time.

4To learn more about these methods, please refer to surveys such as Gholami et al. (2021); Liang et al. (2021)
5As opposed to processing a large batch of texts over many seconds, as in Sheng et al. (2023)
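The asymmetry between the two phases can be counted directly. In a sketch with assumed sizes (32 layers, a 512-token prompt, 128 generated tokens), prompt encoding fetches each layer's weights once, while generation fetches every layer once per token:

```python
def layer_loads(num_layers, prompt_tokens, gen_tokens):
    # Prompt phase: all prompt tokens pass through each layer together,
    # so each layer's weights are fetched from RAM once.
    prompt_loads = num_layers
    # Generation phase: tokens are produced one at a time, so every layer
    # is fetched again for every generated token.
    gen_loads = num_layers * gen_tokens
    return prompt_loads, gen_loads

print(layer_loads(32, prompt_tokens=512, gen_tokens=128))  # (32, 4096)
```

The 128× gap in weight fetches (4096 vs. 32 in this sketch) is why generation, not prompt encoding, dominates offloading latency.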
Figure 1: An example of expert loading pattern in Mixtral-8x7B-Instruct for select layers. Blue cells indicate that a certain expert was active when encoding a certain token; deeper blue indicates higher gating weight. Small gray squares show which experts are cached with an LRU cache for k=2.

# 3.1 Expert Locality and LRU caching

As we discussed earlier in Section 2.1, Mixture-of-Experts language models were often observed to assign individual experts to distinct sub-tasks. However, this does not mean that the model uses the same expert over long stretches of tokens. Instead, some experts are active in short sequences of 2-4 tokens, while others are often used with "gaps", as shown in Figure 1. To take advantage of this pattern, we can keep active experts in GPU memory as a "cache" for future tokens. If the same experts are activated again in the future, they will be available instantaneously. Naturally, the number of experts that can be stored this way is very limited by the available GPU memory. For simplicity, we choose to always keep the k most recently used experts as a type of LRU cache. If k is greater than the number of active experts, the cache will save experts from multiple previous tokens. For simplicity, we keep the same number of cached experts for each MoE layer. We illustrate an example of how the LRU cache saves experts in Figure 1 (see caption). LRU is a very simple strategy that does not consider factors like expert activation frequencies, varying cache size between MoE layers, or any sequential patterns in expert activation. However, we found that even this simple strategy can significantly speed up inference for modern Mixture-of-Experts models such as Mixtral-8x7B (see Section 4 for detailed evaluation).
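The policy can be sketched with an `OrderedDict` per MoE layer. This is a simplified stand-in: real entries would be expert weight buffers, and a miss would trigger a host-to-device copy:

```python
from collections import OrderedDict

class ExpertLRUCache:
    """Keep the k most recently used expert ids of one MoE layer on-device."""
    def __init__(self, k):
        self.k = k
        self.cache = OrderedDict()   # expert_id -> stand-in for GPU weights

    def access(self, expert_id):
        """Return True on a cache hit; update recency either way."""
        hit = expert_id in self.cache
        if hit:
            self.cache.move_to_end(expert_id)
        else:
            self.cache[expert_id] = f"weights_{expert_id}"  # load from RAM here
            if len(self.cache) > self.k:
                self.cache.popitem(last=False)  # evict the least recently used
        return hit

cache = ExpertLRUCache(k=2)
trace = [3, 5, 3, 5, 7, 3]        # experts activated on consecutive tokens
hits = [cache.access(e) for e in trace]
print(hits)   # [False, False, True, True, False, False]
```

Even this tiny trace shows the effect described above: short runs of reuse (the repeated 3 and 5) turn into hits, while a new expert (7) evicts the stalest entry.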
# 3.2 Speculative Expert Loading While LRU caching can reduce the average expert loading time, most of the inference time is still spent waiting for the next expert to be loaded.
The reason behind this is that, unlike with dense models, MoE offloading cannot effectively overlap expert loading with computation. To understand this problem, let us zoom into the process of generating a single token, layer-by-layer. The full compute workload starts by embedding the previous token via look-up, then alternates between running self-attention and MLP for each transformer block in the model. Finally, the outputs from the last transformer block are used to predict next-token logits with a linear projection. For regular (dense) models, this architecture allows for an efficient offloading schedule that pre-loads the next transformer layer ahead of time, while the previous layer is still running. Unfortunately, this schedule is no longer possible for Mixture-of-Experts models, where MoE MLP layers choose which experts to load just-in-time for computation. This is because the system cannot pre-fetch the next layer until it learns which experts should be loaded. Modern open-access MoE language models choose active experts using the final outputs of the previous layer, which means they cannot be pre-fetched in parallel with the previous layer. While it is not possible6 to reliably prefetch the next set of experts ahead of time, the system could still try to guess the likely next experts and load them speculatively, while processing the previous layer. If the guess is correct, it will speed up the next layer's inference; if not, it can load the actual next layer's experts later. In other words, this type of speculative loading does not change the final model predictions, but may reduce latency if the guess is accurate enough.

6More specifically, not possible without changing the model architecture, which would require re-training
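One cheap guess, developed in the following paragraph, is to apply the next layer's gating function to the current layer's hidden state. Because transformer layers are residual, successive hidden states stay close, so this early guess often matches the true top-2. A toy NumPy illustration with random stand-in gate matrices (not real model weights):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 64, 8
next_gate = rng.normal(size=(d_model, n_experts))   # gate of MoE layer i+1

def top2(h):
    return set(np.argsort(h @ next_gate)[-2:])

h_current = rng.normal(size=d_model)        # hidden state entering layer i
guess = top2(h_current)                     # speculate while layer i still runs

residual = 0.05 * rng.normal(size=d_model)  # layer i's (small) residual update
actual = top2(h_current + residual)         # experts layer i+1 really needs

print(guess, actual)   # with a small residual update, the guess usually matches
```

If the guess was right, the experts are already on-device when layer i+1 starts; if not, the correct experts are loaded on demand, so only latency (never the output) is affected.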
While analyzing modern MoE models, we found that it is possible to get an accurate guess of the next layer's experts by applying the next layer's gating function to the previous layer's hidden states, or, more specifically, to the same hidden states that are used by the previous MoE layer's gating function. This heuristic relies on the fact that transformer layers are residual, i.e. each layer adds to the previous hidden states instead of re-computing them from scratch. This architecture introduces an inductive bias that turns any layer's hidden states into a decent estimate of the next layer's hidden states.

# 3.3 System Design & Implementation Details

In this section, we describe practical design considerations and implementation details that we used for inferencing MoE language models on consumer and low-end cloud hardware. Our system design combines the caching & prefetching techniques and a mixed MoE quantization scheme.

MoE quantization. As we described earlier in Section 2.2, there are multiple weight quantization algorithms optimized for LLMs. Model compression has a natural synergy with offloading because compressed models take less time to load onto the GPU. In our experiments, we also observed that MoE models get better quality-size trade-offs when quantizing experts to a lower bitwidth, while keeping all non-expert layers at 4-bit. We use Half Quadratic Quantization (HQQ) (Badri & Shaji, 2023), a data-free quantization algorithm that supports a variety of bit rates. However, we chose this algorithm only for convenience, because it was already well tested for Mixtral models. Since our analysis does not rely on any specific choice of quantization, we believe that if we chose another quantization algorithm (e.g. GPTQ or AWQ) our conclusions would be similar. In our early experiments, we also tried the sub-1-bit quantization from QMoE Frantar & Alistarh (2023b) that worked well on the Switch-c-2048 model.
However, we found that sub-1-bit compression caused too significant a loss in perplexity for Mixtral-8x7B models.

Expert Offloading. As described earlier, we use an LRU cache with an equal number k of cached experts per layer. For Mixtral-8x7B, we use k=2 for 12GB GPUs and k=4 for 16GB ones.
We trigger speculative expert loading immediately after the system finishes loading all experts for the current layer. The speculative expert loading fetches the 1–2 most likely experts. The newly loaded experts do not replace the currently cached experts. If a speculatively loaded expert is later used during next-layer inference, it will replace the least recently used expert from the next layer's cache. Many consumer devices and free-tier cloud instances have limited host RAM that cannot fit the entire model7. In these cases, the experts must be split between host and device memory. To support this, our implementation of the expert LRU cache splits experts between host and GPU devices. When loading an expert to the GPU cache, the system also offloads the least recently used on-device expert back to RAM so as to preserve memory parity. To speed up offloading in practice, we allocate all expert parameters in a contiguous memory buffer that can be moved as a single host-to-device copy. For host-side (RAM) experts, we pin8 this memory buffer for faster communication. Our implementation additionally allocates b=4 on-device buffers used to copy and prefetch experts asynchronously, without modifying existing experts. These buffers are shared between all MoE layers to reduce memory footprint. Overall, the system requires num_layers × num_experts expert memory buffers split between host and device memory and b=4 temporary buffers, the size of each buffer being equal to a single expert.

# 4 Experiments

In this section, we verify our earlier hypotheses about MoE behavior and benchmark the inference latency in different conditions. We focus our evaluations on the Mixtral-8x7B and Mixtral-8x7B-Instruct models since they represent the current state of the art among open-access MoE models.
We organize this section as follows: Section 4.1 measures the effectiveness of expert caching and pre-loading in isolation, Section 4.2 compares different model compression algorithms and verifies our hypotheses from Section 3.3. Finally, Section 4.3 measures the inference latency in several hardware setups.

7Notably, Google Colab RAM cannot fit Mixtral-8x7B with a reasonable compression rate.
8This corresponds to the tensor.pin_memory() command in PyTorch.
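The cache-parity rule from Section 3.3 (every expert brought onto the GPU pushes the least recently used on-device expert back to RAM) can be sketched as follows. This is a pure-Python stand-in: in the actual system the entries would be the pinned host buffers and preallocated device buffers described above:

```python
class SplitExpertPool:
    """Experts split between device and host; the device slot count stays fixed."""
    def __init__(self, expert_ids, device_slots):
        self.on_device = list(expert_ids[:device_slots])  # most recent at the end
        self.on_host = set(expert_ids[device_slots:])

    def fetch(self, expert_id):
        if expert_id in self.on_device:       # cache hit: just refresh recency
            self.on_device.remove(expert_id)
            self.on_device.append(expert_id)
            return "hit"
        evicted = self.on_device.pop(0)       # least recently used device expert
        self.on_host.add(evicted)             # device -> host (offload)
        self.on_host.remove(expert_id)        # host -> device (load)
        self.on_device.append(expert_id)
        return "miss"

pool = SplitExpertPool(expert_ids=list(range(8)), device_slots=2)
results = [pool.fetch(e) for e in (1, 5, 1, 0)]
print(results, pool.on_device)   # ['hit', 'miss', 'hit', 'miss'] [1, 0]
```

Because each miss pairs one host-to-device load with one device-to-host eviction, the number of experts resident on the GPU never changes, which keeps memory usage predictable.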
Figure 2: (left) LRU cache hit ratio for different cache sizes k; (right) speculative loading recall when pre-loading a different number of experts. Regular lines represent loading 1 layer ahead; dashed lines stand for 2 layers ahead; dotted lines are 10 layers ahead.

# 4.1 Expert LRU Cache and Speculative Loading

In this section, we benchmark the effectiveness of the two expert offloading strategies: LRU caching and speculative loading, as defined in Sections 3.1 and 3.2 respectively. For this evaluation, we measure "expert recall", the fraction of times when an expert needed for inference was already available on GPU. To that end, we run the Mixtral-8x7B-Instruct model on the OpenAssistant dataset (Köpf et al., 2023). We test LRU caching by running the model on recorded conversations and measuring the recall (a.k.a. "hit ratio" from a caching perspective) for different cache sizes k. Next, we test speculative loading in isolation by "guessing" which experts should be loaded (by applying the next layer's gating function to current layer activations), then measuring how often the actual next experts get loaded this way. A recall of 1.0 corresponds to a situation where both (2) active Mixtral experts were pre-fetched. We test speculative loading in three settings: 1, 2 and 10 layers ahead.

# 4.2 Mixed MoE Quantization

Next, we test how different quantization schemes affect MoE performance and size. We also use Mixtral-8x7B, but this time, we use the non-instruction-tuned variant since it fits better with the available benchmarks.
We measure WikiText2 perplexity (Merity et al., 2016), C4 perplexity (Raffel et al., 2020), as well as 5-shot MMLU accuracy (Hendrycks et al., 2021). Our objective for this section is to find the best trade-off between size and performance for offloading with the target setups. Note that out of 46.7B total parameters in the Mixtral-8x7B model, the experts constitute 45.1B (96.6%). The rest of the model parameters are allocated to embeddings, self-attention layers, MoE gates and minor layers such as LayerNorm.

| Attn quant | Experts quant | Model size, GB | Wiki2 | C4 | MMLU |
|---|---|---|---|---|---|
| FP16 | FP16 | 86.99 | 3.59 | 6.52 | 70.51% |
| FP16 | 4-bit | 25.82 | 3.67 | 6.58 | 70.3% |
| FP16 | 3-bit | 23.21 | 3.96 | 6.78 | 69.32% |
| FP16 | 2-bit | 19.33 | 4.52 | 7.31 | 66.66% |
| 4-bit | FP16 | 85.16 | 3.68 | 6.59 | — |
| 4-bit | 4-bit | 23.99 | 3.76 | 6.66 | 69.11% |
| 4-bit | 3-bit | 21.37 | 4.05 | 6.87 | 68.47% |
| 4-bit | 2-bit | 17.54 | 4.61 | 7.42 | 65.58% |
| 3-bit | FP16 | 85.08 | 3.99 | — | — |
| 3-bit | 4-bit | 23.92 | 4.06 | — | — |
| 3-bit | 3-bit | 21.31 | 4.34 | — | — |
| 3-bit | 2-bit | 17.46 | 4.90 | — | — |
| 2-bit | FP16 | 84.96 | 4.98 | — | — |
| 2-bit | 4-bit | 23.79 | 5.08 | — | — |
| 2-bit | 3-bit | 21.18 | 5.36 | — | — |
| 2-bit | 2-bit | 17.30 | 5.97 | — | — |

Table 1: Perplexity and model size evaluation of Mixtral-8x7B with different quantization for shared attention (Attn quant) and experts (Experts quant) layers. For comparison, a Mistral-7B 4-bit quantized model has Wiki2 perplexity 5.03, C4 perplexity 7.56 and MMLU score 61.3%. See Section 4.2 for details.
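The FP16 row of Table 1 can be sanity-checked with back-of-the-envelope arithmetic. The helper below assumes non-expert parameters stay in 16-bit and ignores the per-group quantization metadata, so it underestimates the quantized rows:

```python
GIB = 2 ** 30
EXPERT_PARAMS = 45.1e9                  # expert weights: 96.6% of the model
OTHER_PARAMS = 46.7e9 - EXPERT_PARAMS   # embeddings, attention, gates, norms

def size_gib(expert_bits, other_bits=16.0):
    """Approximate model size in GiB for a given bits-per-parameter budget."""
    total_bits = EXPERT_PARAMS * expert_bits + OTHER_PARAMS * other_bits
    return total_bits / 8 / GIB

print(f"FP16 everywhere: {size_gib(16):.2f} GiB")        # close to 86.99 GB
print(f"2.6 effective expert bits: {size_gib(2.6):.2f} GiB")
```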
2312.17238#13
2312.17238#15
2312.17238
[ "2302.13971" ]
2312.17238#15
Fast Inference of Mixture-of-Experts Language Models with Offloading
Green values correspond to the configurations we chose for full system evaluation.

| Algorithm | 2-bit, A100 | 2-bit, 3080 Mobile | 2-bit, 3060 | 2-bit, T4 (Colab) | 3-bit, A100 | 3-bit, 3080 Mobile | 3-bit, 3060 | 3-bit, T4 (Cloud) |
|---|---|---|---|---|---|---|---|---|
| Full algorithm | 3.061 | 2.655 | 2.278 | 2.092 | 2.845 | 2.475 | 2.038 | 1.791 |
| W/o expert pre-loading | 2.918 | 2.227 | 2.051 | 1.567 | 2.683 | 2.024 | 1.857 | 1.603 |
| W/o LRU cache & pre-loading | 2.265 | 1.758 | 1.547 | 1.168 | 2.055 | 1.595 | 1.346 | 1.365 |
| Naive offloading (accelerate) | 1.392 | 1.059 | 0.919 | 0.661 | 1.246 | 0.914 | 1.061 | 0.580 |

Table 2: Inference speed for Mixtral-8x7B on low-tier hardware, measured in tokens per second; the left half uses 2-bit experts, the right half 3-bit experts.

As discussed earlier, we use the HQQ (Badri & Shaji, 2023) data-free quantization algorithm and consider the following quantization schemes:
2312.17238#14
2312.17238#16
2312.17238
[ "2302.13971" ]
2312.17238#16
Fast Inference of Mixture-of-Experts Language Models with Offloading
1. FP16 (no quantization)
2. HQQ 4-bit with group size 64, scale group size 256
3. HQQ 3-bit with group size 64, scale group size 128
4. HQQ 2-bit with group size 16, scale group size 128

Note that the actual model size with n-bit quantization is larger than n bits per parameter. This is because the quantized data format also stores a quantization scale and zero point for each group of weights. Notably, the above 2-bit quantization scheme uses, on average, 2.6 bits per parameter due to the large number of quantization groups. We also keep embeddings, logits, MoE gates and normalization layers in 16-bit format. Table 1 summarizes our results: overall, it seems advantageous to quantize experts to 3 or 2 bits while keeping attention layers at a higher bitwidth (16 or 4 bits). Based on these evaluations, we chose two quantization schemes (highlighted in green in Table 1) that offer favourable performance-size trade-offs within the target hardware constraints.

# 4.3 Practical offloading performance

Finally, we evaluate the performance of the Mixtral-8x7B-Instruct model using the offloading techniques proposed throughout this report. Based on the perplexity evaluations from the previous section, we chose 4-bit HQQ quantization for the shared attention layers and 2- or 3-bit quantization for experts. We evaluate this system by generating tokens via sampling on OpenAssistant (Köpf et al., 2023) conversations and measuring the average number of tokens generated per second with batch size 1. For this evaluation, we always sample proportionally to the predicted probabilities, i.e. without temperature or nucleus sampling. We consider four hardware configurations: a free-tier Colab instance with a T4 GPU (16GB VRAM, PCIe Gen.3), a past-generation gaming laptop with an RTX 3080 Mobile (16GB, PCIe Gen.4), a mid-range gaming desktop with an RTX 3060 (12GB, PCIe Gen.3), and a high-end data-center server with an A100-80GB-SXM. Note that the A100 server could run the model without offloading.
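A minimal NumPy sketch of asymmetric group-wise quantization shows where the metadata overhead comes from. This is an illustration, not HQQ itself — HQQ additionally optimizes and meta-quantizes the scales and zero points in their own "scale groups", which is how it gets below the naive overhead computed here:

```python
import numpy as np

def quantize_groupwise(w, bits=2, group_size=16):
    """Asymmetric group-wise quantization: each group of `group_size` weights
    stores its own scale and zero point, which is why the effective storage
    cost per parameter exceeds the nominal bit width."""
    g = w.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)
    hi = g.max(axis=1, keepdims=True)
    qmax = 2 ** bits - 1
    scale = np.where(hi > lo, (hi - lo) / qmax, 1.0)
    q = np.clip(np.round((g - lo) / scale), 0, qmax).astype(np.uint8)
    return q, scale, lo                  # integers + per-group metadata

def dequantize_groupwise(q, scale, zero):
    return q * scale + zero

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
q, scale, zero = quantize_groupwise(w, bits=2, group_size=16)
w_hat = dequantize_groupwise(q, scale, zero).reshape(-1)

# With one fp16 scale and one fp16 zero point per group of 16 weights,
# storage is 2 + (16 + 16) / 16 bits per weight before meta-quantization.
effective_bits = 2 + (16 + 16) / 16
print(f"effective bits/param (fp16 metadata): {effective_bits}")
print(f"max abs reconstruction error: {np.abs(w - w_hat).max():.3f}")
```

Smaller groups reduce reconstruction error (each scale covers a narrower range) but inflate the metadata share, which is exactly the trade-off behind the group sizes listed above.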
We use offloading on A100 mostly to provide a reference for other setups.
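The tokens-per-second metric can be collected with a trivial harness like the sketch below; the lambda is a hypothetical stand-in for one decoding step (a real run would invoke the offloaded Mixtral model one token at a time):

```python
import time

def measure_generation_speed(generate_token, n_tokens=50):
    """Average tokens per second for step-by-step generation at batch size 1."""
    start = time.perf_counter()
    for _ in range(n_tokens):
        generate_token()
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Hypothetical stand-in: sleep 5 ms per token instead of a real model step.
tokens_per_second = measure_generation_speed(lambda: time.sleep(0.005))
print(f"{tokens_per_second:.1f} tokens/s")
```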
2312.17238#15
2312.17238#17
2312.17238
[ "2302.13971" ]
2312.17238#17
Fast Inference of Mixture-of-Experts Language Models with Offloading
Finally, when evaluating 3-bit models, we use a cloud T4 from Microsoft Azure because the free-tier Colab instances did not have enough RAM for this specific configuration. We use k = 2 for the RTX 3060 and k = 4 for all other GPUs. As shown in Table 2, all evaluated setups can generate 2-4 tokens per second with the full algorithm. Using pre-loading appears to be most beneficial on the RTX 3060, possibly due to its lower LRU cache size. Curiously, the RTX 3060 (desktop) performs nearly on par with the much higher-end 3080 Mobile. We attribute this to the fact that both GPUs are still bottlenecked by host-to-device bandwidth, limited by the PCIe architecture. Finally, all schemes significantly outperform naive offloading, which loads the entire MoE layer.

# 5 Conclusion and Future Work

In this work, we explore strategies for accelerating Mixture-of-Experts based language models on consumer hardware with limited GPU memory. We propose a MoE-centric approach to offloading
2312.17238#16
2312.17238#18
2312.17238
[ "2302.13971" ]
2312.17238#18
Fast Inference of Mixture-of-Experts Language Models with Offloading
and explore how mixed quantization affects perplexity and performance on language understanding tasks. We evaluate the proposed strategies and show that they produce a significant increase in generation speed compared to naive approaches on consumer-grade hardware, including free-tier Google Colab. Our method provides a practical solution for running inference with large MoE language models on resource-constrained hardware, enabling broader access to these powerful models for research and development. As future work, we plan to explore further offloading strategies based on speculative expert prediction.
2312.17238#17
2312.17238#19
2312.17238
[ "2302.13971" ]
2312.17238#19
Fast Inference of Mixture-of-Experts Language Models with Offloading
# Acknowledgements The authors would like to acknowledge mobicham@ for helpful discussions on Mixtral quantization. # References Aminabadi, R. Y., Rajbhandari, S., Awan, A. A., Li, C., Li, D., Zheng, E., Ruwase, O., Smith, S., Zhang, M., Rasley, J., and He, Y. Deepspeed-inference: Enabling efficient inference of transformer models at unprecedented scale. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC '
2312.17238#18
2312.17238#20
2312.17238
[ "2302.13971" ]
2312.17238#20
Fast Inference of Mixture-of-Experts Language Models with Offloading
22. IEEE Press, 2022. ISBN 9784665454445. Badri, H. and Shaji, A. Half-quadratic quantization of large machine learning models, November 2023. URL https://mobiusml.github.io/hqq_blog/. Biderman, S., Schoelkopf, H., Anthony, Q., Bradley, H., O'Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E., et al.
2312.17238#19
2312.17238#21
2312.17238
[ "2302.13971" ]
2312.17238#21
Fast Inference of Mixture-of-Experts Language Models with Offloading
Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. In Conference on Neural Information Processing Systems (NeurIPS), 2020. Chee, J., Cai, Y., Kuleshov, V., and Sa, C. D. Quip: 2-bit quantization of large language models with guarantees, 2023. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al.
2312.17238#20
2312.17238#22
2312.17238
[ "2302.13971" ]
2312.17238#22
Fast Inference of Mixture-of-Experts Language Models with Offloading
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Collobert, R., Bengio, S., and Bengio, Y. A parallel mixture of svms for very large scale problems. In Advances in Neural Information Processing Systems, pp. 633â 640, 2002. Dettmers, T. and Zettlemoyer, L. The case for 4-bit precision: k-bit inference scaling laws. arXiv preprint arXiv:2212.09720, 2022. Dettmers, T., Lewis, M., Belkada, Y., and Zettlemoyer, L. LLM.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, 2022. Dettmers, T., Svirschevski, R., Egiazarian, V., Kuznedelev, D., Frantar, E., Ashkboos, S., Borzunov, A., Hoefler, T., and Alistarh, D.
2312.17238#21
2312.17238#23
2312.17238
[ "2302.13971" ]
2312.17238#23
Fast Inference of Mixture-of-Experts Language Models with Offloading
Spqr: A sparse-quantized representation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078, 2023. Du, N., Huang, Y., Dai, A. M., Tong, S., Lepikhin, D., Xu, Y., Krikun, M., Zhou, Y., Yu, A. W., Firat, O., Zoph, B., Fedus, L., Bosma, M., Zhou, Z., Wang, T., Wang, Y. E., Webster, K., Pellat, M., Robinson, K., Meier-Hellstern, K., Duke, T., Dixon, L., Zhang, K., Le, Q. V., Wu, Y., Chen, Z., and Cui, C.
2312.17238#22
2312.17238#24
2312.17238
[ "2302.13971" ]
2312.17238#24
Fast Inference of Mixture-of-Experts Language Models with Offloading
Glam: Efficient scaling of language models with mixture-of-experts, 2022. Fedus, W., Zoph, B., and Shazeer, N. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021. Frantar, E. and Alistarh, D. SparseGPT: Massive language models can be accurately pruned in one-shot. arXiv preprint arXiv:2301.00774, 2023a.
2312.17238#23
2312.17238#25
2312.17238
[ "2302.13971" ]
2312.17238#25
Fast Inference of Mixture-of-Experts Language Models with Offloading
Frantar, E. and Alistarh, D. Qmoe: Practical sub-1-bit compression of trillion-parameter models, 2023b. Frantar, E., Ashkboos, S., Hoefler, T., and Alistarh, D. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022. Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., and Keutzer, K. A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630, 2021.
2312.17238#24
2312.17238#26
2312.17238
[ "2302.13971" ]
2312.17238#26
Fast Inference of Mixture-of-Experts Language Models with Offloading
Google. Google colaboratory, 2023. URL https://colab.research.google.com/. Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021. Hsu, Y.-C., Hua, T., Chang, S., Lou, Q., Shen, Y., and Jin, H. Language model compression with weighted low-rank factorization. arXiv preprint arXiv:2207.00112, 2022. Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, March 1991. ISSN 0899-7667. doi: 10.1162/neco.1991.3.1.79. URL https://doi.org/10.1162/neco.1991.3.1.79. Jordan, M. I. and Jacobs, R. A. Hierarchical mixtures of experts and the EM algorithm. Neural computation, 6(2):181–214, 1994. Kim, Y. J., Fahim, R., and Awadalla, H. H. Mixture of quantized experts (moqe): Complementary effect of low-bit quantization and robustness, 2023. Köpf, A., Kilcher, Y., von Rütte, D., Anagnostidis, S., Tam, Z.-R., Stevens, K., Barhoum, A., Duc, N. M., Stanley, O., Nagyfi, R., ES, S., Suri, S., Glushkov, D., Dantuluri, A., Maguire, A., Schuhmann, C., Nguyen, H., and Mattick, A.
2312.17238#25
2312.17238#27
2312.17238
[ "2302.13971" ]
2312.17238#27
Fast Inference of Mixture-of-Experts Language Models with Offloading
Openassistant conversations – democratizing large language model alignment, 2023. Lample, G., Sablayrolles, A., Ranzato, M. A., Denoyer, L., and Jegou, H. Large memory layers with product keys. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8546–
2312.17238#26
2312.17238#28
2312.17238
[ "2302.13971" ]
2312.17238#28
Fast Inference of Mixture-of-Experts Language Models with Offloading
8557. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9061-large-memory-layers-with-product-keys.pdf. Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020. Lewis, M., Bhosale, S., Dettmers, T., Goyal, N., and Zettlemoyer, L. Base layers: Simplifying training of large, sparse models. arXiv preprint arXiv:2103.16716, 2021. Liang, T., Glossner, J., Wang, L., and Shi, S. Pruning and quantization for deep neural network acceleration:
2312.17238#27
2312.17238#29
2312.17238
[ "2302.13971" ]
2312.17238#29
Fast Inference of Mixture-of-Experts Language Models with Offloading
A survey. CoRR, abs/2101.09671, 2021. URL https://arxiv.org/abs/2101.09671. Lin, J., Tang, J., Tang, H., Yang, S., Dang, X., and Han, S. Awq: Activation-aware weight quantization for llm compression and acceleration. arXiv preprint arXiv:2306.00978, 2023. Ma, X., Fang, G., and Wang, X. Llm-pruner: On the structural pruning of large language models, 2023. Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. Mistral AI team. Mixtral of experts: a high quality sparse mixture of experts, 2023. URL https://mistral.ai/news/mixtral-of-experts/.
2312.17238#28
2312.17238#30
2312.17238
[ "2302.13971" ]
2312.17238#30
Fast Inference of Mixture-of-Experts Language Models with Offloading
Nagel, M., Amjad, R. A., Van Baalen, M., Louizos, C., and Blankevoort, T. Up or down? Adaptive rounding for post-training quantization. In International Conference on Machine Learning (ICML), 2020. OpenAI. Gpt-4 technical report. arXiv, 2023. Pudipeddi, B., Mesmakhosroshahi, M., Xi, J., and Bharadwaj, S.
2312.17238#29
2312.17238#31
2312.17238
[ "2302.13971" ]
2312.17238#31
Fast Inference of Mixture-of-Experts Language Models with Offloading
Training large neural networks with constant memory using a new execution algorithm. CoRR, abs/2002.05645, 2020. URL https://arxiv.org/abs/2002.05645. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–
2312.17238#30
2312.17238#32
2312.17238
[ "2302.13971" ]
2312.17238#32
Fast Inference of Mixture-of-Experts Language Models with Offloading
67, 2020. Ren, J., Rajbhandari, S., Aminabadi, R. Y., Ruwase, O., Yang, S., Zhang, M., Li, D., and He, Y. Zero-offload: Democratizing billion-scale model training. CoRR, abs/2101.06840, 2021. URL https://arxiv.org/abs/2101.06840. Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ilić, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., et al.
2312.17238#31
2312.17238#33
2312.17238
[ "2302.13971" ]
2312.17238#33
Fast Inference of Mixture-of-Experts Language Models with Offloading
Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. Shahbaba, B. and Neal, R. Nonlinear models using Dirichlet process mixtures. Journal of Machine Learning Research, 10(Aug):1829–1850, 2009. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. Sheng, Y., Zheng, L., Yuan, B., Li, Z., Ryabinin, M., Chen, B., Liang, P., Ré, C., Stoica, I., and Zhang, C. Flexgen:
2312.17238#32
2312.17238#34
2312.17238
[ "2302.13971" ]
2312.17238#34
Fast Inference of Mixture-of-Experts Language Models with Offloading
High-throughput generative inference of large language models with a single gpu. In International Conference on Machine Learning, pp. 31094â 31116. PMLR, 2023. Steam. Steam hardware & software survey: October 2023, accessed on 2023.11.02, 2023. URL https://store.steampowered.com/hwsurvey/videocard/. Team, G., Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Petrov, S., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., Lillicrap, T., Lazaridou, A., Firat, O., Molloy, J., Isard, M., Barham, P. R., Hennigan, T., Lee, B., Viola, F., Reynolds, M., Xu, Y., Doherty, R., Collins, E., Meyer, C., Rutherford, E., Moreira, E., Ayoub, K., Goel, M., Tucker, G., Piqueras, E., Krikun, M., Barr, I., Savinov, N., Danihelka, I., Roelofs, B., White, A., Andreassen, A., von Glehn, T., Yagati, L., Kazemi, M., Gonzalez, L., Khalman, M., Sygnowski, J., Frechette, A., Smith, C., Culp, L., Proleev, L., Luan, Y., Chen, X., Lottes, J., Schucher, N., Lebron, F., Rrustemi, A., Clay, N., Crone, P., Kocisky, T., Zhao, J., Perz, B., Yu, D., Howard, H., Bloniarz, A., Rae, J.
2312.17238#33
2312.17238#35
2312.17238
[ "2302.13971" ]
2312.17238#35
Fast Inference of Mixture-of-Experts Language Models with Offloading
W., Lu, H., Sifre, L., Maggioni, M., Alcober, F., Garrette, D., Barnes, M., Thakoor, S., Austin, J., Barth-Maron, G., Wong, W., Joshi, R., Chaabouni, R., Fatiha, D., Ahuja, A., Liu, R., Li, Y., Cogan, S., Chen, J., Jia, C., Gu, C., Zhang, Q., Grimstad, J., Hartman, A. J., Chadwick, M., Tomar, G. S., Garcia, X., Senter, E., Taropa, E., Pillai, T. S., Devlin, J., Laskin, M., de Las Casas, D., Valter, D., Tao, C., Blanco, L., Badia, A. P., Reitter, D., Chen, M., Brennan, J., Rivera, C., Brin, S., Iqbal, S., Surita, G., Labanowski, J., Rao, A., Winkler, S., Parisotto, E., Gu, Y., Olszewska, K., Zhang, Y., Addanki, R., Miech, A., Louis, A., Shafey, L. E., Teplyashin, D., Brown, G., Catt, E., Attaluri, N., Balaguer, J., Xiang, J., Wang, P., Ashwood, Z., Briukhov, A., Webson, A., Ganapathy, S., Sanghavi, S., Kannan, A., Chang, M.-W., Stjerngren, A., Djolonga, J., Sun, Y., Bapna, A., Aitchison, M., Pejman, P., Michalewski, H., Yu, T., Wang, C., Love, J., Ahn, J., Bloxwich, D., Han, K., Humphreys, P., Sellam, T., Bradbury, J., Godbole, V., Samangooei, S., Damoc, B., Kaskasoli, A., Arnold, S. M.
2312.17238#34
2312.17238#36
2312.17238
[ "2302.13971" ]
2312.17238#36
Fast Inference of Mixture-of-Experts Language Models with Offloading
R., Vasudevan, V., Agrawal, S., Riesa, J., Lepikhin, D., Tanburn, R., Srinivasan, S., Lim, H., Hodkinson, S., Shyam, P., Ferret, J., Hand, S., Garg, A., Paine, T. L., Li, J., Li, Y., Giang, M., Neitz, A., Abbas, Z., York, S., Reid, M., Cole, E., Chowdhery, A., Das, D., Rogozi´nska, D., Nikolaev, V., Sprechmann, P., Nado, Z., Zilka, L., Prost, F., He, L., Monteiro, M., Mishra, G., Welty, C., Newlan, J., Jia, D., Allamanis, M., Hu, C. H., de Liedekerke, R., Gilmer, J., Saroufim, C., Rijhwani, S., Hou, S., Shrivastava, D., Baddepudi, A., Goldin, A., Ozturel, A., Cassirer, A., Xu, Y., Sohn,
2312.17238#35
2312.17238#37
2312.17238
[ "2302.13971" ]
2312.17238#37
Fast Inference of Mixture-of-Experts Language Models with Offloading
10 D., Sachan, D., Amplayo, R. K., Swanson, C., Petrova, D., Narayan, S., Guez, A., Brahma, S., Landon, J., Patel, M., Zhao, R., Villela, K., Wang, L., Jia, W., Rahtz, M., Giménez, M., Yeung, L., Lin, H., Keeling, J., Georgiev, P., Mincu, D., Wu, B., Haykal, S., Saputro, R., Vodrahalli, K., Qin, J., Cankara, Z., Sharma, A., Fernando, N., Hawkins, W., Neyshabur, B., Kim, S., Hutter, A., Agrawal, P., Castro-Ros, A., van den Driessche, G., Wang, T., Yang, F., yiin Chang, S., Komarek, P., McIlroy, R., LuË ci´c, M., Zhang, G., Farhan, W., Sharman, M., Natsev, P., Michel, P., Cheng, Y., Bansal, Y., Qiao, S., Cao, K., Shakeri, S., Butterfield, C., Chung, J., Rubenstein, P.
2312.17238#36
2312.17238#38
2312.17238
[ "2302.13971" ]
2312.17238#38
Fast Inference of Mixture-of-Experts Language Models with Offloading
K., Agrawal, S., Mensch, A., Soparkar, K., Lenc, K., Chung, T., Pope, A., Maggiore, L., Kay, J., Jhakra, P., Wang, S., Maynez, J., Phuong, M., Tobin, T., Tacchetti, A., Trebacz, M., Robinson, K., Katariya, Y., Riedel, S., Bailey, P., Xiao, K., Ghelani, N., Aroyo, L., Slone, A., Houlsby, N., Xiong, X., Yang, Z., Gribovskaya, E., Adler, J., Wirth, M., Lee, L., Li, M., Kagohara, T., Pavagadhi, J., Bridgers, S., Bortsova, A., Ghemawat, S., Ahmed, Z., Liu, T., Powell, R., Bolina, V., Iinuma, M., Zablotskaia, P., Besley, J., Chung, D.-W., Dozat, T., Comanescu, R., Si, X., Greer, J., Su, G., Polacek, M., Kaufman, R. L., Tokumine, S., Hu, H., Buchatskaya, E., Miao, Y., Elhawaty, M., Siddhant, A., Tomasev, N., Xing, J., Greer, C., Miller, H., Ashraf, S., Roy, A., Zhang, Z., Ma, A., Filos, A., Besta, M., Blevins, R., Klimenko, T., Yeh, C.-K., Changpinyo, S., Mu, J., Chang, O., Pajarskas, M., Muir, C., Cohen, V., Lan, C. L., Haridasan, K., Marathe, A., Hansen, S., Douglas, S., Samuel, R., Wang, M., Austin, S., Lan, C., Jiang, J., Chiu, J., Lorenzo, J. A., Sjösund, L.
2312.17238#37
2312.17238#39
2312.17238
[ "2302.13971" ]
2312.17238#39
Fast Inference of Mixture-of-Experts Language Models with Offloading
L., Cevey, S., Gleicher, Z., Avrahami, T., Boral, A., Srinivasan, H., Selo, V., May, R., Aisopos, K., Hussenot, L., Soares, L. B., Baumli, K., Chang, M. B., Recasens, A., Caine, B., Pritzel, A., Pavetic, F., Pardo, F., Gergely, A., Frye, J., Ramasesh, V., Horgan, D., Badola, K., Kassner, N., Roy, S., Dyer, E., Campos, V., Tomala, A., Tang, Y., Badawy, D. E., White, E., Mustafa, B., Lang, O., Jindal, A., Vikram, S., Gong, Z., Caelles, S., Hemsley, R., Thornton, G., Feng, F., Stokowiec, W., Zheng, C., Thacker, P., à aË glar à nlü, Zhang, Z., Saleh, M., Svensson, J., Bileschi, M., Patil, P., Anand, A., Ring, R., Tsihlas, K., Vezer, A., Selvi, M., Shevlane, T., Rodriguez, M., Kwiatkowski, T., Daruki, S., Rong, K., Dafoe, A., FitzGerald, N., Gu-Lemberg, K., Khan, M., Hendricks, L. A., Pellat, M., Feinberg, V., Cobon-Kerr, J., Sainath, T., Rauh, M., Hashemi, S.
2312.17238#38
2312.17238#40
2312.17238
[ "2302.13971" ]
2312.17238#40
Fast Inference of Mixture-of-Experts Language Models with Offloading
H., Ives, R., Hasson, Y., Li, Y., Noland, E., Cao, Y., Byrd, N., Hou, L., Wang, Q., Sottiaux, T., Paganini, M., Lespiau, J.-B., Moufarek, A., Hassan, S., Shivakumar, K., van Amersfoort, J., Mandhane, A., Joshi, P., Goyal, A., Tung, M., Brock, A., Sheahan, H., Misra, V., Li, C., Raki´cevi´c, N., Dehghani, M., Liu, F., Mittal, S., Oh, J., Noury, S., Sezener, E., Huot, F., Lamm, M., Cao, N. D., Chen, C., Elsayed, G., Chi, E., Mahdieh, M., Tenney, I., Hua, N., Petrychenko, I., Kane, P., Scandinaro, D., Jain, R., Uesato, J., Datta, R., Sadovsky, A., Bunyan, O., Rabiej, D., Wu, S., Zhang, J., Vasudevan, G., Leurent, E., Alnahlawi, M., Georgescu, I., Wei, N., Zheng, I., Chan, B., Rabinovitch, P. G., Stanczyk, P., Zhang, Y., Steiner, D., Naskar, S., Azzam, M., Johnson, M., Paszke, A., Chiu, C.-C., Elias, J.
2312.17238#39
2312.17238#41
2312.17238
[ "2302.13971" ]
2312.17238#41
Fast Inference of Mixture-of-Experts Language Models with Offloading
S., Mohiuddin, A., Muhammad, F., Miao, J., Lee, A., Vieillard, N., Potluri, S., Park, J., Davoodi, E., Zhang, J., Stanway, J., Garmon, D., Karmarkar, A., Dong, Z., Lee, J., Kumar, A., Zhou, L., Evens, J., Isaac, W., Chen, Z., Jia, J., Levskaya, A., Zhu, Z., Gorgolewski, C., Grabowski, P., Mao, Y., Magni, A., Yao, K., Snaider, J., Casagrande, N., Suganthan, P., Palmer, E., Irving, G., Loper, E., Faruqui, M., Arkatkar, I., Chen, N., Shafran, I., Fink, M., Castaño, A., Giannoumis, I., Kim, W., Rybi´nski, M., Sreevatsa, A., Prendki, J., Soergel, D., Goedeckemeyer, A., Gierke, W., Jafari, M., Gaba, M., Wiesner, J., Wright, D. G., Wei, Y., Vashisht, H., Kulizhskaya, Y., Hoover, J., Le, M., Li, L., Iwuanyanwu, C., Liu, L., Ramirez, K., Khorlin, A., Cui, A., LIN, T., Georgiev, M., Wu, M., Aguilar, R., Pallo, K., Chakladar, A., Repina, A., Wu, X., van der Weide, T., Ponnapalli, P., Kaplan, C., Simsa, J., Li, S., Dousse, O., Yang, F., Piper, J., Ie, N., Lui, M., Pasumarthi, R., Lintz, N., Vijayakumar, A., Thiet, L. N., Andor, D., Valenzuela, P., Paduraru, C., Peng, D., Lee, K., Zhang, S., Greene, S., Nguyen, D.
2312.17238#40
2312.17238#42
2312.17238
[ "2302.13971" ]
2312.17238#42
Fast Inference of Mixture-of-Experts Language Models with Offloading
D., Kurylowicz, P., Velury, S., Krause, S., Hardin, C., Dixon, L., Janzer, L., Choo, K., Feng, Z., Zhang, B., Singhal, A., Latkar, T., Zhang, M., Le, Q., Abellan, E. A., Du, D., McKinnon, D., Antropova, N., Bolukbasi, T., Keller, O., Reid, D., Finchelstein, D., Raad, M. A., Crocker, R., Hawkins, P., Dadashi, R., Gaffney, C., Lall, S., Franko, K., Filonov, E., Bulanova, A., Leblond, R., Yadav, V., Chung, S., Askham, H., Cobo, L. C., Xu, K., Fischer, F., Xu, J., Sorokin, C., Alberti, C., Lin, C.-C., Evans, C., Zhou, H., Dimitriev, A., Forbes, H., Banarse, D., Tung, Z., Liu, J., Omernick, M., Bishop, C., Kumar, C., Sterneck, R., Foley, R., Jain, R., Mishra, S., Xia, J., Bos, T., Cideron, G., Amid, E., Piccinno, F., Wang, X., Banzal, P., Gurita, P., Noga, H., Shah, P., Mankowitz, D.
2312.17238#41
2312.17238#43
2312.17238
[ "2302.13971" ]
2312.17238#43
Fast Inference of Mixture-of-Experts Language Models with Offloading
J., Polozov, A., Kushman, N., Krakovna, V., Brown, S., Bateni, M., Duan, D., Firoiu, V., Thotakuri, M., Natan, T., Mohananey, A., Geist, M., Mudgal, S., Girgin, S., Li, H., Ye, J., Roval, O., Tojo, R., Kwong, M., Lee-Thorp, J., Yew, C., Yuan, Q., Bagri, S., Sinopalnikov, D., Ramos, S., Mellor, J., Sharma, A., Severyn, A., Lai, J., Wu, K., Cheng, H.-T., Miller, D., Sonnerat, N., Vnukov, D., Greig, R., Beattie, J., Caveness, E., Bai, L., Eisenschlos, J., Korchemniy, A., Tsai, T., Jasarevic, 11 M., Kong, W., Dao, P., Zheng, Z., Liu, F., Yang, F., Zhu, R., Geller, M., Teh, T.
2312.17238#42
2312.17238#44
2312.17238
[ "2302.13971" ]
2312.17238#44
Fast Inference of Mixture-of-Experts Language Models with Offloading
H., Sanmiya, J., Gladchenko, E., Trdin, N., Sozanschi, A., Toyama, D., Rosen, E., Tavakkol, S., Xue, L., Elkind, C., Woodman, O., Carpenter, J., Papamakarios, G., Kemp, R., Kafle, S., Grunina, T., Sinha, R., Talbert, A., Goyal, A., Wu, D., Owusu-Afriyie, D., Du, C., Thornton, C., Pont-Tuset, J., Narayana, P., Li, J., Fatehi, S., Wieting, J., Ajmeri, O., Uria, B., Zhu, T., Ko, Y., Knight, L., Héliou, A., Niu, N., Gu, S., Pang, C., Tran, D., Li, Y., Levine, N., Stolovich, A., Kalb, N., Santamaria-Fernandez, R., Goenka, S., Yustalim, W., Strudel, R., Elqursh, A., Lakshminarayanan, B., Deck, C., Upadhyay, S., Lee, H., Dusenberry, M., Li, Z., Wang, X., Levin, K., Hoffmann, R., Holtmann-Rice, D., Bachem, O., Yue, S., Arora, S., Malmi, E., Mirylenka, D., Tan, Q., Koh, C., Yeganeh, S. H., Põder, S., Zheng, S., Pongetti, F., Tariq, M., Sun, Y., Ionita, L., Seyedhosseini, M., Tafti, P., Kotikalapudi, R., Liu, Z., Gulati, A., Liu, J., Ye, X., Chrzaszcz, B., Wang, L., Sethi, N., Li, T., Brown, B., Singh, S., Fan, W., Parisi, A., Stanton, J., Kuang, C., Koverkathu, V., Choquette-Choo, C.
2312.17238#43
2312.17238#45
2312.17238
[ "2302.13971" ]
2312.17238#45
Fast Inference of Mixture-of-Experts Language Models with Offloading
A., Li, Y., Lu, T., Ittycheriah, A., Shroff, P., Sun, P., Varadarajan, M., Bahargam, S., Willoughby, R., Gaddy, D., Dasgupta, I., Desjardins, G., Cornero, M., Robenek, B., Mittal, B., Albrecht, B., Shenoy, A., Moiseev, F., Jacobsson, H., Ghaffarkhah, A., Rivière, M., Walton, A., Crepy, C., Parrish, A., Liu, Y., Zhou, Z., Farabet, C., Radebaugh, C., Srinivasan, P., van der Salm, C., Fidjeland, A., Scellato, S., Latorre-Chimoto, E., Klimczak-Pluci´nska, H., Bridson, D., de Cesare, D., Hudson, T., Mendolicchio, P., Walker, L., Morris, A., Penchev, I., Mauger, M., Guseynov, A., Reid, A., Odoom, S., Loher, L., Cotruta, V., Yenugula, M., Grewe, D., Petrushkina, A., Duerig, T., Sanchez, A., Yadlowsky, S., Shen, A., Globerson, A., Kurzrok, A., Webb, L., Dua, S., Li, D., Lahoti, P., Bhupatiraju, S., Hurt, D., Qureshi, H., Agarwal, A., Shani, T., Eyal, M., Khare, A., Belle, S. R., Wang, L., Tekur, C., Kale, M. S., Wei, J., Sang, R., Saeta, B., Liechty, T., Sun, Y., Zhao, Y., Lee, S., Nayak, P., Fritz, D., Vuyyuru, M.
2312.17238#44
2312.17238#46
2312.17238
[ "2302.13971" ]
2312.17238#46
Fast Inference of Mixture-of-Experts Language Models with Offloading
R., Aslanides, J., Vyas, N., Wicke, M., Ma, X., Bilal, T., Eltyshev, E., Balle, D., Martin, N., Cate, H., Manyika, J., Amiri, K., Kim, Y., Xiong, X., Kang, K., Luisier, F., Tripuraneni, N., Madras, D., Guo, M., Waters, A., Wang, O., Ainslie, J., Baldridge, J., Zhang, H., Pruthi, G., Bauer, J., Yang, F., Mansour, R., Gelman, J., Xu, Y., Polovets, G., Liu, J., Cai, H., Chen, W., Sheng, X., Xue, E., Ozair, S., Yu, A., Angermueller, C., Li, X., Wang, W., Wiesinger, J., Koukoumidis, E., Tian, Y., Iyer, A., Gurumurthy, M., Goldenson, M., Shah, P., Blake, M., Yu, H., Urbanowicz, A., Palomaki, J., Fernando, C., Brooks, K., Durden, K., Mehta, H., Momchev, N., Rahimtoroghi, E., Georgaki, M., Raul, A., Ruder, S., Redshaw, M., Lee, J., Jalan, K., Li, D., Perng, G., Hechtman, B., Schuh, P., Nasr, M., Chen, M., Milan, K., Mikulik, V., Strohman, T., Franco, J., Green, T., Hassabis, D., Kavukcuoglu, K., Dean, J., and Vinyals, O. Gemini: A family of highly capable multimodal models, 2023. TII UAE. The Falcon family of large language models. https://huggingface.co/tiiuae/ falcon-40b, May 2023.
2312.17238#45
2312.17238#47
2312.17238
[ "2302.13971" ]
2312.17238#47
Fast Inference of Mixture-of-Experts Language Models with Offloading
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
2312.17238#46
2312.17238#48
2312.17238
[ "2302.13971" ]
2312.17238#48
Fast Inference of Mixture-of-Experts Language Models with Offloading
12
2312.17238#47
2312.17238
[ "2302.13971" ]
2312.11111#0
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
# The Good, The Bad, and Why: Unveiling Emotions in Generative AI* Cheng Li1,2, Jindong Wang1†, Yixuan Zhang3, Kaijie Zhu1, Xinyi Wang4, Wenxin Hou1, Jianxun Lian1, Fang Luo4, Qiang Yang5, Xing Xie1 1Microsoft Research 2Institute of Software, CAS 3William & Mary 4Beijing Normal University 5Hong Kong University of Science and Technology # Abstract
2312.11111#1
2312.11111
[ "2210.09261" ]
2312.11111#1
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Emotion significantly impacts our daily behaviors and interactions. While recent generative AI models, such as large language models, have shown impressive performance in various tasks, it remains unclear whether they truly comprehend emotions. This paper aims to address this gap by incorporating psychological theories to gain a holistic understanding of emotions in generative AI models. Specifically, we propose three approaches: 1) EmotionPrompt 24 to enhance AI model performance, 2) EmotionAttack to impair AI model performance, and 3) EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it. Additionally, EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain. Our work heralds a novel avenue for exploring psychology to enhance our understanding of generative AI models.
2312.11111#0
2312.11111#2
2312.11111
[ "2210.09261" ]
2312.11111#2
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
# Introduction Emotion is a multifaceted psychological and physiological phenomenon that encompasses subjective feelings, physiological responses, and behavioral expressions 23. Emotions manifest through a confluence of reflexes, perception, cognition, and behavior, all of which are subject to modulation by a range of internal and external determinants 41;40. For instance, in decision-making, emotions emerge as powerful, ubiquitous, and consistent influencers that can swing from beneficial to detrimental 22. Studies further underscore the importance of emotions in steering attention 34, academia 38, and competitive sports 21. The recently emerging large language and multi-modal models have shown remarkable performance in a wide spectrum of tasks, such as semantic understanding, logical reasoning, *This paper is an extension of our previous EmotionPrompt 24. We extended it to the visual domain and proposed EmotionAttack and EmotionDecode, two new approaches for attacking AI models and understanding how emotion works, respectively.
2312.11111#1
2312.11111#3
2312.11111
[ "2210.09261" ]
2312.11111#3
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
† Corresponding author: Jindong Wang. Email: jindong.wang@microsoft.com. Address: No.5 Danling Street, Haidian District, Beijing, China, 100080. [Figure 1 graphic: panel (a) shows how textual and visual EmotionPrompt and EmotionAttack impact the performance of AI models; panel (b) shows how EmotionDecode finds the brain reward pathway and "dopamine" of generative AI models.]
2312.11111#2
2312.11111#4
2312.11111
[ "2210.09261" ]
2312.11111#4
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Figure 1: An overview of our research on unveiling emotions in generative AI models. (a) We proposed EmotionPrompt and EmotionAttack to increase and impair AI model performance, respectively. (b) We designed EmotionDecode to explain how emotional prompts work in AI models. and open-ended generation 7;47. As advanced AI models become more predominant in everyday life, ranging from communication and education to economics, it is urgent to understand if they can perceive emotions well to enable better human-AI collaboration. However, the extent to which these models can comprehend emotion, a distinct human advantage, is still largely unknown. And yet, examining the emotion of AI models is essential to ensure their effective and ethical integration into society. Neglecting this aspect risks creating AI systems that lack empathy and understanding in human interactions, leading to potential miscommunications and ethical challenges. Understanding models' emotional capabilities is crucial for developing more advanced, empathetic AI systems, and fostering trust and acceptance in their real-world applications. Without this focus, we risk missing out on the full potential of AI to enhance and complement human experiences. In this paper, we took the first step towards unveiling the emotions in AI models by leveraging psychological theories. Specifically, we devised EmotionPrompt and EmotionAttack, which are textual 24 and visual emotional stimuli acting as additional prompts to the models, as shown in Fig. 1(a). EmotionPrompt was grounded in psychological frameworks, including self-monitoring 18, social cognitive theory 14;29, and Maslow's hierarchy of needs 31. These theories have been proven to enhance human task performance. Conversely, EmotionAttack draws inspiration from some empirical studies to obtain insights into emotionally related fac-
2312.11111#3
2312.11111#5
2312.11111
[ "2210.09261" ]
2312.11111#5
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
tors that demonstrate how emotions can impede human problem-solving, such as negative life events 13 and emotional arousal 39;12. Moreover, we introduced EmotionDecode to illuminate the effectiveness of emotional stimuli in AI models. As depicted in Fig. 1(b), EmotionDecode unravels the knowledge representation in AI models, interpreting the impact of emotional stimuli through the lenses of neuroscience and psychology. At the methodology level, we designed 21 textual EmotionPrompt stimuli, which can be directly appended to the original prompts. Then, for visual EmotionPrompt, we collected 5 types of images representing different levels of needs, from the most basic to the highest-order needs. For each type, we collected 5 different images which serve as visual prompts appended to the original text prompts. Similarly, we designed 36 textual EmotionAttack stimuli containing texts acting as attackers to AI models, where we designed 4 types of attacks: sentence-level zero-shot, sentence-level few-shot, word-level zero-shot, and word-level few-shot attacks. For visual EmotionAttack, we created 6 types of heightened-emotional-arousal images: "happiness", "sadness", "fear", "disgust", "anger", and "surprise".
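The composition described above — an emotional stimulus appended verbatim to the original task prompt — can be sketched as follows. The stimulus texts and theory labels below are illustrative stand-ins inspired by the cited psychological theories; the paper's full set of 21 prompts is not reproduced here.

```python
# Hedged sketch of EmotionPrompt-style prompt composition: an emotional
# stimulus sentence is appended to the original task prompt. The stimulus
# texts below are illustrative assumptions, not the paper's exact prompt set.

EMOTION_STIMULI = {
    "self_monitoring": "This is very important to my career.",
    "social_cognitive_theory": "Believe in your abilities and strive for excellence.",
    "maslow_hierarchy": "You're safe. Stay focused and keep moving forward.",
}

def build_emotion_prompt(task_prompt: str, theory: str) -> str:
    """Append the chosen emotional stimulus to the original prompt."""
    return f"{task_prompt} {EMOTION_STIMULI[theory]}"

print(build_emotion_prompt(
    "Determine whether a movie review is positive or negative.",
    "self_monitoring",
))
```

An EmotionAttack variant would be assembled the same way, except that the appended text is a distracting stimulus (e.g. a negative life event) rather than an encouraging one.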
2312.11111#4
2312.11111#6
2312.11111
[ "2210.09261" ]
2312.11111#6
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Each type contains 5 different images that are appended to the original textual prompts in multi-modal models. Note that all visual prompts have their mirror in the textual prompts, but not vice versa. This is due to the fact that some high-level texts cannot be visualized. We conducted extensive experiments using both open-sourced and proprietary AI models on three types of representative evaluation tasks: semantic understanding, logical reasoning, and open-ended generation. Specifically, we adopted 50 tasks from two popular datasets, Instruction Induction 17 and BIG-Bench-Hard 44, to evaluate semantic understanding and logical reasoning abilities, leading to 940,200 evaluations. We further conducted a human-subjects study with 106 participants to evaluate 30 open-ended questions, as these tasks lack standard automated evaluation methods. Our evaluation results show that EmotionPrompt can successfully enhance the performance of AI models on both semantic understanding and logical reasoning tasks, while EmotionAttack can impede the performance. As for generation, most participants reported satisfying results in performance, truthfulness, and responsibility with EmotionPrompt compared to the vanilla prompts. By decoding the mean embedding of emotional prompts, we successfully triggered the "dopamine" inside AI models, which is analogous to the dopamine in the human brain that stimulates performance. Then, we visualized the attention map of different emotional stimuli to observe the effects on the model's attention weights.
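The "decoding the mean embedding" step can be illustrated with a toy computation: average the embedding vectors of the emotional prompts, then look up the nearest neighbour in the model's embedding table. The two-dimensional vectors and three-token vocabulary below are made-up assumptions for illustration, not actual model weights.

```python
# Toy sketch of the EmotionDecode idea: average the embeddings of several
# emotional stimuli, then decode the mean vector to its nearest entry in a
# (here, tiny and made-up) vocabulary embedding table via cosine similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode(mean_emb, vocab_embeddings):
    """Return the vocabulary entry whose embedding is closest to mean_emb."""
    return max(vocab_embeddings, key=lambda tok: cosine(vocab_embeddings[tok], mean_emb))

# Assumed toy embeddings for three emotional stimuli and a three-token vocabulary.
stimulus_embs = [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]]
vocab = {"reward": [0.9, 0.1], "neutral": [0.5, 0.5], "penalty": [0.1, 0.9]}

meta_prompt = decode(mean_vector(stimulus_embs), vocab)
print(meta_prompt)  # → "reward" for these toy vectors
```

In the paper's setting the decoded "meta" prompt plays the role of the reward signal, analogous to dopamine; here that is only mimicked by the toy "reward" token.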
2312.11111#5
2312.11111#7
2312.11111
[ "2210.09261" ]
2312.11111#7
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
To conclude, this paper makes the following contributions: 1. Theory-driven Method in Understanding the Emotional Aspect of LLMs: We present EmotionPrompt and EmotionAttack, grounded in psychological theories, to comprehensively assess the emotions of AI models. Our study demonstrates that AI models can understand and significantly benefit from integrating emotional stimuli (i.e., various internal and external factors that can evoke emotional responses). 2. Comprehensive Experiments with Automated Tests and Human-subject Studies: Our research spans a broad spectrum of experiments, including a variety of tasks, evaluated using standard automated methods and enriched with human studies. This dual approach underscores the notable improvements in task performance, truthfulness, and informativeness brought by our approach. 3. In-depth Analytical Insights: We conducted a detailed analysis of the underlying principles of our approach via our proposed method EmotionDecode. This exploration provides valuable insights, contributing to both the fields of artificial intelligence and social sciences, and highlights the broader implications of our findings.
2312.11111#6
2312.11111#8
2312.11111
[ "2210.09261" ]
2312.11111#8
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
[Figure 2 graphic, panel (a): performance change by EmotionPrompt (>0) and EmotionAttack (<0) with human study, across semantic understanding (text and image), logical reasoning (text and image), and generation (human study, GPT-4). Panel (b): EmotionDecode finds the "dopamine" inside AI models via representation decoding.]
2312.11111#7
2312.11111#9
2312.11111
[ "2210.09261" ]
2312.11111#9
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
[Figure 2(b) heatmaps: EmotionDecode (EmotionPrompt), EmotionDecode (EmotionAttack), and EmotionDecode (Neutral stimuli), evaluated on Llama-2 and on GPT-4 (transferability).]
2312.11111#8
2312.11111#10
2312.11111
[ "2210.09261" ]
2312.11111#10
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Figure 2: (a) The main results of textual and visual EmotionPrompt and EmotionAttack on generative AI models. (b) Results of EmotionDecode. The color represents the performance of stimulus on diverse tasks across Llama-2 and GPT-4. Red means better performance, while blue means weaker performance. # 2 Results # 2.1 The benign and malignant effects of emotional stimuli on AI models Our main results are provided in Fig. 2, where the evaluation is conducted on Instruction Induction 17 and BIG-Bench-Hard 44, which represent a popular and diverse set of semantic understanding and reasoning tasks. In total, we conducted 940,200 evaluations. Instruction Induction is designed to explore the ability of models to infer an underlying task from a few demonstrations, while BIG-Bench-Hard focuses on more challenging tasks. The detailed task descriptions are provided in Appendix A. Our human study evaluated 30 open-ended generation tasks and collected feedback on performance, truthfulness, and responsibility, with more details in Appendix G. We adopted several popular AI models, ranging from Llama2 44, ChatGPT 35, and GPT-4 37, to multi-modality models including LLaVa-13b 28, BLIP2 25, and CogVLM 46.1 We reported accuracy and the normalized preferred metric2 as the evaluation metrics for Instruction Induction and BIG-Bench-Hard, respectively. Below are our key findings: 1.
2312.11111#9
2312.11111#11
2312.11111
[ "2210.09261" ]
2312.11111#11
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Generative AI models understand and can be influenced by emotional stimuli. EmotionPrompt and EmotionAttack demonstrate consistent effectiveness in semantic understanding and reasoning tasks. As shown in Fig. 2(a), the textual and visual EmotionPrompt improve the semantic understanding performance by 13.88% and 16.79%, respectively, and improve the reasoning performance by 11.76% and 15.13%, respectively. In contrast, the textual and visual EmotionAttack impair the semantic understanding performance by 10.13% and 53.14%, respectively, and decrease the reasoning performance by 12.30% and 37.53%, respectively. 2. As for generation tasks, EmotionPrompt demonstrates consistent improvement in performance, truthfulness, and responsibility over most generative questions. As shown in Fig. 1(a), EmotionPrompt improves these metrics by 15%, 9%, and 9%, respectively. This verifies that emotional stimuli can also work in generative tasks. The detailed results can be found in Appendices B and C. 3. EmotionPrompt and EmotionAttack consistently demonstrate commendable efficacy across tasks of varying difficulty as well as on diverse LLMs. BIG-Bench-Hard and Instruction Induction focus on tasks of different difficulties separately. Remarkably, EmotionPrompt and EmotionAttack excel in evaluations across both benchmarks. Furthermore, the same theories can work in both textual and visual prompts, as shown in Appendix D. Our further experiments show that the improvements are larger when applied to in-context (few-shot) learning and prompt engineering techniques such as automatic prompt engineering 50. 4. Multi-modal AI models are more sensitive to emotional stimuli than large language models. Our results show that image prompts are more effective than textual prompts (15.96% vs. 12.82% on EmotionPrompt and 45.34% vs. 11.22% on EmotionAttack). 1For ChatGPT, we utilize gpt-3.5-turbo (0613) and set the temperature parameter to 0.7.
For GPT-4 and Llama 2, we set the temperature to 0.7. The remaining LLMs are evaluated using their default settings. We did not use GPT-4Vision for image prompts due to the API limit by OpenAI.
2312.11111#10
2312.11111#12
2312.11111
[ "2210.09261" ]
2312.11111#12
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
2Under this metric, a score of 100 corresponds to human experts, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task. Meanwhile, image prompts are more effective in impairing performance than textual prompts, indicating there is more room for improvement in multi-modal AI models. # 2.2 EmotionDecode uncovers the effectiveness of emotional stimuli on AI models It is generally believed that large language and multi-modal models are trained on massive data that contains knowledge from textbooks and human conversations.
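The normalized preferred metric described in footnote 2 is consistent with a standard linear rescaling in which random guessing maps to 0 and human-expert performance maps to 100, so a model that does worse than random lands below 0. A minimal sketch follows; the per-task baselines used in the example are assumptions, not values from the paper.

```python
# Sketch of a linear score normalization matching footnote 2's description:
# random-guessing accuracy -> 0, human-expert accuracy -> 100, and anything
# below random guessing maps to a negative score.

def normalized_score(raw, random_baseline, human_expert):
    """Linearly rescale raw accuracy so random -> 0 and human expert -> 100."""
    return 100.0 * (raw - random_baseline) / (human_expert - random_baseline)

# Example: 4-way multiple choice (random = 25% accuracy), human expert = 90%.
print(normalized_score(0.25, 0.25, 0.90))  # 0.0   (random guessing)
print(normalized_score(0.90, 0.25, 0.90))  # 100.0 (human expert)
print(normalized_score(0.20, 0.25, 0.90))  # negative: worse than random
```

With this convention, scores are comparable across tasks with different chance levels, which is why a multiple-choice model can legitimately score below 0.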
2312.11111#11
2312.11111#13
2312.11111
[ "2210.09261" ]
