Columns: "Unnamed: 0" (int64, values 0–770) · "text" (string, lengths 197–4.97k)
0
<s>[INST] What type of modeling has recently demonstrated remarkable capabilities in natural language processing tasks? [/INST] A Comprehensive Overview of Large Language Models. Humza Naveed, Asad Ullah Khan*, Shi Qiu*, Muhammad Saqib*, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, Ajmal Mian. Abstract — Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond </s>
1
<s>[INST] What topics are covered in these works? [/INST] These works encompass diverse topics such as architectural innovations of the underlying neural networks, context length improvements, model alignment, training datasets, benchmarking, efficiency and more </s>
2
<s>[INST] What topics do these works cover? [/INST] These works encompass diverse topics such as architectural innovations of the underlying neural networks, context length improvements, model alignment, training datasets, benchmarking, efficiency and more </s>
3
<s>[INST] What has made it difficult to see the bigger picture of advances in LLM research? [/INST] With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction </s>
4
<s>[INST] With the rapid development of techniques and regular breakthroughs, what has become difficult to perceive? [/INST] With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction </s>
5
<s>[INST] Why is it imperative that the research community is able to benefit from a concise yet comprehensive overview of recent developments in this field? [/INST] Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field </s>
6
<s>[INST] What is the focus of the article? [/INST] It not only focuses on a systematic treatment of the existing literature on a broad range of LLM-related concepts, but also pays special attention to providing comprehensive summaries with extensive details about the individual existing models, datasets and major insights </s>
7
<s>[INST] What does the article provide? [/INST] It not only focuses on a systematic treatment of the existing literature on a broad range of LLM-related concepts, but also pays special attention to providing comprehensive summaries with extensive details about the individual existing models, datasets and major insights </s>
8
<s>[INST] What do we pay attention to by accounting for other recently materializing reviews of the broader research direction? [/INST] We also pay heed to aligning our overview with the emerging outlook of this research direction by accounting for the other recently materializing reviews of the broader research direction of LLMs </s>
9
<s>[INST] What does our comprehensive overview of LLMs cover? [/INST] Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of this research direction </s>
10
<s>[INST] What topics are covered in this comprehensive overview? [/INST] Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of this research direction </s>
11
<s>[INST] What is the purpose of this review article? [/INST] This review article is intended to not only provide a systematic survey, but also a quick comprehensive reference for the researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance the LLM research direction </s>
12
<s>[INST] What is intended to provide a systematic survey? [/INST] This review article is intended to not only provide a systematic survey, but also a quick comprehensive reference for the researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance the LLM research direction </s>
13
<s>[INST] What plays a fundamental role in facilitating communication and self-expression? [/INST] INTRODUCTION: Language plays a fundamental role in facilitating communication and self-expression for humans, and likewise, communication holds paramount importance for machines in their interactions with humans and other systems </s>
14
<s>[INST] What holds paramount importance for machines in their interactions with humans and other systems? [/INST] INTRODUCTION: Language plays a fundamental role in facilitating communication and self-expression for humans, and likewise, communication holds paramount importance for machines in their interactions with humans and other systems </s>
15
<s>[INST] Large Language Models (LLMs) are cutting-edge artificial intelligence systems designed to process and generate what? [/INST] Large Language Models (LLMs) have emerged as cutting-edge artificial intelligence systems designed to process and generate text, aiming to communicate coherently [1] </s>
16
<s>[INST] The need for LLMs stems from the growing demand for machines to handle what? [/INST] The need for LLMs stems from the growing demand for machines to handle complex language tasks, including translation, summarization, information retrieval, and conversational interactions </s>
17
<s>[INST] Translation, summarization, information retrieval, and conversational interactions are examples of what type of tasks? [/INST] The need for LLMs stems from the growing demand for machines to handle complex language tasks, including translation, summarization, information retrieval, and conversational interactions </s>
18
<s>[INST] What has led to significant breakthroughs in language models? [/INST] Recently, significant breakthroughs have been witnessed in language models, primarily attributed to deep learning techniques, among other advancements </s>
19
<s>[INST] What has been attributed to advances in deep learning techniques? [/INST] Recently, significant breakthroughs have been witnessed in language models, primarily attributed to deep learning techniques, among other advancements </s>
21
<s>[INST] What are Large Language Models? [/INST] These developments have brought about a revolutionary transformation by enabling the creation of Large Language Models (LLMs) that can approximate human-level performance on certain evaluation benchmarks [3], [4] </s>
22
<s>[INST] What can LLMs approximate? [/INST] These developments have brought about a revolutionary transformation by enabling the creation of Large Language Models (LLMs) that can approximate human-level performance on certain evaluation benchmarks [3], [4] </s>
23
<s>[INST] Pre-trained language models (PLMs) have shown tremendous generalization abilities for what tasks? [/INST] LLMs, particularly pre-trained language models (PLMs), have shown tremendous generalization abilities for text understanding and generation tasks while trained in a self-supervised setting on a large corpus of text [5], [6], [7] </s>
24
<s>[INST] What have LLMs shown in a self-supervised setting? [/INST] LLMs, particularly pre-trained language models (PLMs), have shown tremendous generalization abilities for text understanding and generation tasks while trained in a self-supervised setting on a large corpus of text [5], [6], [7] </s>
25
<s>[INST] Pre-trained language models (PLMs) perform better when fine-tuned for what? [/INST] The performance of pre-trained language models (PLMs) improves significantly when fine-tuned for downstream tasks, surpassing the performance of models trained from scratch </s>
26
<s>[INST] What is a PLM? [/INST] The performance of pre-trained language models (PLMs) improves significantly when fine-tuned for downstream tasks, surpassing the performance of models trained from scratch </s>
27
<s>[INST] How does the performance of PLMs compare to models trained from scratch? [/INST] The performance of pre-trained language models (PLMs) improves significantly when fine-tuned for downstream tasks, surpassing the performance of models trained from scratch </s>
28
<s>[INST] What motivated researchers to train larger PLMs on even bigger datasets? [/INST] These characteristics of language models motivated researchers to train larger PLMs on even bigger datasets, and it was found that scaling model and dataset size further improves the generalization abilities </s>
29
<s>[INST] Scaling model and dataset size further improves what? [/INST] These characteristics of language models motivated researchers to train larger PLMs on even bigger datasets, and it was found that scaling model and dataset size further improves the generalization abilities </s>
30
<s>[INST] Modern LLMs are capable of performing various tasks like code generation, text generation, tool manipulation and understanding in what settings? [/INST] Now modern LLMs are capable of performing various tasks like code generation, text generation, tool manipulation, reasoning, and understanding in zero-shot and few-shot settings in diverse domains, even without requiring any fine-tuning on downstream tasks [8], [9], [10] </s>
31
<s>[INST] What has sparked enthusiasm and excitement within the research community for the enhancement of? [/INST] This development has sparked enthusiasm and excitement within the research community for the enhancement of LLM architectures and training strategies, leading to the development of numerous LLMs [11], [12], [13], [8], [9], [10], [14] </s>
32
<s>[INST] What has led to the development of numerous LLMs? [/INST] This development has sparked enthusiasm and excitement within the research community for the enhancement of LLM architectures and training strategies, leading to the development of numerous LLMs [11], [12], [13], [8], [9], [10], [14] </s>
33
<s>[INST] Which LLM releases are shown in the chronological timeline of Figure 2? [/INST] Figure 2 chronologically lists LLM releases including Titan, GLaM, LaMDA, WebGPT, MT-NLG, AlphaCode, CodeGen, Chinchilla, GPT-NeoX-20B, PaLM, Tk-Instruct, UL2, OPT, AlexaTM, Sparrow, GLM, U-PaLM, Flan-U-PaLM, BLOOM, Galactica, mT0, ChatGPT, OPT-IML, LLaMA, PanGu-Σ, BloombergGPT, GPT-4, Alpaca, Vicuna, Claude, Bard, HuaTuo, WizardLM, Koala, and Xuan Yuan 2 </s>
34
<s>[INST] What do light blue rectangles represent? [/INST] Fig. 2: Chronological display of LLM releases: light blue rectangles represent 'pre-trained' models, while dark rectangles correspond to 'instruction-tuned' models </s>
35
<s>[INST] What do dark rectangles correspond to? [/INST] Fig. 2: Chronological display of LLM releases: light blue rectangles represent 'pre-trained' models, while dark rectangles correspond to 'instruction-tuned' models </s>
36
<s>[INST] The chart illustrates the increasing trend towards what? [/INST] The chart illustrates the increasing trend towards instruction-tuned models and open-source models, highlighting the evolving landscape and trends in natural language processing research </s>
37
<s>[INST] The chart highlights the evolving landscape and trends in what field? [/INST] The chart illustrates the increasing trend towards instruction-tuned models and open-source models, highlighting the evolving landscape and trends in natural language processing research </s>
38
<s>[INST] What did many research efforts focus on during the early days of Large Language Models? [/INST] During the early days of Large Language Models (LLMs), many research efforts focused on developing models for transfer learning to downstream tasks [11], [12], [15] until the emergence of models like GPT-3 [8], which demonstrated impressive performance even without fine-tuning </s>
39
<s>[INST] What model demonstrated impressive performance even without fine-tuning? [/INST] During the early days of Large Language Models (LLMs), many research efforts focused on developing models for transfer learning to downstream tasks [11], [12], [15] until the emergence of models like GPT-3 [8], which demonstrated impressive performance even without fine-tuning </s>
40
<s>[INST] Why was there a demand for open-source alternatives to GPT-3? [/INST] Due to the closed-source nature of GPT-3, there was a demand for open-source alternatives, leading to the development of various models [9], [10] operating at the scale of GPT-3 and trained on extensive web-based datasets [16], [17], [18], [19] </s>
41
<s>[INST] What models were developed at the scale of GPT-3? [/INST] Due to the closed-source nature of GPT-3, there was a demand for open-source alternatives, leading to the development of various models [9], [10] operating at the scale of GPT-3 and trained on extensive web-based datasets [16], [17], [18], [19] </s>
42
<s>[INST] How were the models trained? [/INST] Due to the closed-source nature of GPT-3, there was a demand for open-source alternatives, leading to the development of various models [9], [10] operating at the scale of GPT-3 and trained on extensive web-based datasets [16], [17], [18], [19] </s>
43
<s>[INST] What did researchers propose that showed superior performance compared to GPT-3 across various tasks? [/INST] Subsequently, researchers proposed several architectural designs and training strategies that showed superior performance compared to GPT-3 across various tasks [15], [14], [20], [21] </s>
44
<s>[INST] Instruction fine-tuning of LLMs refers to a specific training approach by incorporating additional prompts or instructions during what phase? [/INST] Instruction fine-tuning of LLMs refers to a specific training approach by incorporating additional prompts or instructions during the fine-tuning phase to guide the output and thus enable the users to have more fine-grained control over the outputs of LLMs </s>
45
<s>[INST] Parameter-efficient fine-tuning updates only a small number of parameters while still maintaining what? [/INST] However, parameter-efficient fine-tuning takes a different approach by updating only a small number of parameters while still maintaining good performance </s>
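The row above only describes parameter-efficient fine-tuning abstractly (update a small number of parameters, keep performance). A minimal sketch of one popular instantiation is given below; the LoRA-style low-rank adapters, the shapes, and the scaling factor are assumptions for illustration and are not named in the quoted text.

```python
import numpy as np

def lora_linear(x, W_frozen, A, B, alpha=1.0):
    """Parameter-efficient forward pass (assumed LoRA-style example): the large
    pretrained weight W_frozen stays fixed; only the small low-rank factors
    A (d_in x r) and B (r x d_out) would be updated during fine-tuning."""
    return x @ W_frozen + alpha * (x @ A @ B)
```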
46
<s>[INST] What is the main focus of the article? [/INST] It also delves into the details of model development, architectures, training datasets, and other related concepts to provide a self-contained comprehensive overview of this direction </s>
47
<s>[INST] What does the article cover? [/INST] It also delves into the details of model development, architectures, training datasets, and other related concepts to provide a self-contained comprehensive overview of this direction </s>
48
<s>[INST] What does this article address? [/INST] Hence, this article addresses an important gap of providing a concise yet comprehensive overview of the rapidly developing general direction of LLM research </s>
49
<s>[INST] What does the article provide? [/INST] Hence, this article addresses an important gap of providing a concise yet comprehensive overview of the rapidly developing general direction of LLM research </s>
50
<s>[INST] •We present the first survey on the developments in LLM research with the specific aim of providing what? [/INST] •We present the first survey on the developments in LLM research with the specific aim of providing a concise yet comprehensive overview of the direction </s>
51
<s>[INST] What are some of the concepts covered in this self-contained article? [/INST] •In this self-contained article, we cover a range of concepts to comprehend the general direction of LLMs, including background concepts, popular models, crucial discoveries, related datasets and evaluation details etc </s>
52
<s>[INST] Background concepts, popular models, crucial discoveries, related datasets and evaluation details are examples of what? [/INST] •In this self-contained article, we cover a range of concepts to comprehend the general direction of LLMs, including background concepts, popular models, crucial discoveries, related datasets and evaluation details etc </s>
53
<s>[INST] What order of LLMs does the article pay special attention to throughout? [/INST] •Besides paying special attention to the chronological order of LLMs throughout the article, we also summarize major findings of the popular contributions, and provide detailed discussion on the key design and deployment aspects of LLMs to help practitioners to effectively leverage this technology </s>
54
<s>[INST] What does the article summarize major findings of? [/INST] •Besides paying special attention to the chronological order of LLMs throughout the article, we also summarize major findings of the popular contributions, and provide detailed discussion on the key design and deployment aspects of LLMs to help practitioners to effectively leverage this technology </s>
55
<s>[INST] What distinction does this article hold among overviews of LLMs? [/INST] It is noteworthy that although this article is the first contribution in its own right in terms of providing a concise yet comprehensive overview of LLMs, our work complements the recent (and emerging) surveys of this direction, e.g. </s>
56
<s>[INST] What does this article complement? [/INST] It is noteworthy that although this article is the first contribution in its own right in terms of providing a concise yet comprehensive overview of LLMs, our work complements the recent (and emerging) surveys of this direction, e.g. </s>
57
<s>[INST] What is the purpose of this section? [/INST] Aligned with our objective of providing a comprehensive overview of this direction, this section offers a comprehensive yet concise outline of the fundamental concepts </s>
58
<s>[INST] What does this section offer? [/INST] Aligned with our objective of providing a comprehensive overview of this direction, this section offers a comprehensive yet concise outline of the fundamental concepts </s>
59
<s>[INST] LLMs are trained on what to predict text? [/INST] Tokenization: LLMs are trained on text to predict text, and similar to other natural language processing systems, they use tokenization [37] as the essential preprocessing step </s>
60
<s>[INST] What is used as the essential preprocessing step? [/INST] Tokenization: LLMs are trained on text to predict text, and similar to other natural language processing systems, they use tokenization [37] as the essential preprocessing step </s>
61
<s>[INST] WordPiece was introduced in what year? [/INST] WordPiece [41]: It was introduced in [41] as a novel text segmentation technique for Japanese and Korean languages to improve the language model for voice search systems </s>
62
<s>[INST] What was the purpose of WordPiece? [/INST] WordPiece [41]: It was introduced in [41] as a novel text segmentation technique for Japanese and Korean languages to improve the language model for voice search systems </s>
63
<s>[INST] Why was WordPiece introduced in [41]? [/INST] WordPiece [41]: It was introduced in [41] as a novel text segmentation technique for Japanese and Korean languages to improve the language model for voice search systems </s>
64
<s>[INST] What happens when pairs of adjacent symbols are replaced by a new symbol? [/INST] It is an iterative process of generating tokens where pairs of adjacent symbols are replaced by a new symbol, and the occurrences of the most occurring symbols in the input text are merged together </s>
65
<s>[INST] What is the iterative process of generating tokens? [/INST] It is an iterative process of generating tokens where pairs of adjacent symbols are replaced by a new symbol, and the occurrences of the most occurring symbols in the input text are merged together </s>
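The iterative merging described in the two rows above is the core of byte-pair-encoding-style tokenizers. Below is a minimal sketch of that merge loop; the toy corpus handling, the fixed number of merges, and the character-level start are simplifying assumptions rather than any particular tokenizer's implementation.

```python
from collections import Counter

def bpe_merges(words, num_merges=10):
    """Toy BPE-style merging: repeatedly find the most frequent pair of adjacent
    symbols and replace its occurrences with a new merged symbol."""
    corpus = Counter(tuple(w) for w in words)   # each word starts as single characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in corpus.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)        # most frequent adjacent pair
        merges.append(best)
        new_corpus = Counter()
        for symbols, freq in corpus.items():    # replace the chosen pair everywhere
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
    return merges

# e.g. bpe_merges(["low", "lower", "lowest", "low"]) learns merges such as ('l', 'o')
```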
66
<s>[INST] What can be conceived as the programming by the O of which stimuli will be processed or encoded and in what order? [/INST] Selective attention can be conceived as “the programming by the O of which stimuli will be processed or encoded and in what order this will occur” [42] </s>
67
<s>[INST] Where does this definition of selective attention have its roots? [/INST] While this definition has its roots in visual perception, it has uncanny similarities with the recently formulated attention [43], [44] (which stimuli will be processed) and positional encoding (in what order this will occur) [44] in LLMs </s>
68
<s>[INST] What definition has uncanny similarities with the recently formulated attention? [/INST] While this definition has its roots in visual perception, it has uncanny similarities with the recently formulated attention [43], [44] (which stimuli will be processed) and positional encoding (in what order this will occur) [44] in LLMs </s>
69
<s>[INST] How does the attention mechanism compute a representation of the input sequences? [/INST] Attention in LLMs: The attention mechanism computes a representation of the input sequences by relating different positions (tokens) of these sequences </s>
70
<s>[INST] What is the self-attention also known as? [/INST] Self-Attention [44]: The self-attention is also known as intra-attention since all the queries, keys and values come from the same block (encoder or decoder) </s>
71
<s>[INST] What is intra-attention? [/INST] Self-Attention [44]: The self-attention is also known as intra-attention since all the queries, keys and values come from the same block (encoder or decoder) </s>
72
<s>[INST] Where do queries, keys and values come from? [/INST] Self-Attention [44]: The self-attention is also known as intra-attention since all the queries, keys and values come from the same block (encoder or decoder) </s>
73
<s>[INST] The self-attention layer connects all sequence positions to each other with what? [/INST] The self-attention layer connects all the sequence positions to each other with an O(1) maximum path length, which is highly desirable for learning long-range dependencies in the input </s>
74
<s>[INST] What is highly desirable for learning long-range dependencies in the input? [/INST] The self-attention layer connects all the sequence positions to each other with an O(1) maximum path length, which is highly desirable for learning long-range dependencies in the input </s>
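As a concrete illustration of the rows above, here is a minimal single-head self-attention sketch in NumPy; the projection matrices Wq, Wk, Wv and the omission of masking and multiple heads are simplifying assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention: queries, keys, and values all come from the
    same block's input x, and every position attends to every other position
    in a single step."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (seq_len, seq_len) attention scores
    return softmax(scores) @ v                # weighted sum of the values
```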
75
<s>[INST] In cross attention, what do the outputs of the encoder blocks provide? [/INST] Cross Attention: In encoder-decoder architectures, the intermediate representation of the decoder provides the queries, while the outputs of the encoder blocks provide the keys and values, to calculate a representation of the decoder conditioned on the encoder </s>
76
<s>[INST] What provides the keys and values to calculate a representation of the decoder? [/INST] Cross Attention: In encoder-decoder architectures, the intermediate representation of the decoder provides the queries, while the outputs of the encoder blocks provide the keys and values, to calculate a representation of the decoder conditioned on the encoder </s>
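Following the description above (decoder states supply the queries; encoder outputs supply the keys and values), a minimal cross-attention sketch looks like this; the shapes and projection matrices are illustrative assumptions.

```python
import numpy as np

def cross_attention(decoder_x, encoder_out, Wq, Wk, Wv):
    """Cross attention: queries are projected from the decoder's intermediate
    representation, keys and values from the encoder outputs, yielding a
    decoder representation conditioned on the encoder."""
    q = decoder_x @ Wq
    k, v = encoder_out @ Wk, encoder_out @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])                     # (dec_len, enc_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v
```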
77
<s>[INST] What greatly enhanced the capacity of GPT series LLMs to process a greater number of input tokens in a reasonable time? [/INST] An approximation to the self-attention was proposed in [45], which greatly enhanced the capacity of GPT series LLMs to process a greater number of input tokens in a reasonable time </s>
78
<s>[INST] What was proposed in [45]? [/INST] An approximation to the self-attention was proposed in [45], which greatly enhanced the capacity of GPT series LLMs to process a greater number of input tokens in a reasonable time </s>
79
<s>[INST] What approach does Flash Attention use to process the blocks of input in GPU on-chip SRAM rather than doing IO for every token from the High Bandwidth Memory? [/INST] Flash Attention uses the classical input tiling approach in order to process the blocks of the input in GPU on-chip SRAM rather than doing IO for every token from the High Bandwidth Memory (HBM) </s>
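The row above names the idea (process the input in blocks held in on-chip SRAM instead of streaming every score through HBM). The sketch below shows only the numerical side of that tiling, an online-softmax accumulation over key/value blocks; it is a simplified illustration and not the actual FlashAttention kernel, and the block size is an arbitrary assumption.

```python
import numpy as np

def tiled_attention(q, k, v, block=64):
    """Blocked attention: iterate over key/value blocks while keeping a running
    row-wise max and normalizer, so the full (n x n) score matrix is never
    materialized. The real FlashAttention additionally tiles the queries and
    keeps the working blocks in GPU on-chip SRAM."""
    n, d = q.shape
    out = np.zeros_like(q, dtype=float)
    row_max = np.full(n, -np.inf)
    row_sum = np.zeros(n)
    for s in range(0, k.shape[0], block):
        kb, vb = k[s:s + block], v[s:s + block]
        scores = q @ kb.T / np.sqrt(d)                 # (n, block)
        new_max = np.maximum(row_max, scores.max(axis=1))
        scale = np.exp(row_max - new_max)              # rescale previous partial sums
        p = np.exp(scores - new_max[:, None])
        out = out * scale[:, None] + p @ vb
        row_sum = row_sum * scale + p.sum(axis=1)
        row_max = new_max
    return out / row_sum[:, None]
```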
80
<s>[INST] What is the most straightforward approach to adding sequence order information? [/INST] Absolute: This is the most straightforward approach to adding the sequence order information by assigning a unique identifier to each position of the sequence before passing it to the attention module </s>
81
<s>[INST] What is assigned to each position of the sequence before passing it to the attention module? [/INST] Absolute: This is the most straightforward approach to adding the sequence order information by assigning a unique identifier to each position of the sequence before passing it to the attention module </s>
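The two rows above describe absolute positional encoding only abstractly (a unique identifier per position, added before the attention module). One common concrete instantiation, assumed here from the original Transformer rather than stated in the text, is the fixed sinusoidal encoding:

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """Fixed absolute positional encoding: each position gets a unique vector of
    interleaved sines and cosines, added to the token embeddings before the
    attention module. Assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]          # even feature indices
    angles = pos / (10000.0 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```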
82
<s>[INST] What is calculated to pass the information of the relative dependencies of different tokens appearing at different locations in the sequence? [/INST] Relative: In order to pass the information of the relative dependencies of different tokens appearing at different locations in the sequence, a relative positional encoding is calculated by some kind of learning </s>
83
<s>[INST] How is a relative positional encoding calculated? [/INST] Relative: In order to pass the information of the relative dependencies of different tokens appearing at different locations in the sequence, a relative positional encoding is calculated by some kind of learning </s>
84
<s>[INST] What are two famous types of relative encodings? [/INST] Two famous types of relative encodings are: Alibi [47]: In this approach, a scalar bias, which increases with the distance between the positions of the tokens, is subtracted from the attention score calculated using two tokens </s>
85
<s>[INST] What is subtracted from the attention score calculated using two tokens in Alibi? [/INST] Two famous types of relative encodings are: Alibi [47]: In this approach, a scalar bias, which increases with the distance between the positions of the tokens, is subtracted from the attention score calculated using two tokens </s>
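A minimal sketch of the Alibi-style bias described above: a distance-proportional penalty is subtracted from the raw attention scores before the softmax. The single example slope and the bidirectional (absolute-distance) form are assumptions; ALiBi itself uses a fixed geometric series of slopes, one per head, in a causal setting.

```python
import numpy as np

def alibi_bias(scores, slope=0.0625):
    """Subtract a scalar bias that grows linearly with the distance between the
    query position i and the key position j from the attention scores."""
    n_q, n_k = scores.shape
    i = np.arange(n_q)[:, None]
    j = np.arange(n_k)[None, :]
    return scores - slope * np.abs(i - j)
```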
86
<s>[INST] What involves the rotation of the query and key representations at an angle proportional to the absolute positions of the tokens in the input sequence? [/INST] RoPE [48] involves the rotation of the query and key representations at an angle proportional to the absolute positions of the tokens in the input sequence </s>
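A minimal sketch of the rotation described above, applied to a query or key matrix of shape (seq_len, d) with d even; the 10000 base follows the usual RoPE convention and is an assumption here.

```python
import numpy as np

def rope(x):
    """Rotary positional embedding: rotate each (even, odd) pair of feature
    dimensions of x by an angle proportional to the token's absolute position."""
    seq_len, d = x.shape
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    freqs = 10000.0 ** (-np.arange(0, d, 2) / d)       # one frequency per pair
    angles = pos * freqs                               # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x, dtype=float)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```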
87
<s>[INST] What is a gated linear unit? [/INST] GLU variants [55]: Gated Linear Unit [56] is a neural network layer that is an element-wise product (⊗) of a linear transformation and a sigmoid-transformed (σ) linear projection of the input, given as GLU(x, W, V, b, c) = (xW + b) ⊗ σ(xV + c), where x is the input of the layer and W, V, b, and c are learned parameters </s>
88
<s>[INST] In the GLU formula, what is x? [/INST] GLU variants [55]: Gated Linear Unit [56] is a neural network layer that is an element-wise product (⊗) of a linear transformation and a sigmoid-transformed (σ) linear projection of the input, given as GLU(x, W, V, b, c) = (xW + b) ⊗ σ(xV + c), where x is the input of the layer and W, V, b, and c are learned parameters </s>
89
<s>[INST] In the GLU formula, what are W, V, b, and c? [/INST] GLU variants [55]: Gated Linear Unit [56] is a neural network layer that is an element-wise product (⊗) of a linear transformation and a sigmoid-transformed (σ) linear projection of the input, given as GLU(x, W, V, b, c) = (xW + b) ⊗ σ(xV + c), where x is the input of the layer and W, V, b, and c are learned parameters </s>
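A direct transcription of the GLU formula above into NumPy; parameter shapes are left to the caller and the sigmoid helper is defined locally.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu(x, W, V, b, c):
    """GLU(x, W, V, b, c) = (xW + b) ⊗ σ(xV + c): element-wise product of a
    linear transformation and a sigmoid-gated linear projection of the input."""
    return (x @ W + b) * sigmoid(x @ V + c)
```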
90
<s>[INST] When was GLU modified? [/INST] GLU was modified in [55] to evaluate the effect of different variations in the training and testing of transformers, resulting in better empirical results </s>
91
<s>[INST] What did GLU result in? [/INST] GLU was modified in [55] to evaluate the effect of different variations in the training and testing of transformers, resulting in better empirical results </s>
92
<s>[INST] What computes statistics over all the hidden units in a layer? [/INST] LayerNorm: Layer norm computes statistics over all the hidden units in a layer l as follows: u^l = (1/n) Σ_i a_i^l and σ^l = sqrt((1/n) Σ_i (a_i^l − u^l)²), where n is the number of neurons in layer l and a_i^l is the summed input of the i-th neuron in layer l </s>
93
<s>[INST] In the LayerNorm formulas, what is a_i^l? [/INST] LayerNorm: Layer norm computes statistics over all the hidden units in a layer l as follows: u^l = (1/n) Σ_i a_i^l and σ^l = sqrt((1/n) Σ_i (a_i^l − u^l)²), where n is the number of neurons in layer l and a_i^l is the summed input of the i-th neuron in layer l </s>
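A minimal sketch of the statistics above; the epsilon for numerical stability and the omission of the learned gain and bias that usually follow the normalization are assumptions not spelled out in the quoted text.

```python
import numpy as np

def layer_norm(a, eps=1e-5):
    """Normalize the summed inputs a of one layer by the mean and standard
    deviation computed over its hidden units (last axis)."""
    mu = a.mean(axis=-1, keepdims=True)        # u^l
    sigma = a.std(axis=-1, keepdims=True)      # σ^l
    return (a - mu) / (sigma + eps)
```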
94
<s>[INST] What did the authors of RMSNorm propose about the invariance properties of LayerNorm? [/INST] RMSNorm: [57] proposed that the invariance properties of LayerNorm are spurious, and we can achieve the same performance benefits as we get from LayerNorm by using a computationally efficient normalization technique that trades off re-centering invariance with speed </s>
95
<s>[INST] How can we achieve the same performance benefits as we get from LayerNorm? [/INST] RMSNorm: [57] proposed that the invariance properties of LayerNorm are spurious, and we can achieve the same performance benefits as we get from LayerNorm by using a computationally efficient normalization technique that trades off re-centering invariance with speed </s>
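A minimal sketch of the trade-off described above: RMSNorm drops the re-centering (mean subtraction) and rescales by the root mean square alone; the epsilon and the omission of the learned gain are assumptions.

```python
import numpy as np

def rms_norm(a, eps=1e-5):
    """Rescale activations by their root mean square over the hidden units,
    without subtracting the mean as LayerNorm does."""
    rms = np.sqrt(np.mean(a ** 2, axis=-1, keepdims=True) + eps)
    return a / rms
```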
96
<s>[INST] What did the original implementation use after the residual connection, commonly called post-LN? [/INST] The original implementation [44] used layer normalization after the residual connection, commonly called post-LN, concerning the order of Multihead attention – Residual – LN </s>
97
<s>[INST] What is the order of Multihead attention – Residual – LN called? [/INST] The original implementation [44] used layer normalization after the residual connection, commonly called post-LN, concerning the order of Multihead attention – Residual – LN </s>
98
<s>[INST] What is another order of normalization called? [/INST] There is another order of the normalization, referred to as pre-LN [58] due to the position of the normalization step before the self-attention layer as in LN – Multihead attention – Residual </s>
99
<s>[INST] Why is this ordering referred to as pre-LN? [/INST] There is another order of the normalization, referred to as pre-LN [58] due to the position of the normalization step before the self-attention layer as in LN – Multihead attention – Residual </s>
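The two orderings contrasted in the rows above can be summarized in a small sketch; attn and ln stand for any attention sublayer and layer-normalization function (hypothetical callables, not a specific implementation).

```python
def post_ln_block(x, attn, ln):
    # post-LN (original Transformer): Multihead attention -> Residual -> LN
    return ln(x + attn(x))

def pre_ln_block(x, attn, ln):
    # pre-LN: LN -> Multihead attention -> Residual
    return x + attn(ln(x))
```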