Large Language Models: A Survey

Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, Jianfeng Gao

Abstract—Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks since the release of ChatGPT in November 2022. LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data, as predicted by scaling laws [1], [2]. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions, and limitations. We also give an overview of techniques developed to build and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions.

I. INTRODUCTION

Language modeling is a long-standing research topic, dating back to the 1950s with Shannon's application of information theory to human language, where he measured how well simple n-gram language models predict or compress natural language text [3]. Since then, statistical language modeling has become fundamental to many natural language understanding and generation tasks, ranging from speech recognition and machine translation to information retrieval [4], [5], [6].

The recent advances on transformer-based large language models (LLMs), pre-trained on Web-scale text corpora, have significantly extended the capabilities of language models. For example, OpenAI's ChatGPT and GPT-4, which power Microsoft's Co-Pilot systems, can be used not only for natural language processing, but also as general task solvers: they can follow human instructions for complex new tasks, performing multi-step reasoning when needed. LLMs are thus becoming the basic building block for the development of general-purpose AI agents, or artificial general intelligence (AGI).

As the field of LLMs is moving fast, with new findings, models, and techniques being published in a matter of months or weeks [7], [8], [9], [10], [11], AI researchers and practitioners often find it challenging to figure out the best recipes for building LLM-powered AI systems for their tasks. This paper gives a timely survey of the recent advances on LLMs. We hope this survey will prove a valuable and accessible resource for students, researchers, and developers.

LLMs are large-scale, pre-trained, statistical language models based on neural networks. The recent success of LLMs is an accumulation of decades of research and development of language models, which can be categorized into four waves with different starting points and velocities: statistical language models, neural language models, pre-trained language models, and LLMs.

Statistical language models (SLMs) view text as a sequence of words and estimate the probability of a text as the product of its word probabilities. The dominant form of SLMs are Markov chain models known as n-gram models, which compute the probability of a word conditioned on its immediately preceding n − 1 words. Since word probabilities are estimated using word and n-gram counts collected from text corpora, the model needs to deal with data sparsity (i.e., assigning zero probabilities to unseen words or n-grams) by using smoothing, where some probability mass of the model is reserved for unseen n-grams [12]. N-gram models are widely used in many NLP systems. However, these models are incomplete in that they cannot fully capture the diversity and variability of natural language due to data sparsity.
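To make the counting and smoothing steps concrete, the following minimal Python sketch estimates a bigram model with add-k (Laplace) smoothing. The toy corpus and the choice k = 1 are illustrative assumptions, not part of the survey; a real SLM would be estimated from a large text collection.

    from collections import Counter

    # Toy corpus; a real SLM is estimated from a large text collection.
    corpus = "the cat sat on the mat the cat ran".split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    V = len(unigrams)  # vocabulary size

    def p_bigram(w_prev, w, k=1.0):
        """P(w | w_prev) with add-k (Laplace) smoothing, so unseen
        bigrams receive a small, non-zero probability mass."""
        return (bigrams[(w_prev, w)] + k) / (unigrams[w_prev] + k * V)

    # Seen vs. unseen bigram: both get non-zero probability.
    print(p_bigram("the", "cat"))  # seen bigram: relatively high
    print(p_bigram("cat", "mat"))  # unseen bigram: > 0 thanks to smoothing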
Early neural language models (NLMs) [13], [14], [15], [16] deal with data sparsity by mapping words to low-dimensional continuous vectors (embedding vectors) and predicting the next word based on an aggregation of the embedding vectors of its preceding words, using neural networks. The embedding vectors learned by NLMs define a hidden space in which the semantic similarity between vectors can be readily computed as their distance. This opens the door to computing the semantic similarity of any two inputs regardless of their forms (e.g., queries vs. documents in Web search [17], [18], sentences in different languages in machine translation [19], [20]) or modalities (e.g., image and text in image captioning [21], [22]). Early NLMs are task-specific models, in that they are trained on task-specific data and their learned hidden space is task-specific.
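As a minimal illustration of similarity in such a hidden space, the sketch below scores embedding vectors with cosine similarity. The 4-dimensional vectors here are made-up placeholders; a trained NLM would produce the actual embeddings.

    import numpy as np

    def cosine_similarity(u, v):
        """Semantic similarity of two embedding vectors as the cosine of
        the angle between them (1 = same direction, 0 = orthogonal)."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical 4-d embeddings; a real model learns these from data.
    query = np.array([0.9, 0.1, 0.3, 0.0])
    doc_a = np.array([0.8, 0.2, 0.4, 0.1])  # close in the hidden space
    doc_b = np.array([0.0, 0.9, 0.1, 0.8])  # far in the hidden space

    print(cosine_similarity(query, doc_a))  # high (~0.98)
    print(cosine_similarity(query, doc_b))  # low  (~0.10)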
Pre-trained language models (PLMs), unlike early NLMs, are task-agnostic. This generality also extends to the learned hidden embedding space. The training and inference of PLMs follow the pre-training and fine-tuning paradigm, in which language models with recurrent neural networks [23] or transformers [24], [25], [26] are pre-trained on Web-scale unlabeled text corpora for general tasks such as word prediction, and then fine-tuned to specific tasks using small amounts of (labeled) task-specific data. Recent surveys on PLMs include [8], [27], [28].

Large language models (LLMs) mainly refer to transformer-based neural language models¹ that contain tens to hundreds of billions of parameters and are pre-trained on massive text data, such as PaLM [31], LLaMA [32], and GPT-4 [33], as summarized in Table III. Compared to PLMs, LLMs are not only much larger in model size, but also exhibit stronger language understanding and generation abilities and, more importantly, emergent abilities that are not present in smaller-scale language models.

¹ Recently, several very promising non-transformer LLMs have been proposed, such as the LLMs based on structured state space models [29], [30]. See Section VII for more details.

Fig. 1: LLM Capabilities.

As illustrated in Fig. 1, these emergent abilities include (1) in-context learning, where LLMs learn a new task from a small set of examples presented in the prompt at inference time; (2) instruction following, where LLMs, after instruction tuning, can follow the instructions for new types of tasks without using explicit examples; and (3) multi-step reasoning, where LLMs can solve a complex task by breaking it down into intermediate reasoning steps, as demonstrated in the chain-of-thought prompt [34]. LLMs can also be augmented with external knowledge and tools [35], [36], so that they can effectively interact with users and the environment [37] and continually improve themselves using feedback data collected through interactions (e.g., via reinforcement learning from human feedback (RLHF)).

Through advanced usage and augmentation techniques, LLMs can be deployed as so-called AI agents: artificial entities that sense their environment, make decisions, and take actions. Previous research has focused on developing agents for specific tasks and domains. The emergent abilities demonstrated by LLMs make it possible to build general-purpose AI agents based on LLMs. While LLMs are trained to produce responses in static settings, AI agents need to take actions to interact with dynamic environments. Therefore, LLM-based agents often need to augment LLMs to, e.g., obtain updated information from external knowledge bases, verify whether a system action produces the expected result, and cope with cases when things do not go as expected. We will discuss LLM-based agents in detail in Section IV.
In the rest of this paper, Section II presents an overview of the state of the art of LLMs, focusing on three LLM families (GPT, LLaMA, and PaLM) and other representative models. Section III discusses how LLMs are built. Section IV discusses how LLMs are used and augmented for real-world applications. Sections V and VI review popular datasets and benchmarks for evaluating LLMs and summarize the reported LLM evaluation results. Finally, Section VII concludes the paper by summarizing the challenges and future research directions.

Fig. 2: The paper structure.

II. LARGE LANGUAGE MODELS

In this section we start with a review of early pre-trained neural language models, as they are the base of LLMs, and then focus our discussion on three families of LLMs: GPT, LLaMA, and PaLM. Table I provides an overview of some of these models and their characteristics.

A. Early Pre-trained Neural Language Models

Language modeling using neural networks was pioneered by [38], [39], [40]. Bengio et al. [13] developed one of the first neural language models (NLMs) that are comparable to n-gram models. Then, [14] successfully applied NLMs to machine translation. The release of RNNLM (an open-source NLM toolkit) by Mikolov [41], [42] helped significantly popularize NLMs. Afterwards, NLMs based on recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) [19] and gated recurrent unit (GRU) [20], were widely used for many natural language applications, including machine translation, text generation, and text classification [43].

Then, the invention of the Transformer architecture [44] marked another milestone in the development of NLMs. By applying self-attention to compute, in parallel for every word in a sentence or document, an "attention score" that models the influence each word has on the others, Transformers allow for much more parallelization than RNNs, which makes it possible to efficiently pre-train very big language models on large amounts of data on GPUs. These pre-trained language models (PLMs) can be fine-tuned for many downstream tasks.

We group early popular Transformer-based PLMs, based on their neural architectures, into three main categories: encoder-only, decoder-only, and encoder-decoder models. Comprehensive surveys of early PLMs are provided in [43], [28].

1) Encoder-only PLMs: As the name suggests, the encoder-only models consist only of an encoder network. These models were originally developed for language understanding tasks, such as text classification, where the model needs to predict a class label for an input text. Representative encoder-only models include BERT and its variants, e.g., RoBERTa, ALBERT, DeBERTa, XLM, XLNet, and UNILM, as described below.
TABLE I: High-level Overview of Popular Language Models

Type | Model Name | #Parameters | Release | Base Models | Open Source | #Tokens | Training dataset
Encoder-Only | BERT | 110M, 340M | 2018 | - | ✓ | 137B | BooksCorpus, English Wikipedia
Encoder-Only | RoBERTa | 355M | 2019 | - | ✓ | 2.2T | BooksCorpus, English Wikipedia, CC-NEWS, STORIES (a subset of Common Crawl), Reddit
Encoder-Only | ALBERT | 12M, 18M, 60M, 235M | 2019 | - | ✓ | 137B | BooksCorpus, English Wikipedia
Encoder-Only | DeBERTa | - | 2020 | - | ✓ | - | BooksCorpus, English Wikipedia, STORIES, Reddit content
Encoder-Only | XLNet | 110M, 340M | 2019 | - | ✓ | 32.89B | BooksCorpus, English Wikipedia, Giga5, Common Crawl, ClueWeb 2012-B
Decoder-Only | GPT-1 | 120M | 2018 | - | ✓ | 1.3B | BooksCorpus
Decoder-Only | GPT-2 | 1.5B | 2019 | - | ✓ | 10B | Reddit outbound links
Encoder-Decoder | T5 (Base) | 223M | 2019 | - | ✓ | 156B | Common Crawl
Encoder-Decoder | mT5 (Base) | 300M | 2020 | - | ✓ | - | New Common Crawl-based dataset in 101 languages (mC4)
Encoder-Decoder | BART (Base) | 139M | 2019 | - | ✓ | - | Corrupting text
GPT Family | GPT-3 | 125M, 350M, 760M, 1.3B, 2.7B, 6.7B, 13B, 175B | 2020 | - | × | 300B | Common Crawl (filtered), WebText2, Books1, Books2, Wikipedia
GPT Family | CODEX | 12B | 2021 | GPT | ✓ | - | Public GitHub software repositories
GPT Family | WebGPT | 760M, 13B, 175B | 2021 | GPT-3 | × | - | ELI5
GPT Family | GPT-4 | 1.76T | 2023 | - | × | 13T | -
LLaMA Family | LLaMA1 | 7B, 13B, 33B, 65B | 2023 | - | ✓ | 1T, 1.4T | Online sources
LLaMA Family | LLaMA2 | 7B, 13B, 34B, 70B | 2023 | - | ✓ | 2T | Online sources
LLaMA Family | Alpaca | 7B | 2023 | LLaMA1 | ✓ | - | GPT-3.5
LLaMA Family | Vicuna-13B | 13B | 2023 | LLaMA1 | ✓ | - | GPT-3.5
LLaMA Family | Koala | 13B | 2023 | LLaMA | ✓ | - | Dialogue data
LLaMA Family | Mistral-7B | 7.3B | 2023 | - | ✓ | - | -
LLaMA Family | Code Llama | 34B | 2023 | LLaMA2 | ✓ | 500B | Publicly available code
LLaMA Family | LongLLaMA | 3B, 7B | 2023 | OpenLLaMA | ✓ | 1T | -
LLaMA Family | LLaMA-Pro-8B | 8.3B | 2024 | LLaMA2-7B | ✓ | 80B | Code and math corpora
LLaMA Family | TinyLlama-1.1B | 1.1B | 2024 | LLaMA1.1B | ✓ | 3T | SlimPajama, Starcoderdata
PaLM Family | PaLM | 8B, 62B, 540B | 2022 | - | × | 780B | Web documents, books, Wikipedia, conversations, GitHub code
PaLM Family | U-PaLM | 8B, 62B, 540B | 2022 | - | × | 1.3B | Web documents, books, Wikipedia, conversations, GitHub code
PaLM Family | PaLM-2 | 340B | 2023 | - | ✓ | 3.6T | Web documents, books, code, mathematics, conversational data
PaLM Family | Med-PaLM | 540B | 2022 | PaLM | × | 780B | HealthSearchQA, MedicationQA, LiveQA
PaLM Family | Med-PaLM 2 | - | 2023 | PaLM 2 | × | - | MedQA, MedMCQA, HealthSearchQA, LiveQA, MedicationQA
Other Popular LLMs | FLAN | 137B | 2021 | LaMDA-PT | ✓ | - | Web documents, code, dialog data, Wikipedia
Other Popular LLMs | Gopher | 280B | 2021 | - | × | 300B | MassiveText
Other Popular LLMs | ERNIE 4.0 | 10B | 2023 | - | × | 4TB | Chinese text
Other Popular LLMs | Retro | 7.5B | 2021 | - | × | 600B | MassiveText
Other Popular LLMs | LaMDA | 137B | 2022 | - | × | 168B | Public dialog data and web documents
Other Popular LLMs | ChinChilla | 70B | 2022 | - | × | 1.4T | MassiveText
Other Popular LLMs | Galactica-120B | 120B | 2022 | - | - | 450B | -
Other Popular LLMs | CodeGen | 16.1B | 2022 | - | ✓ | - | THE PILE, BIGQUERY, BIGPYTHON
Other Popular LLMs | BLOOM | 176B | 2022 | - | ✓ | 366B | ROOTS
Other Popular LLMs | Zephyr | 7.24B | 2023 | Mistral-7B | ✓ | 800B | Synthetic data
Other Popular LLMs | Grok-0 | 33B | 2023 | - | × | - | Online source
Other Popular LLMs | ORCA-2 | 13B | 2023 | LLaMA2 | - | 2001B | -
Other Popular LLMs | StarCoder | 15.5B | 2023 | - | ✓ | 35B | GitHub
Other Popular LLMs | MPT | 7B | 2023 | - | ✓ | 1T | RedPajama, m Common Crawl, S2ORC, Common Crawl
Other Popular LLMs | Mixtral-8x7B | 46.7B | 2023 | - | ✓ | - | Instruction dataset
Other Popular LLMs | Falcon 180B | 180B | 2023 | - | ✓ | 3.5T | RefinedWeb
Other Popular LLMs | Gemini | 1.8B, 3.25B | 2023 | - | ✓ | - | Web documents, books, and code, image data, audio data, video data
Other Popular LLMs | DeepSeek-Coder | 1.3B, 6.7B, 33B | 2024 | - | ✓ | 2T | GitHub's Markdown and StackExchange
Other Popular LLMs | DocLLM | 1B, 7B | 2024 | - | × | 2T | IIT-CDIP Test Collection 1.0, DocBank
BERT (Bidirectional Encoder Representations from Transformers) [24] is one of the most widely used encoder-only language models. BERT consists of three modules: (1) an embedding module that converts input text into a sequence of embedding vectors, (2) a stack of Transformer encoders that converts the embedding vectors into contextual representation vectors, and (3) a fully connected layer that converts the representation vectors (at the final layer) to one-hot vectors. BERT is pre-trained using two objectives: masked language modeling (MLM) and next sentence prediction. The pre-trained BERT model can be fine-tuned by adding a classifier layer for many language understanding tasks, ranging from text classification and question answering to language inference. A high-level overview of the BERT framework is shown in Fig. 3.

Fig. 3: Overall pre-training and fine-tuning procedures for BERT. Courtesy of [24].
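To make the MLM objective concrete, the sketch below corrupts a sentence BERT-style: roughly 15% of positions are selected, of which 80% become [MASK], 10% become a random token, and 10% are left unchanged, and the model must recover the originals. The rates follow the BERT paper; the sentence and the use of the sentence itself as a stand-in vocabulary are illustrative assumptions.

    import random

    def mask_for_mlm(tokens, mask_rate=0.15, mask_token="[MASK]", vocab=None):
        """BERT-style MLM corruption: select ~15% of positions; replace
        80% of them with [MASK], 10% with a random token, and keep 10%
        unchanged. Returns corrupted tokens and prediction targets."""
        vocab = vocab or tokens  # stand-in vocabulary for illustration
        corrupted, targets = list(tokens), {}
        for i, tok in enumerate(tokens):
            if random.random() < mask_rate:
                targets[i] = tok  # the model must recover this token
                r = random.random()
                if r < 0.8:
                    corrupted[i] = mask_token
                elif r < 0.9:
                    corrupted[i] = random.choice(vocab)
                # else: leave the token unchanged
        return corrupted, targets

    tokens = "the quick brown fox jumps over the lazy dog".split()
    print(mask_for_mlm(tokens))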
As BERT significantly improved the state of the art on a wide range of language understanding tasks when it was published, the AI community was inspired to develop many similar encoder-only language models based on BERT.

RoBERTa [25] significantly improves the robustness of BERT using a set of model design choices and training strategies, such as modifying a few key hyperparameters, removing the next-sentence pre-training objective, and training with much larger mini-batches and learning rates. ALBERT [45] uses two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT: (1) splitting the embedding matrix into two smaller matrices, and (2) using repeating layers split among groups. DeBERTa (Decoding-enhanced BERT with disentangled attention) [26] improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a novel virtual adversarial training method is used for fine-tuning to improve the models' generalization.

ELECTRA [46] uses a new pre-training task, known as replaced token detection (RTD), which is empirically proven to be more sample-efficient than MLM. Instead of masking the input, RTD corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained to predict whether a token in the corrupted input was replaced by a generated sample or not. RTD is more sample-efficient than MLM because it is defined over all input tokens rather than just the small subset being masked out, as illustrated in Fig. 4.

Fig. 4: A comparison between replaced token detection and masked language modeling. Courtesy of [46].

XLMs [47] extended BERT to cross-lingual language models using two methods: (1) an unsupervised method that only relies on monolingual data, and (2) a supervised method that leverages parallel data with a new cross-lingual language model objective, as illustrated in Fig. 5. XLMs obtained state-of-the-art results on cross-lingual classification and on unsupervised and supervised machine translation at the time they were proposed.

Fig. 5: Cross-lingual language model pretraining. The MLM objective is similar to BERT, but with continuous streams of text as opposed to sentence pairs. The TLM objective extends MLM to pairs of parallel sentences. To predict a masked English word, the model can attend to both the English sentence and its French translation, and is encouraged to align English and French representations. Courtesy of [47].

There are also encoder-only language models that leverage the advantages of auto-regressive (decoder) models for model training and inference. Two examples are XLNet and UNILM. XLNet [48] is based on Transformer-XL, pre-trained using a generalized autoregressive method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order. UNILM (UNIfied pre-trained Language Model) [49] is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. This is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction is conditioned on, as illustrated in Fig. 6. The pre-trained model can be fine-tuned for both natural language understanding and generation tasks.

Fig. 6: Overview of unified LM pre-training. The model parameters are shared across the LM objectives (i.e., bidirectional LM, unidirectional LM, and sequence-to-sequence LM). Courtesy of [49].
2) Decoder-only PLMs: Two of the most widely used decoder-only PLMs are GPT-1 and GPT-2, developed by OpenAI. These models laid the foundation for the more powerful LLMs that followed, i.e., GPT-3 and GPT-4.

GPT-1 [50] demonstrated for the first time that good performance over a wide range of natural language tasks can be obtained by Generative Pre-Training (GPT) of a decoder-only Transformer model on a diverse corpus of unlabeled text in a self-supervised learning fashion (i.e., next word/token prediction), followed by discriminative fine-tuning on each specific downstream task (with far fewer samples), as illustrated in Fig. 7. GPT-1 paved the way for subsequent GPT models, with each version improving upon the architecture and achieving better performance on various language tasks.

Fig. 7: High-level overview of GPT pretraining and fine-tuning steps. Courtesy of OpenAI.

GPT-2 [51] shows that language models are able to learn to perform specific natural language tasks without any explicit supervision when trained on a large WebText dataset consisting of millions of webpages. The GPT-2 model follows the model design of GPT-1 with a few modifications: layer normalization is moved to the input of each sub-block, additional layer normalization is added after the final self-attention block, initialization is modified to account for the accumulation on the residual path by scaling the weights of residual layers, the vocabulary size is expanded to 50,257, and the context size is increased from 512 to 1024 tokens.
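To make the self-supervised next-token objective concrete, the sketch below enumerates the (context, target) training pairs implied by a single token sequence; the pre-training loss is the sum of −log P(target | context) over all such pairs. The example sentence is an illustrative assumption.

    # Decoder-only pre-training data: each position predicts the next token.
    tokens = ["LLMs", "are", "trained", "to", "predict", "the", "next", "token"]

    # The (context, target) pairs implied by one sequence; the model is
    # trained to maximize log P(target | context) at every position.
    for t in range(1, len(tokens)):
        context, target = tokens[:t], tokens[t]
        print(f"P({target!r} | {' '.join(context)!r})")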
3) Encoder-Decoder PLMs: In [52], Raffel et al. showed that almost all NLP tasks can be cast as sequence-to-sequence generation tasks. Thus, an encoder-decoder language model is, by design, a unified model, in that it can perform all natural language understanding and generation tasks. Representative encoder-decoder PLMs we review below are T5, mT5, MASS, and BART.

T5 [52] is a Text-to-Text Transfer Transformer model, in which transfer learning is effectively exploited for NLP via the introduction of a unified framework that casts all NLP tasks as text-to-text generation tasks. mT5 [53] is a multilingual variant of T5, pre-trained on a new Common Crawl-based dataset consisting of texts in 101 languages.

MASS (MAsked Sequence to Sequence pre-training) [54] adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence. The encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and the decoder predicts the masked fragment. In this way, MASS jointly trains the encoder and decoder for language embedding and generation, respectively.

BART [55] uses a standard sequence-to-sequence translation model architecture. It is pre-trained by corrupting text with an arbitrary noising function and then learning to reconstruct the original text.
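To illustrate the text-to-text format that T5 popularized, the sketch below shows how heterogeneous tasks reduce to (input text, output text) pairs distinguished only by a task prefix. The prefixes follow the style used in the T5 paper; the specific strings and truncated examples are illustrative.

    # Every task becomes "input text -> output text" with a task prefix.
    t5_style_examples = [
        ("translate English to German: That is good.", "Das ist gut."),
        ("cola sentence: The course is jumping well.", "unacceptable"),
        ("summarize: state authorities dispatched emergency crews ...",
         "authorities dispatched crews ..."),
    ]
    for source, target in t5_style_examples:
        print(f"{source!r} -> {target!r}")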
B. Large Language Model Families

Large language models (LLMs) mainly refer to transformer-based PLMs that contain tens to hundreds of billions of parameters. Compared to the PLMs reviewed above, LLMs are not only much larger in model size, but also exhibit stronger language understanding and generation abilities, as well as emergent abilities that are not present in smaller-scale models. In what follows, we review three LLM families: GPT, LLaMA, and PaLM, as illustrated in Fig. 8.

Fig. 8: Popular LLM Families.

1) The GPT Family: Generative Pre-trained Transformers (GPT) are a family of decoder-only Transformer-based language models developed by OpenAI. This family consists of GPT-1, GPT-2, GPT-3, InstructGPT, ChatGPT, GPT-4, CODEX, and WebGPT. Although early GPT models, such as GPT-1 and GPT-2, are open-source, recent models, such as GPT-3 and GPT-4, are closed-source and can only be accessed via APIs. GPT-1 and GPT-2 were discussed in the early PLM subsection above. We start with GPT-3 below.

GPT-3 [56] is a pre-trained autoregressive language model with 175 billion parameters. GPT-3 is widely considered the first LLM, in that it not only is much larger than previous PLMs, but also demonstrates, for the first time, emergent abilities that are not observed in previous smaller PLMs. GPT-3 shows the emergent ability of in-context learning, which means GPT-3 can be applied to any downstream task without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieved strong performance on many NLP tasks, including translation, question answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, and 3-digit arithmetic. Fig. 9 plots the performance of GPT-3 as a function of the number of examples in in-context prompts.

Fig. 9: GPT-3 shows that larger models make increasingly efficient use of in-context information. It shows in-context learning performance on a simple task requiring the model to remove random symbols from a word, both with and without a natural language task description. Courtesy of [56].
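To make in-context learning concrete, here is a few-shot prompt in the style of the translation demonstrations from the GPT-3 paper: the task is specified entirely in text, with no gradient updates, and the model is expected to continue the pattern on the final line.

    few_shot_prompt = """Translate English to French:

    sea otter => loutre de mer
    peppermint => menthe poivrée
    plush giraffe => girafe en peluche
    cheese =>"""

    # The completion of the last line (e.g. "fromage") is produced by the
    # model at inference time, conditioned only on the prompt above.
    print(few_shot_prompt)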
CODEX [57], released by OpenAI in 2021, is a general-purpose programming model that can parse natural language and generate code in response. CODEX is a descendant of GPT-3, fine-tuned for programming applications on code corpora collected from GitHub. CODEX powers Microsoft's GitHub Copilot.

WebGPT [58] is another descendant of GPT-3, fine-tuned to answer open-ended questions using a text-based web browser, which lets users search and navigate the web. Specifically, WebGPT is trained in three steps. First, WebGPT learns to mimic human browsing behaviors using human demonstration data. Then, a reward function is learned to predict human preferences. Finally, WebGPT is refined to optimize the reward function via reinforcement learning and rejection sampling.

To enable LLMs to follow human instructions as expected, InstructGPT [59] was proposed to align language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, a dataset of labeler demonstrations of the desired model behavior is collected, and GPT-3 is fine-tuned on this dataset. Then, a dataset of human-ranked model outputs is collected to further fine-tune the model using reinforcement learning. This method is known as Reinforcement Learning from Human Feedback (RLHF), as shown in Fig. 10. The resulting InstructGPT models show improvements in truthfulness and reductions in toxic output generation, while having minimal performance regressions on public NLP datasets.

Fig. 10: The high-level overview of RLHF. Courtesy of [59].
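A key ingredient of RLHF is the reward model trained on human preference rankings. The following is a minimal sketch of the pairwise (Bradley-Terry style) loss commonly used for this step: the loss shrinks as the reward model scores the human-preferred response higher than the rejected one. The numeric scores are made up for illustration.

    import math

    def reward_model_loss(r_chosen, r_rejected):
        """Pairwise preference loss for an RLHF reward model:
        -log sigmoid(r_chosen - r_rejected)."""
        return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

    print(reward_model_loss(2.0, 0.5))  # preference respected: small loss (~0.20)
    print(reward_model_loss(0.5, 2.0))  # preference violated: large loss (~1.70)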
The most important milestone of LLM development was the launch of ChatGPT (Chat Generative Pre-trained Transformer) [60] on November 30, 2022. ChatGPT is a chatbot that enables users to steer a conversation to complete a wide range of tasks, such as question answering, information seeking, text summarization, and more. ChatGPT is powered by GPT-3.5 (and later by GPT-4), a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

GPT-4 [33] is the latest and most powerful LLM in the GPT family. Launched in March 2023, GPT-4 is a multimodal LLM, in that it can take image and text as inputs and produce text outputs. While still less capable than humans in some of the most challenging real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers, as shown in Fig. 11. Like early GPT models, GPT-4 was first pre-trained to predict next tokens on large text corpora, and then fine-tuned with RLHF to align model behaviors with human-desired ones.

Fig. 11: GPT-4 performance on academic and professional exams, compared with GPT-3.5. Courtesy of [33].

2) The LLaMA Family: LLaMA is a collection of foundation language models released by Meta. Unlike GPT models, LLaMA models are open-source, i.e., model weights are released to the research community under a noncommercial license. The LLaMA family has thus grown rapidly, as these models are widely used by many research groups to develop better open-source LLMs to compete with the closed-source ones, or to develop task-specific LLMs for mission-critical applications.

The first set of LLaMA models [32] was released in February 2023, ranging from 7B to 65B parameters. These models are pre-trained on trillions of tokens collected from publicly available datasets. LLaMA uses the transformer architecture of GPT-3, with a few minor architectural modifications, including (1) a SwiGLU activation function instead of ReLU, (2) rotary positional embeddings instead of absolute positional embeddings, and (3) root-mean-squared layer normalization instead of standard layer normalization. The open-source LLaMA-13B model outperforms the proprietary GPT-3 (175B) model on most benchmarks, making it a good baseline for LLM research.

In July 2023, Meta, in partnership with Microsoft, released the LLaMA-2 collection [61], which includes both foundation language models and chat models fine-tuned for dialog, known as LLaMA-2 Chat. The LLaMA-2 Chat models were reported to outperform other open-source models on many public benchmarks. Fig. 12 shows the training process of LLaMA-2 Chat. The process begins with pre-training LLaMA-2 using publicly available online data. Then, an initial version of LLaMA-2 Chat is built via supervised fine-tuning. Subsequently, the model is iteratively refined using RLHF, rejection sampling, and proximal policy optimization. In the RLHF stage, the accumulation of human feedback for revising the reward model is crucial to prevent the reward model from changing too much, which could hurt the stability of LLaMA model training.

Fig. 12: Training of LLaMA-2 Chat. Courtesy of [61].

Alpaca [62] is fine-tuned from the LLaMA-7B model using 52K instruction-following demonstrations generated in the style of self-instruct using GPT-3.5 (text-davinci-003). Alpaca is very cost-effective to train, especially for academic research. On the self-instruct evaluation set, Alpaca performs similarly to GPT-3.5, despite being much smaller.

The Vicuna team developed a 13B chat model, Vicuna-13B, by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. A preliminary evaluation using GPT-4 as an evaluator shows that Vicuna-13B achieves more than 90% of the quality of OpenAI's ChatGPT and Google's Bard, while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. Fig. 13 shows the relative response quality of Vicuna and a few other well-known models, as judged by GPT-4. Another advantage of Vicuna-13B is its relatively limited computational demand for model training: the training cost of Vicuna-13B is merely $300.

Fig. 13: Relative response quality of Vicuna and a few other well-known models, as judged by GPT-4. Courtesy of the Vicuna Team.

Like Alpaca and Vicuna, the Guanaco models [63] are also fine-tuned LLaMA models that use instruction-following data. But the fine-tuning is done very efficiently using QLoRA, such that fine-tuning a 65B-parameter model can be done on a single 48GB GPU. QLoRA back-propagates gradients through a frozen, 4-bit quantized pre-trained language model into Low-Rank Adapters (LoRA). The best Guanaco model outperforms all previously released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while requiring only 24 hours of fine-tuning on a single GPU.
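The following numpy sketch shows the low-rank adapter idea underlying LoRA (and thus QLoRA, whose 4-bit quantization of the frozen weights is omitted here): the pre-trained weight W stays frozen, and only a low-rank correction B·A is trained. The hidden size and rank are illustrative assumptions.

    import numpy as np

    d, r = 4096, 16  # hidden size and LoRA rank (r << d), both illustrative

    W = np.random.randn(d, d)          # frozen pre-trained weight
    A = np.random.randn(r, d) * 0.01   # trainable low-rank factor
    B = np.zeros((d, r))               # zero-initialized, so training starts at W

    def lora_forward(x):
        """Adapted layer: W x + B (A x). Only A and B (2*d*r parameters)
        receive gradients, instead of the d*d parameters of W."""
        return W @ x + B @ (A @ x)

    x = np.random.randn(d)
    print(lora_forward(x).shape)  # (4096,)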
Koala [64] is yet another instruction-following language model built on LLaMA, but with a specific focus on interaction data that include user inputs and responses generated by highly capable closed-source chat models such as ChatGPT. The Koala-13B model performs competitively with state-of-the-art chat models, according to a human evaluation based on real-world user prompts.

Mistral-7B [65] is a 7B-parameter language model engineered for superior performance and efficiency. Mistral-7B outperforms the best open-source 13B model (LLaMA-2-13B) across all evaluated benchmarks, and the best open-source 34B model (LLaMA-34B) in reasoning, mathematics, and code generation. The model leverages grouped-query attention for faster inference, coupled with sliding window attention to effectively handle sequences of arbitrary length at a reduced inference cost.

The LLaMA family is growing rapidly, as more instruction-following models are built on LLaMA or LLaMA-2, including Code LLaMA [66], Gorilla [67], Giraffe [68], Vigogne [69], Tulu 65B [70], Long LLaMA [71], and Stable Beluga2 [72], just to name a few.

3) The PaLM Family: The PaLM (Pathways Language Model) family was developed by Google. The first PaLM model [31] was announced in April 2022 and remained private until March 2023. It is a 540B-parameter transformer-based LLM. The model is pre-trained on a high-quality text corpus of 780 billion tokens covering a wide range of natural language tasks and use cases. PaLM was pre-trained on 6144 TPU v4 chips using the Pathways system, which enables highly efficient training across multiple TPU Pods. PaLM demonstrates the continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. PaLM-540B not only outperforms state-of-the-art fine-tuned models on a suite of multi-step reasoning tasks, but is also on par with humans on the recently released BIG-bench benchmark.

The U-PaLM models at 8B, 62B, and 540B scales are continually trained on PaLM with UL2R, a method of continuing to train LLMs for a few steps with UL2's mixture-of-denoisers objective [73]. An approximately 2x computational savings rate is reported.
U-PaLM was later instruction-finetuned as Flan-PaLM [74]. Compared to the other instruction-finetuning work mentioned above, Flan-PaLM's finetuning uses a much larger number of tasks, larger model sizes, and chain-of-thought data. As a result, Flan-PaLM substantially outperforms previous instruction-following models. For instance, Flan-PaLM-540B, which is instruction-finetuned on 1.8K tasks, outperforms PaLM-540B by a large margin (+9.4% on average). The finetuning data comprises 473 datasets, 146 task categories, and 1,836 total tasks, as illustrated in Fig. 14.

Fig. 14: Flan-PaLM finetuning consists of 473 datasets in the above task categories. Courtesy of [74].

PaLM-2 [75] is a more compute-efficient LLM with better multilingual and reasoning capabilities than its predecessor PaLM. PaLM-2 is trained using a mixture of objectives. Through extensive evaluations on English, multilingual, and reasoning tasks, PaLM-2 significantly improves model performance on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference than PaLM.

Med-PaLM [76] is a domain-specific PaLM designed to provide high-quality answers to medical questions. Med-PaLM is finetuned on PaLM using instruction prompt tuning, a parameter-efficient method for aligning LLMs to new domains using a few exemplars. Med-PaLM obtains very encouraging results on many healthcare tasks, although it is still inferior to human clinicians. Med-PaLM 2 improves Med-PaLM via medical-domain finetuning and ensemble prompting [77]. Med-PaLM 2 scored up to 86.5% on the MedQA dataset (a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries), improving upon Med-PaLM by over 19% and setting a new state of the art.
C. Other Representative LLMs

In addition to the models discussed in the previous subsections, there are other popular LLMs which do not belong to those three model families, yet have achieved great performance and have pushed the LLMs field forward. We briefly describe these LLMs in this subsection.

FLAN: In [78], Wei et al. explored a simple method for improving the zero-shot learning abilities of language models. They showed that instruction-tuning language models on a collection of datasets described via instructions substantially improves zero-shot performance on unseen tasks. They took a 137B-parameter pretrained language model and instruction-tuned it on over 60 NLP datasets verbalized via natural language instruction templates. They call this instruction-tuned model FLAN. Fig. 15 provides a comparison of instruction tuning with pretrain-finetune and prompting.

Fig. 15: Comparison of instruction tuning with pretrain-finetune and prompting. Courtesy of [78].

Gopher: In [79], Rae et al. presented an analysis of Transformer-based language model performance across a wide range of model scales, from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models were evaluated on 152 diverse tasks, achieving state-of-the-art performance on the majority of them. The number of layers, the key/value size, and other hyper-parameters of the different model sizes are shown in Fig. 16.

Fig. 16: Model architecture details of Gopher with different numbers of parameters. Courtesy of [79].

T0: In [80], Sanh et al. developed T0, a system for easily mapping any natural language task into a human-readable prompted form. They converted a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. Then, a T0 encoder-decoder model is developed to consume textual inputs and produce target responses. The model is trained on a multitask mixture of NLP datasets partitioned into different tasks.

ERNIE 3.0: In [81], Sun et al. proposed a unified framework named ERNIE 3.0 for pre-training large-scale knowledge-enhanced models. It fuses an auto-regressive network and an auto-encoding network, so that the trained model can be easily tailored for both natural language understanding and generation tasks using zero-shot learning, few-shot learning, or fine-tuning. They trained ERNIE 3.0 with 10 billion parameters on a 4TB corpus consisting of plain texts and a large-scale knowledge graph. Fig. 17 illustrates the model architecture of ERNIE 3.0.

Fig. 17: High-level model architecture of ERNIE 3.0. Courtesy of [81].

RETRO: In [82], Borgeaud et al. enhanced auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. Using a 2-trillion-token database, the Retrieval-Enhanced Transformer (Retro) obtains performance comparable to GPT-3 and Jurassic-1 [83] on the Pile, despite using 25x fewer parameters. As shown in Fig. 18, Retro combines a frozen BERT retriever, a differentiable encoder, and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training.

Fig. 18: Retro architecture. Left: simplified version where a sequence of length n = 12 is split into l = 3 chunks of size m = 4. For each chunk, we retrieve k = 2 neighbours of r = 5 tokens each. The retrieval pathway is shown on top. Right: details of the interactions in the CCA operator. Causality is maintained, as neighbours of the first chunk only affect the last token of the first chunk and tokens from the second chunk. Courtesy of [82].
GLaM: In [84], Du et al. proposed a family of LLMs named GLaM (Generalist Language Model), which use a sparsely activated mixture-of-experts architecture to scale model capacity while incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-, one-, and few-shot performance across 29 NLP tasks. Fig. 19 shows the high-level architecture of GLaM.

Fig. 19: GLaM model architecture. Each MoE layer (the bottom block) is interleaved with a Transformer layer (the upper block). Courtesy of [84].

LaMDA: In [85], Thoppilan et al. presented LaMDA, a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. They showed that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements on the two key challenges of safety and factual grounding.

OPT: In [86], Zhang et al. presented Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which they share with researchers. The OPT models' parameters are shown in Fig. 20.

Fig. 20: Different OPT models' architecture details. Courtesy of [86].

Chinchilla: In [2], Hoffmann et al. investigated the optimal model size and number of tokens for training a transformer language model under a given compute budget. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, they found that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size, the number of training tokens should also be doubled. They tested this hypothesis by training a predicted compute-optimal model, Chinchilla, which uses the same compute budget as Gopher but has 70B parameters and 4x more data.
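A back-of-the-envelope sketch of this result follows. The roughly 20 tokens-per-parameter budget and the ~6ND FLOPs-per-training-run approximation are commonly quoted summaries of the Chinchilla analysis, used here only for illustration rather than as exact values from the paper.

    def chinchilla_tokens(n_params):
        """Compute-optimal token budget under the Chinchilla finding that
        model size and training tokens should be scaled equally
        (roughly 20 tokens per parameter)."""
        return 20 * n_params

    def training_flops(n_params, n_tokens):
        """Standard approximation: ~6 FLOPs per parameter per token."""
        return 6 * n_params * n_tokens

    n = 70e9  # Chinchilla's 70B parameters
    d = chinchilla_tokens(n)
    print(f"{d / 1e12:.1f}T tokens")            # ~1.4T, matching Chinchilla's budget
    print(f"{training_flops(n, d):.2e} FLOPs")  # ~5.9e23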
Galactica: In [87], Taylor et al. introduced Galactica, a large language model that can store, combine, and reason about scientific knowledge. It was trained on a large scientific corpus of papers, reference material, knowledge bases, and many other sources. Galactica performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM-540B on MATH with a score of 20.4% versus 8.8%.

CodeGen: In [88], Nijkamp et al. trained and released a family of large language models of up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open-sourced the training library JAXFORMER. They showed the utility of the trained model by demonstrating that it is competitive with the previous state of the art on zero-shot Python code generation on HumanEval. They further investigated the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying sub-problems. They also constructed an open benchmark, the Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts.

AlexaTM: In [89], Soltan et al. demonstrated that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. They trained a 20-billion-parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and showed that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM consists of 46 encoder layers, 32 decoder layers, 32 attention heads, and d_model = 4096.

Sparrow: In [90], Glaese et al. presented Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. They used reinforcement learning from human feedback to train their models, with two new additions to help human raters judge agent behaviour. The high-level pipeline of the Sparrow model is shown in Fig. 21.

Fig. 21: The Sparrow pipeline relies on human participation to continually expand a training set. Courtesy of [90].

Minerva: In [91], Lewkowycz et al. introduced Minerva, a large language model pretrained on general natural language data and further trained on technical content, to tackle previous LLMs' struggles with quantitative reasoning (such as solving mathematics, science, and engineering problems).

MoD: In [92], Tay et al. presented a generalized and unified perspective for self-supervision in NLP, showing how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. They proposed Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. This framework is known as Unifying Language Learning (UL2). An overview of the UL2 pretraining paradigm is shown in Fig. 22.

Fig. 22: An overview of the UL2 pretraining paradigm. Courtesy of [92].

BLOOM: In [93], Scao et al. presented BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). An overview of the BLOOM architecture is shown in Fig. 23.

Fig. 23: An overview of the BLOOM architecture. Courtesy of [93].

GLM: In [94], Zeng et al. introduced GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It was an attempt to open-source a 100B-scale model at least as good as GPT-3 (davinci) and to unveil how models of such a scale can be successfully pre-trained.
Pythia: In [95], Biderman et al. introduced Pythia, a suite of 16 LLMs, all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. The authors provide public access to 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study.

Orca: In [96], Mukherjee et al. developed Orca, a 13-billion-parameter model that learns to imitate the reasoning process of large foundation models. Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions, guided by teacher assistance from ChatGPT.

StarCoder: In [97], Li et al. introduced StarCoder and StarCoderBase, 15.5B-parameter models with 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on one trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. The authors fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. They performed the most comprehensive evaluation of code LLMs to date and showed that StarCoderBase outperforms every open code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model.

KOSMOS: In [98], Huang et al. introduced KOSMOS-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). Specifically, they trained KOSMOS-1 from scratch on web-scale multimodal corpora, including arbitrarily interleaved text and images, image-caption pairs, and text data. Experimental results show that KOSMOS-1 achieves impressive performance on (i) language understanding, generation, and even OCR-free NLP (directly fed with document images), (ii) perception-language tasks, including multimodal dialogue, image captioning, and visual question answering, and (iii) vision tasks, such as image recognition with descriptions (specifying classification via text instructions).

Gemini: In [99], the Gemini team introduced a new family of multimodal models that exhibit promising capabilities across image, audio, video, and text understanding. The Gemini family includes three versions: Ultra for highly complex tasks, Pro for enhanced performance and deployability at scale, and Nano for on-device applications. The Gemini architecture is built on top of Transformer decoders and is trained to support a 32k context length (via efficient attention mechanisms).

Some of the other popular LLM frameworks (or techniques used for the efficient development of LLMs) include Inner-Monologue [100], Megatron-Turing NLG [101], LongFormer [102], OPT-IML [103], MeTaLM [104], Dromedary [105], Palmyra [106], Camel [107], Yalm [108], MPT [109], ORCA-2 [110], Gorilla [67], PAL [111], Claude [112], CodeGen 2 [113], Zephyr [114], Grok [115], Qwen [116], Mamba [30], Mixtral-8x7B [117], DocLLM [118], DeepSeek-Coder [119], FuseLLM-7B [120], TinyLlama-1.1B [121], and LLaMA-Pro-8B [122].

Fig. 24 provides an overview of some of the most representative LLM frameworks and the relevant works that have contributed to the success of LLMs and helped to push their limits.

Fig. 24: Timeline of some of the most representative LLM frameworks (so far). In addition to large language models above our #parameters threshold, we include a few representative works that pushed the limits of language models and paved the way for their success (e.g., vanilla Transformer, BERT, GPT-1), as well as some small language models. ♣ shows entities that serve not only as models but also as approaches. ♦ shows only approaches.
III. HOW LLMS ARE BUILT

In this section, we first review the popular architectures used for LLMs, and then discuss data and modeling techniques ranging from data preparation and tokenization to pre-training, instruction tuning, and alignment.

Once the model architecture is chosen, the major steps involved in training an LLM include: data preparation (collection, cleaning, deduping, etc.), tokenization, model pre-training (in a self-supervised learning fashion), instruction tuning, and alignment. We explain each of them in a separate subsection below. These steps are also illustrated in Fig. 25.

Fig. 25: This figure shows different components of LLMs.

A. Dominant LLM Architectures

The most widely used LLM architectures are encoder-only, decoder-only, and encoder-decoder. Most of them use the Transformer as the building block. Therefore, we also review the Transformer architecture here.

1) Transformer: In a ground-breaking work [44], Vaswani et al. proposed the Transformer framework, which was originally designed for effective parallel computing using GPUs. At the heart of the Transformer is the (self-)attention mechanism, which can capture long-term contextual information much more effectively using GPUs than the recurrence and convolution mechanisms. Fig. 26 provides a high-level overview of how the Transformer works. In this section we provide an overview of the main elements and variants; see [44], [123] for more details.

Fig. 26: High-level overview of how the Transformer works. Courtesy of [44].

The Transformer language model architecture, originally proposed for machine translation, consists of an encoder and a decoder. The encoder is composed of a stack of N = 6 identical Transformer layers. Each layer has two sub-layers: the first is a multi-head self-attention layer, and the other is a simple position-wise fully connected feed-forward network. The decoder is composed of a stack of 6 identical layers. In addition to the two sub-layers of each encoder layer, the decoder has a third sub-layer, which performs multi-head attention over the output of the encoder stack. The attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. Instead of performing a single attention function with d_model-dimensional keys, values, and queries, it is found to be beneficial to linearly project the queries, keys, and values h times with different, learned linear projections to d_k, d_k, and d_v dimensions, respectively. Positional encoding is incorporated to fuse information about the relative or absolute position of the tokens in the sequence.
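The compatibility function used by the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, computed in parallel over all positions. The following is a minimal single-head numpy sketch (multi-head projections and masking omitted); the sequence length and dimensions are illustrative.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the core
        Transformer operation, computed in parallel for all positions."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise attention scores
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V  # weighted sum of the values

    n, d_k = 5, 8  # 5 tokens, key/query dimension 8 (illustrative)
    Q, K, V = (np.random.randn(n, d_k) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 8)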
2) Encoder-Only: For this family, at each stage, the attention layers can access all the words in the initial sentence. The pre-training of these models usually consists of somehow corrupting a given sentence (for instance, by masking random words in it) and tasking the model with finding or reconstructing the initial sentence. Encoder models are great for tasks requiring an understanding of the full sequence, such as sentence classification, named entity recognition, and extractive question answering. One prominent encoder-only model is BERT (Bidirectional Encoder Representations from Transformers), proposed in [24].

3) Decoder-Only: For these models, at each stage, for any word, the attention layers can only access the words positioned before it in the sentence. These models are also sometimes called auto-regressive models. Their pre-training is usually formulated as predicting the next word (or token) in the sequence. Decoder-only models are best suited for tasks involving text generation; GPT models are a prominent example of this category.

4) Encoder-Decoder: These models use both an encoder and a decoder, and are sometimes called sequence-to-sequence models. At each stage, the attention layers of the encoder can access all the words in the initial sentence, whereas the attention layers of the decoder only access the words positioned before a given word in the input. These models are usually pre-trained using the objectives of encoder or decoder models, but usually involve something a bit more complex. For instance, some models are pre-trained by replacing random spans of text (that can contain several words) with a single special mask word, with the objective of predicting the text that this mask word replaces. Encoder-decoder models are best suited for tasks about generating new sentences conditioned on a given input, such as summarization, translation, or generative question answering.

B. Data Cleaning

Data quality is crucial to the performance of language models trained on it. Data cleaning techniques such as filtering and deduplication have been shown to have a big impact on model performance.

As an example, in Falcon40B [124], Penedo et al. showed that properly filtered and deduplicated web data alone can lead to powerful models, even significantly outperforming state-of-the-art models trained on The Pile. Despite extensive filtering, they were able to obtain five trillion tokens from CommonCrawl. They also released an extract of 600 billion tokens from their REFINEDWEB dataset, and 1.3/7.5B parameter language models trained on it. Fig 27 shows the refinement process of CommonCrawl data in this work.

Fig. 27: Subsequent stages of Macrodata Refinement remove nearly 90% of the documents originally in CommonCrawl. Courtesy of [124].

1) Data Filtering: Data filtering aims to enhance the quality of the training data and the effectiveness of the trained LLMs. Common data filtering techniques include:

Removing Noise: refers to eliminating irrelevant or noisy data that might harm the model's ability to generalize well. As an example, one can think of removing false information from the training data to lower the chance of the model generating false responses. The two mainstream approaches for quality filtering are classifier-based and heuristic-based frameworks (a minimal heuristic filter is sketched after this list).

Handling Outliers: identifying and handling outliers or anomalies in the data to prevent them from disproportionately influencing the model.

Addressing Imbalances: balancing the distribution of classes or categories in the dataset to avoid biases and ensure fair representation. This is especially useful for responsible model training and evaluation.

Text Preprocessing: cleaning and standardizing text data by removing stop words, punctuation, or other elements that may not contribute significantly to the model's learning.

Dealing with Ambiguities: resolving or excluding ambiguous or contradictory data that might confuse the model during training. This can help the model provide more definite and reliable answers.
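The following is a toy heuristic-based quality filter, only to make the idea concrete; the thresholds are illustrative assumptions, and real pipelines (such as the Falcon/RefinedWeb one) use far richer rules:

```python
def keep_document(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                        # too short to be useful
        return False
    if len(set(words)) / len(words) < 0.3:     # highly repetitive content
        return False
    alpha = sum(w.isalpha() for w in words) / len(words)
    return alpha > 0.7                         # mostly natural-language tokens
```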
2) Deduplication: Deduplication refers to the process of removing duplicate instances or repeated occurrences of the same data in a dataset. Duplicate data points can introduce biases into the model training process and reduce diversity, as the model may learn from the same examples multiple times, potentially leading to overfitting on those particular instances. Some works [125] have shown that deduplication improves a model's ability to generalize to new, unseen data.

The deduplication process is particularly important when dealing with large datasets, as duplicates can unintentionally inflate the importance of certain patterns or characteristics. This is especially relevant in NLP tasks, where diverse and representative training data is crucial for building robust language models.

The specific deduplication method can vary based on the nature of the data and the requirements of the particular language model being trained. It may involve comparing entire data points or specific features to identify and eliminate duplicates. At the document level, existing works mainly rely on the overlap ratio of high-level features (e.g., n-gram overlap) between documents to detect duplicate samples.
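A sketch of document-level near-duplicate detection via n-gram overlap is shown below; the Jaccard similarity threshold and the n-gram size are illustrative assumptions:

```python
def ngrams(text: str, n: int = 5) -> set:
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_near_duplicate(doc_a: str, doc_b: str, threshold: float = 0.8) -> bool:
    a, b = ngrams(doc_a), ngrams(doc_b)
    if not a or not b:
        return False
    jaccard = len(a & b) / len(a | b)   # overlap ratio of high-level features
    return jaccard >= threshold
```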
C. Tokenizations

Tokenization refers to the process of converting a sequence of text into smaller parts, known as tokens. While the simplest tokenization tools simply chop text into tokens based on white space, most tokenization tools rely on a word dictionary. However, out-of-vocabulary (OOV) words are a problem in this case, because the tokenizer only knows words in its dictionary. To increase the coverage of dictionaries, popular tokenizers used for LLMs are based on sub-words, which can be combined to form a large number of words, including words unseen in the training data or words in different languages. In what follows, we describe three popular tokenizers.

1) BytePairEncoding: BytePairEncoding is originally a type of data compression algorithm that uses frequent patterns at the byte level to compress data. By definition, this algorithm mainly tries to keep frequent words in their original form and break down those that are not common. This simple paradigm keeps the vocabulary reasonably small while still representing common words well. Morphological forms of frequent words can also be represented very well if the suffix or prefix is also commonly present in the training data of the algorithm.
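The following is a compact sketch of the BPE merge-learning loop, in the spirit of the original algorithm; the symbol-sequence representation of the corpus and the number of merges are toy assumptions:

```python
from collections import Counter

def merge_word(word: str, pair: tuple) -> str:
    # Apply one learned merge to a space-separated symbol sequence.
    syms, out, i = word.split(), [], 0
    while i < len(syms):
        if i < len(syms) - 1 and (syms[i], syms[i + 1]) == pair:
            out.append(syms[i] + syms[i + 1])   # merge the adjacent pair
            i += 2
        else:
            out.append(syms[i])
            i += 1
    return " ".join(out)

def learn_bpe(words: Counter, num_merges: int) -> list:
    # `words` maps symbol sequences, e.g. "l o w </w>", to corpus counts.
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            syms = word.split()
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)        # most frequent adjacent pair
        merges.append(best)
        new_words = Counter()
        for w, c in words.items():
            new_words[merge_word(w, best)] += c
        words = new_words
    return merges
```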
2) WordPieceEncoding: This algorithm is mainly used for very well-known models such as BERT and Electra. At the beginning of training, the algorithm takes every character of the alphabet from the training data, to make sure that nothing will be left as UNK (unknown) from the training dataset. This case happens when the model is given an input that cannot be tokenized by the tokenizer, mostly when some characters are not tokenizable by it. Similar to BytePairEncoding, it tries to maximize the likelihood of putting all the tokens in the vocabulary based on their frequency.

3) SentencePieceEncoding: Although both tokenizers described before are strong and have many advantages compared to white-space tokenization, they still take the assumption that words are always separated by white space for granted. This assumption does not always hold; in fact, in some languages words can be corrupted by many noisy elements such as unwanted spaces or even invented words. SentencePieceEncoding tries to address this issue.

D. Positional Encoding

1) Absolute Positional Embeddings: Absolute Positional Embeddings (APE) [44] were used in the original Transformer model to preserve the information of sequence order. The positional information of words is added to the input embeddings at the bottom of both the encoder and decoder stacks. There are various options for positional encodings, either learned or fixed; in the vanilla Transformer, sine and cosine functions are employed for this purpose. The main drawback of using APE in Transformers is the restriction to a certain number of tokens. Additionally, APE fails to account for the relative distances between tokens.
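As a reference point, the fixed sinusoidal encodings of the vanilla Transformer compute PE[pos, 2i] = sin(pos / 10000^(2i/d)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d)). A minimal sketch (assuming an even model dimension) follows:

```python
import numpy as np

def sinusoidal_positions(max_len: int, d_model: int) -> np.ndarray:
    # d_model is assumed to be even, as in the original Transformer.
    pos = np.arange(max_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions use cosine
    return pe                      # added to the token embeddings
```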
2) Relative Positional Embeddings: Relative Positional Embeddings (RPE) [126] involve extending self-attention to take into account the pairwise links between input elements. RPE is added to the model at two levels: first as an additional component to the keys, and subsequently as a sub-component of the values matrix. This approach looks at the input as a fully-connected graph with labels and directed edges. In the case of linear sequences, edges can capture information about the relative position differences between input elements. A clipping distance k (2 ≤ k ≤ n − 4) specifies the maximum limit on relative locations. This allows the model to make reasonable predictions for sequence lengths that are not part of the training data.

3) Rotary Position Embeddings: Rotary Positional Embedding (RoPE) [127] tackles problems with existing approaches. Learned absolute positional encodings can lack generalizability and meaningfulness, particularly when sentences are short. Moreover, current methods like T5's positional embedding face challenges with constructing a full attention matrix between positions. RoPE uses a rotation matrix to encode the absolute position of words and simultaneously includes explicit relative position details in self-attention. RoPE brings useful features like flexibility with sentence lengths, a decrease in word dependency as relative distances increase, and the ability to improve linear self-attention with relative position encoding. GPT-NeoX-20B, PaLM, CODEGEN, and LLaMA are among the models that take advantage of RoPE in their architectures.

4) Relative Positional Bias: The concept behind this type of positional embedding is to facilitate extrapolation during inference for sequences longer than those encountered in training. In [128], Press et al. proposed Attention with Linear Biases (ALiBi). Instead of simply adding positional embeddings to word embeddings, they introduce a bias to the attention scores of query-key pairs, imposing a penalty proportional to their distance. ALiBi is leveraged in the BLOOM model.

Fig. 28: Various positional encodings are employed in LLMs: (a) Absolute Positional Embeddings [129], (b) Relative Positional Embeddings, (c) Rotary Positional Embedding [127], (d) Relative Positional Bias [128].

E. Model Pre-training

Pre-training is the very first step in the large language model training pipeline, and it helps the LLM acquire fundamental language understanding capabilities, which are useful in a wide range of language-related tasks. During pre-training, the LLM is trained on a massive amount of (usually) unlabeled text, usually in a self-supervised manner. There are different approaches used for pre-training, such as next sentence prediction [24]; the two most common ones are next token prediction (autoregressive language modeling) and masked language modeling.

In the autoregressive language modeling framework, given a sequence of n tokens x_1, ..., x_n, the model tries to predict the next token x_{n+1} (and sometimes the next sequence of tokens) in an auto-regressive fashion. One popular loss function in this case is the log-likelihood of the predicted tokens, as shown in Eq. 1:

L_{ALM}(x) = \sum_{i=1}^{N} \log p(x_{i+n} | x_i, ..., x_{i+n-1})    (1)

Given the auto-regressive nature of this framework, decoder-only models are naturally better suited to learn how to accomplish this task.

In masked language modeling, some words are masked in a sequence and the model is trained to predict the masked words based on the surrounding context; this approach is sometimes also referred to as denoising autoencoding. If we denote the masked/corrupted samples in the sequence x as \tilde{x}, then the training objective of this approach can be written as:

L_{MLM}(x) = \sum_{i=1}^{N} \log p(\tilde{x} | x \setminus \tilde{x})    (2)

More recently, Mixture of Experts (MoE) [130], [131] have become very popular in the LLM space too. MoEs enable models to be pre-trained with much less compute, which means one can dramatically scale up the model or dataset size within the same compute budget as a dense model. MoE consists of two main elements: sparse MoE layers, which are used instead of dense feed-forward network (FFN) layers and contain a certain number of "experts" (e.g., 8), where each expert is a neural network (in practice the experts are FFNs, but they can also be more complex networks); and a gate network or router, which determines which tokens are sent to which expert. It is worth noting that a token can be sent to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs: the router is composed of learned parameters and is pre-trained at the same time as the rest of the network. Fig 29 provides an illustration of a Switch Transformer encoder block, which is used in MoE.

Fig. 29: Illustration of a Switch Transformer encoder block. The dense feed-forward network (FFN) layer present in the Transformer is replaced with a sparse Switch FFN layer (light blue). Courtesy of [131].
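The following is a sketch of Switch-style top-1 routing, where each token is sent to the single expert with the highest gate score; the gate weights and the stand-in expert FFNs are illustrative assumptions, not the Switch Transformer's exact implementation:

```python
import numpy as np

def switch_layer(x, gate_W, experts):
    # x: (n_tokens, d); gate_W: (d, n_experts); experts: list of callables.
    logits = x @ gate_W
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    choice = probs.argmax(-1)                 # top-1 expert per token
    out = np.empty_like(x)
    for e, expert in enumerate(experts):
        mask = choice == e
        if mask.any():
            # Scale by the gate probability, as in the Switch Transformer.
            out[mask] = expert(x[mask]) * probs[mask, e][:, None]
    return out
```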
F. Fine-tuning and Instruction Tuning

Early language models such as BERT, trained using self-supervision as explained in section III-E, were not able to perform specific tasks. In order for the foundation model to be useful, it needs to be fine-tuned to a specific task with labeled data (so-called supervised fine-tuning, or SFT for short). For example, in the original BERT paper [24], the model was fine-tuned to 11 different tasks. While more recent LLMs no longer require fine-tuning to be used, they can still benefit from task- or data-specific fine-tuning. For example, OpenAI reports that the much smaller GPT-3.5 Turbo model can outperform GPT-4 when fine-tuned with task-specific data (https://platform.openai.com/docs/guides/fine-tuning).

Fine-tuning does not need to be performed for a single task, though, and there are different approaches to multi-task fine-tuning (see e.g. Mahabi et al. [132]). Fine-tuning to one or more tasks is known to improve results and reduce the complexity of prompt engineering, and it can serve as an alternative to retrieval augmented generation. Furthermore, there are other reasons why it might be advisable to fine-tune; for example, one might want to fine-tune to expose the model to new or proprietary data that it has not seen during pre-training.

An important reason to fine-tune LLMs is to align the responses to the expectations humans have when providing instructions through prompts. This is so-called instruction tuning [133]. We dive into the details of how to design and engineer prompts in section IV-B, but in the context of instruction tuning, it is important to understand that the instruction is a prompt that specifies the task that the LLM should accomplish. Instruction tuning datasets such as Natural Instructions [134] include not only the task definition but other components such as positive/negative examples or things to avoid.

The specific approach and instruction datasets used to instruction-tune an LLM vary, but, generally speaking, instruction-tuned models outperform the original foundation models they are based on. For example, InstructGPT [59] outperforms GPT-3 on most benchmarks. The same is true for Alpaca [62] when compared to LLaMA.

Self-Instruct [135], proposed by Wang et al., is also a popular approach along this line, in which they introduced a framework for improving the instruction-following capabilities of pre-trained language models by bootstrapping their own generations. Their pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to fine-tune the original model.
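The following is a minimal sketch of how an instruction-tuning example might be rendered into a single training string; the template is an illustrative assumption, not the format of any particular dataset:

```python
def render_example(instruction: str, input_text: str, output: str) -> str:
    prompt = f"### Instruction:\n{instruction}\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n"
    prompt += "### Response:\n"
    # During SFT, the loss is typically computed only on the response tokens.
    return prompt + output
```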
G. Alignment

AI alignment is the process of steering AI systems toward human goals, preferences, and principles. LLMs, pre-trained for word prediction, often exhibit unintended behaviors; for example, they might generate content that is toxic, harmful, misleading, or biased.

Instruction tuning, discussed above, gets LLMs a step closer to being aligned. However, in many cases it is important to include further steps to improve the alignment of the model and avoid unintended behaviors (according to very recent research by Ethayarajh et al. [136], further alignment beyond SFT mainly improves models of at least 7B parameters; for smaller models, SFT is sufficient). We review the most popular approaches to alignment in this subsection.

RLHF (reinforcement learning from human feedback) and RLAIF (reinforcement learning from AI feedback) are two popular approaches. RLHF uses a reward model to learn alignment from human feedback. This reward model, after being tuned, is able to rate different outputs and score them according to the alignment preferences given by humans. The reward model gives feedback to the original LLM, and this feedback is used to tune the LLM further [137]. Reinforcement learning from AI feedback, on the other hand, directly connects a pre-trained and well-aligned model to the LLM and helps it learn from larger and more aligned models [138].

In another recent work (known as DPO) [139], Rafailov et al. argued that RLHF is a complex and often unstable procedure, and tried to address this with a new approach. They leveraged a mapping between reward functions and optimal policies to show that this constrained reward maximization problem can be optimized exactly with a single stage of policy training, essentially solving a classification problem on the human preference data. The resulting algorithm, which they called Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning. They observed that fine-tuning with DPO exceeds RLHF's ability to control the sentiment of generations and improves response quality in summarization. Fig 30 shows a high-level comparison between DPO and RLHF.

Fig. 30: DPO optimizes for human preferences while avoiding reinforcement learning. Existing methods for fine-tuning language models with human feedback first fit a reward model to a dataset of prompts and human preferences over pairs of responses, and then use RL to find a policy that maximizes the learned reward. In contrast, DPO directly optimizes for the policy best satisfying the preferences with a simple classification objective, without an explicit reward function or RL. Courtesy of [139].
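A sketch of the DPO objective on a single preference pair (x, y_w, y_l) is given below: the policy is pushed to prefer y_w over y_l relative to a frozen reference model, with strength beta. The log-probabilities are assumed to be precomputed; this is a didactic sketch, not the authors' implementation:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # margin > 0 means the policy prefers y_w more than the reference does.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log1p(math.exp(-margin))   # equals -log sigmoid(margin)
```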
Even more recently, Ethayarajh et al. proposed a new alignment approach called Kahneman-Tversky Optimization (KTO) [136]. Unlike existing state-of-the-art approaches, KTO does not require paired preference data (x, y_w, y_l); it only needs (x, y) and knowledge of whether y is desirable or undesirable. KTO-aligned models are shown to be as good as or better than DPO-aligned models at scales from 1B to 30B, despite not using paired preferences. KTO is also far easier to use in the real world than preference optimization methods, as the kind of data it needs is far more abundant. As an example, every retail company has a lot of customer interaction data along with whether each interaction was successful (e.g., a purchase was made) or unsuccessful (e.g., no purchase was made), but has little to no counterfactual data (i.e., what would have turned an unsuccessful customer interaction y_l into a successful one y_w). Fig 31 shows a high-level comparison between KTO and the other alignment approaches discussed above.

Fig. 31: LLM alignment involves supervised fine-tuning followed by optimizing a human-centered loss (HALO). However, the paired preferences that existing approaches need are hard to obtain. In contrast, KTO uses a far more abundant kind of data, making it much easier to use in the real world. Courtesy of [136].

H. Decoding Strategies

Decoding refers to the process of text generation using pre-trained LLMs. Given an input prompt, the tokenizer translates each token in the input text into a corresponding token ID. Then, the language model uses these token IDs as input and predicts the next most likely token (or a sequence of tokens). Finally, the model produces logits, which are converted to probabilities using a softmax function. Different decoding strategies have been proposed; some of the most popular ones are greedy search, beam search, and different sampling techniques such as top-k and top-p (nucleus) sampling.

1) Greedy Search: Greedy search takes the most probable token at each step as the next token in the sequence, discarding all other potential options. As you can imagine, this simple approach can lose a lot of temporal consistency and coherency: it only considers the most probable token at each step, without considering the overall effect on the sequence. This property makes it fast, but it also means that it can miss out on better sequences that might have appeared with slightly less probable next tokens.
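A minimal sketch of greedy decoding follows; `model` is a placeholder that maps a list of token IDs to a vector (e.g., a NumPy array) of next-token logits:

```python
def greedy_decode(model, ids, eos_id, max_new_tokens=100):
    for _ in range(max_new_tokens):
        next_id = int(model(ids).argmax())   # keep only the most probable token
        ids.append(next_id)
        if next_id == eos_id:                # stop at end-of-sequence
            break
    return ids
```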
2) Beam Search: Unlike greedy search, which only considers the next most probable token, beam search takes into account the N most likely tokens, where N denotes the number of beams. This procedure is repeated until a predefined maximum sequence length is reached or an end-of-sequence token appears. At this point, the sequence of tokens (a.k.a. the "beam") with the highest overall score is chosen as the output. For example, for a beam size of 2 and a maximum length of 5, beam search needs to keep track of 2^5 = 32 candidate sequences, so it is more computationally intensive than greedy search.

3) Top-k Sampling: Top-k sampling is a technique that uses the probability distribution generated by the language model to select a token randomly from the k most likely options.

Suppose we have 6 tokens (A, B, C, D, E, F), k = 2, P(A) = 30%, P(B) = 20%, and P(C) = P(D) = P(E) = P(F) = 12.5%. In top-k sampling, tokens C, D, E, and F are disregarded, and the model outputs A 60% of the time and B 40% of the time. This approach ensures that we prioritize the most probable tokens while introducing an element of randomness into the selection process.

The randomness is usually introduced via the concept of temperature. The temperature T is a parameter that typically ranges from 0 to 1 and affects the probabilities generated by the softmax function. In practice, it simply consists of dividing the input logits by the temperature value:

softmax(x_i) = e^{x_i/T} / \sum_j e^{x_j/T}    (3)

A low temperature sharpens the probability distribution, prioritizing the tokens with higher probabilities, while a high temperature flattens it, increasing randomness; the temperature is therefore commonly used in text generation to control the level of "creativity" in the generated output.

Top-k is a creative way of sampling and can be used along with beam search; the sequence chosen by top-k sampling may not be the sequence with the highest probability in beam search. But it is important to remember that the highest scores do not always lead to more realistic or meaningful sequences.
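The following sketch combines temperature scaling with top-k sampling, matching the description above (with k = 2, only the two best tokens stay in play); the logits are assumed to be a NumPy vector:

```python
import numpy as np

def sample_top_k(logits, k=2, temperature=1.0):
    logits = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(logits)[-k:]                # keep the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                         # renormalize over the top-k
    return int(np.random.choice(top, p=probs))   # sample within the top-k
```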
4) Top-p Sampling: Top-p sampling, also known as nucleus sampling, takes a slightly different approach from top-k sampling. Instead of selecting the top k most probable tokens, nucleus sampling chooses a cutoff value p such that the sum of the probabilities of the selected tokens exceeds p. This forms a "nucleus" of tokens from which to randomly choose the next token. In other words, in top-p sampling the language model examines the most probable tokens in descending order and keeps adding them to the list until the sum of probabilities surpasses the threshold p. As you can imagine, this can be better, especially for scenarios in which the top-k tokens do not have a large probability mass. Unlike in top-k sampling, the number of tokens included in the nucleus is not fixed. This variability often results in a more diverse and creative output, making nucleus sampling popular for text generation tasks.
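A sketch of top-p sampling is given below; the input is assumed to be an already-normalized NumPy probability vector:

```python
import numpy as np

def sample_top_p(probs, p=0.9):
    order = np.argsort(probs)[::-1]             # most probable tokens first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1   # nucleus size varies per step
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()
    return int(np.random.choice(nucleus, p=renorm))
```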
I. Cost-Effective Training/Inference/Adaptation/Compression

In this part, we review some of the popular approaches used for more cost-friendly (and compute-friendly) training and usage of LLMs.

1) Optimized Training: Many frameworks have been developed for optimized training of LLMs; here we introduce some of the prominent ones.

ZeRO: In [140], Rajbhandari et al. developed the Zero Redundancy Optimizer (ZeRO) to optimize memory, vastly improving the training speed of LLMs while increasing the model size that can be efficiently trained. ZeRO eliminates memory redundancies in data- and model-parallel training while retaining low communication volume and high computational granularity, allowing one to scale the model size proportionally to the number of devices with sustained high efficiency.

RWKV: In [141], Peng et al. proposed a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs. Their approach leverages a linear attention mechanism and allows them to formulate the model as either a Transformer or an RNN, which parallelizes computations during training and maintains constant computational and memory complexity during inference, leading to the first non-transformer architecture to be scaled to tens of billions of parameters. The RWKV architecture is shown in Fig 32, and a time complexity comparison of RWKV with different Transformers is provided in Fig 33.

Fig. 32: RWKV architecture. Courtesy of [141].

Fig. 33: Time complexity comparison of RWKV with different Transformers. Here T denotes the sequence length, d the feature dimension, and c is MEGA's chunk size of quadratic attention. Courtesy of [141].

2) Low-Rank Adaptation (LoRA): Low-Rank Adaptation is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It is based on the crucial insight that the difference between the fine-tuned weights for a specialized task and the initial pre-trained weights often exhibits "low intrinsic rank", meaning that it can be approximated well by a low-rank matrix [142]. Training with LoRA is much faster and more memory-efficient, and produces smaller model weights (a few hundred MBs) that are easier to store and share. One property of low-rank matrices is that they can be represented as the product of two smaller matrices. This realization leads to the hypothesis that the delta between the fine-tuned weights and the initial pre-trained weights can be represented as the matrix product of two much smaller matrices. By focusing on updating these two smaller matrices rather than the entire original weight matrix, computational efficiency can be substantially improved.

Specifically, for a pre-trained weight matrix W_0 ∈ R^{d×k}, LoRA constrains its update by representing the latter with a low-rank decomposition W_0 + ΔW = W_0 + BA, where B ∈ R^{d×r}, A ∈ R^{r×k}, and the rank r ≪ min(d, k). During training, W_0 is frozen and does not receive gradient updates, while A and B contain the trainable parameters. It is worth mentioning that both W_0 and ΔW = BA are multiplied by the same input, and their respective output vectors are summed coordinate-wise. For h = W_0 x, the modified forward pass yields h = W_0 x + ΔW x = W_0 x + BA x. Usually a random Gaussian initialization is used for A and a zero initialization for B, so that ΔW = BA is zero at the beginning of training. ΔW x is then scaled by α/r, where α is a constant in r. This reparametrization is illustrated in Figure 34.

Fig. 34: An illustration of LoRA reparametrization. Only A and B are trained during this process. Courtesy of [142].

It is worth mentioning that LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. In the Transformer architecture, there are four weight matrices in the self-attention module (W_q, W_k, W_v, W_o) and two in the MLP module. Most of the time, LoRA is focused on adapting the attention weights only for downstream tasks and freezes the MLP modules, so they are not trained in downstream tasks, both for simplicity and parameter-efficiency.
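The LoRA forward pass described above can be sketched in a few lines; the dimensions, rank, and scaling constant below are illustrative assumptions:

```python
import numpy as np

d, k, r, alpha = 512, 512, 8, 16
W0 = np.random.randn(d, k)           # frozen pre-trained weight
A = np.random.randn(r, k) * 0.01     # random Gaussian initialization
B = np.zeros((d, r))                 # zero initialization, so delta_W starts at 0

def lora_forward(x):
    # x: (k,). Only A and B receive gradient updates during training.
    return W0 @ x + (alpha / r) * (B @ (A @ x))
```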
3) Knowledge Distillation: Knowledge distillation is the process of learning from a larger model [143]. Earlier releases of best-performing models have proven that this approach is very useful, even when used in an API distillation fashion. It can also refer to distilling the knowledge of not a single model but of multiple models into a smaller one. Creating smaller models by this approach yields model sizes that can be used even on edge devices. Fig 35 illustrates a general setup of this training scheme.

Fig. 35: A generic knowledge distillation framework with student and teacher. Courtesy of [144].

Knowledge can be transferred by different forms of learning: response distillation, feature distillation, and API distillation. Response distillation is concerned only with the outputs of the teacher model and tries to teach the student model how to perform exactly, or at least similarly (in the sense of prediction), as the teacher. Feature distillation uses not only the last layer but also intermediate layers to create a better inner representation for the student model; this helps the smaller model to have a similar representation as the teacher model.

API distillation is the process of using an API (typically from an LLM provider such as OpenAI) to train smaller models. In the case of LLMs, it is used to train the model from the direct output of the larger model, which makes it very similar to response distillation. Many concerns are raised about this type of distillation, because in cases where the model itself is not openly available, a (usually) paid API is exposed to end users. Moreover, while users pay for each call, how the predictions can be used is limited; for example, OpenAI prohibits the use of its API to create LLMs that will later compete with it. The main value in such a case is the training data.

4) Quantization: At its core, deep learning is a set of mathematical functions applied to matrices with a specific precision for the model weights. Reducing the precision of the weights can be used to reduce the size of the model and also make it faster; as an example, Float-32 operations are slower than Int-8 operations. This process, called quantization, can be applied at different phases. The main approaches to model quantization are post-training quantization and quantization-aware training. Post-training quantization is concerned with quantizing trained models and comes in two well-known variants: dynamic and static. Dynamic post-training quantization computes the quantization ranges at runtime and is slower compared to static quantization, where the ranges are precomputed. Quantization-aware training adds quantization criteria into training, so that a quantized model is trained and optimized during the training process. This approach ensures that the end model has good performance and also does not need to be quantized after training.
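A toy sketch of symmetric post-training quantization of a weight matrix to int8, with the matching dequantization, is shown below; real schemes are typically per-channel and handle outliers more carefully:

```python
import numpy as np

def quantize_int8(W):
    scale = np.abs(W).max() / 127.0                          # one scale per tensor
    Wq = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return Wq, scale

def dequantize(Wq, scale):
    return Wq.astype(np.float32) * scale                     # approximate W
```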
IV. HOW LLMS ARE USED AND AUGMENTED

Once the LLMs are trained, we can use them to generate desired outputs for a variety of tasks. LLMs can be used directly through basic prompting. However, in order to exploit their full potential, or to address some of their shortcomings, we need to augment the models through some external means. In this section we first provide a brief overview of the main shortcomings of LLMs, with a deeper look at the issue of hallucination. We then describe how prompting and some augmentation approaches can not only address those limitations but also be used to augment the capabilities of LLMs, going as far as turning an LLM into a full-blown AI agent with the ability to interface with the external world.

Fig. 36: How LLMs Are Used and Augmented.

A. LLM limitations

It is important to remember that LLMs are trained to predict a token. While fine-tuning and alignment improve their performance and add different dimensions to their abilities, there are still some important limitations that come up, particularly if they are used naively. Some of them include the following:

• They don't have state/memory. LLMs on their own cannot remember even what was sent to them in the previous prompt. This is an important limitation for the many use cases that require some form of state.

• They are stochastic/probabilistic. If you send the same prompt to an LLM several times, you are likely to get different responses. While there are parameters, in particular the temperature, to limit the variability of the response, this is an inherent property of their training that can create issues.

• They have stale information and, on their own, don't have access to external data. An LLM on its own does not even know the current time or day, and does not have access to any information that was not present in its training set.

• They are generally very large. This means that many costly GPU machines are needed for training and serving. In some cases, the largest models have poor SLAs, particularly in terms of latency.

• They hallucinate. LLMs do not have a notion of "truth", and they have usually been trained on a mix of good and bad content. They can produce very plausible but untruthful answers.

While the previous limitations can all become important for some applications, it is worth diving a bit deeper into the last one, hallucinations, since it has gathered a lot of interest over the past few months and has also sparked many of the prompting approaches and LLM augmentation methods we describe later.
Hallucination: In the realm of Large Language Models (LLMs), the phenomenon of "hallucinations" has garnered significant attention. Defined in the literature, notably in the "Survey of Hallucination in Natural Language Generation" paper [145], hallucination in an LLM is characterized as "the generation of content that is nonsensical or unfaithful to the provided source." This terminology, although rooted in psychological parlance, has been appropriated within the field of artificial intelligence.

Hallucinations in LLMs can be broadly categorized into two types:

1) Intrinsic Hallucinations: These directly conflict with the source material, introducing factual inaccuracies or logical inconsistencies.

2) Extrinsic Hallucinations: These, while not contradicting the source, are unverifiable against it, encompassing speculative or unconfirmable elements.

The definition of 'source' in LLM contexts varies with the task. In dialogue-based tasks, it refers to 'world knowledge', whereas in text summarization it pertains to the input text itself. This distinction plays a crucial role in evaluating and interpreting hallucinations. The impact of hallucinations is also highly context-dependent. For instance, in creative endeavors like poem writing, hallucinations might be deemed acceptable or even beneficial.

LLMs, trained on diverse datasets including the internet, books, and Wikipedia, generate text based on probabilistic models without an inherent understanding of truth or falsity. Recent advancements like instruction tuning and Reinforcement Learning from Human Feedback (RLHF) have attempted to steer LLMs towards more factual outputs, but the fundamental probabilistic nature and its inherent limitations remain. A recent study, "Sources of Hallucination by Large Language Models on Inference Tasks" [146], highlights two key aspects contributing to hallucinations in LLMs: the veracity prior and the relative frequency heuristic, underscoring the complexities inherent in LLM training and output generation.

Effective automated measurement of hallucinations in LLMs requires a combination of statistical and model-based metrics.

Statistical Metrics:

• Metrics like ROUGE [147] and BLEU [148] are common for assessing text similarity, focusing on intrinsic hallucinations.

• Advanced metrics such as PARENT [149], PARENT-T [150], and Knowledge F1 [151] are utilized when structured knowledge sources are available. These metrics, while effective, have limitations in capturing syntactic and semantic nuances.

Model-Based Metrics:

• IE-Based Metrics: Utilize Information Extraction models to simplify knowledge into relational tuples, then compare these with the source.

• QA-Based Metrics: Assess the overlap between generated content and the source through a question-answering framework (see [152]).

• NLI-Based Metrics: Use Natural Language Inference datasets to evaluate the truthfulness of a generated hypothesis based on a given premise (see [153]).

• Faithfulness Classification Metrics: Offer a refined assessment by creating task-specific datasets for a nuanced evaluation (see [154]).
Despite advances in automated metrics, human judgment remains a vital piece. It typically involves two methodologies:

1) Scoring: Human evaluators rate the level of hallucination within a predefined scale.

2) Comparative Analysis: Evaluators compare generated content against baseline or ground-truth references, adding an essential layer of subjective assessment.

FactScore [155] is a recent example of a metric that can be used both for human and model-based evaluation. The metric breaks an LLM generation into "atomic facts". The final score is computed as the sum of the accuracy of each atomic fact, giving each of them equal weight. Accuracy is a binary number that simply states whether the atomic fact is supported by the source. The authors implement different automation strategies that use LLMs to estimate this metric.
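A FactScore-style computation can be sketched as follows; `extract_facts` and `is_supported` are placeholders for the LLM- or model-based steps the authors automate, not their actual implementation:

```python
def fact_score(generation: str, source: str, extract_facts, is_supported) -> float:
    facts = extract_facts(generation)        # split the generation into atomic facts
    if not facts:
        return 0.0
    # Each fact receives a binary support judgment with equal weight.
    return sum(is_supported(f, source) for f in facts) / len(facts)
```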
Finally, mitigating hallucinations in LLMs is a multifaceted challenge, requiring tailored strategies to suit various applications. These include:

• Product Design and User Interaction Strategies, such as use case design, structuring the input/output, or providing mechanisms for user feedback.

• Data Management and Continuous Improvement. Maintaining and analyzing a tracking set of hallucinations is essential for ongoing model improvement.

• Prompt Engineering and Metaprompt Design. Many of the advanced prompting techniques described in IV-B, such as Retrieval Augmented Generation, directly address hallucination risks.

• Model Selection and Configuration for Hallucination Mitigation. For example, larger models with lower temperature settings usually perform better. Also, techniques such as RLHF or domain-specific fine-tuning can mitigate hallucination risks.

B. Using LLMs: Prompt Design and Engineering

A prompt in generative AI models is the textual input provided by users to guide the model's output. This can range from simple questions to detailed descriptions or specific tasks. Prompts generally consist of instructions, questions, input data, and examples. In practice, to elicit a desired response from an AI model, a prompt must contain either instructions or questions, with the other elements being optional. Advanced prompts involve more complex structures, such as "chain of thought" prompting, where the model is guided to follow a logical reasoning process to arrive at an answer.

Prompt engineering is a rapidly evolving discipline that shapes the interactions and outputs of LLMs and other generative AI models. The essence of prompt engineering lies in crafting the optimal prompt to achieve a specific goal with a generative model. This process is not only about instructing the model but also involves some understanding of the model's capabilities and limitations, and of the context within which it operates.

Prompt engineering transcends the mere construction of prompts; it requires a blend of domain knowledge, understanding of the AI model, and a methodical approach to tailor prompts for different contexts. This might involve creating templates that can be programmatically modified based on a given dataset or context. For example, generating personalized responses based on user data might use a template that is dynamically filled with relevant user information.

Furthermore, prompt engineering is an iterative and exploratory process, akin to traditional machine learning practices such as model evaluation or hyperparameter tuning. The rapid growth of this field suggests its potential to revolutionize certain aspects of machine learning, moving beyond traditional methods like feature or architecture engineering. On the other hand, traditional engineering practices such as version control and regression testing need to be adapted to this new paradigm, just as they were adapted to other machine learning approaches [156].

In the following paragraphs we detail some of the most interesting and popular prompt engineering approaches.

1) Chain of Thought (CoT): The Chain of Thought (CoT) technique, initially described in the paper "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" [34] by Google researchers, represents a pivotal advancement in prompt engineering for Large Language Models (LLMs). This approach hinges on the understanding that LLMs, while proficient in token prediction, are not inherently designed for explicit reasoning. CoT addresses this by guiding the model through essential reasoning steps.

CoT is based on making the implicit reasoning process of LLMs explicit. By outlining the steps required for reasoning, the model is directed closer to a logical and reasoned output, especially in scenarios demanding more than simple information retrieval or pattern recognition.

CoT prompting manifests in two primary forms:

1) Zero-Shot CoT: This form involves instructing the LLM to "think step by step", prompting it to deconstruct the problem and articulate each stage of reasoning.

2) Manual CoT: A more complex variant, it requires providing step-by-step reasoning examples as templates for the model. While yielding more effective results, it poses challenges in scalability and maintenance.

Manual CoT is more effective than zero-shot. However, the effectiveness of this example-based CoT depends on the choice of diverse examples, and constructing such examples of step-by-step reasoning by hand is hard and error-prone. That is where automatic CoT [157] comes into play.
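The two CoT variants can be rendered as simple prompt templates; the wording below is illustrative, not prescribed by the original paper:

```python
def zero_shot_cot(question: str) -> str:
    # The "think step by step" trigger phrase is the hallmark of zero-shot CoT.
    return f"Q: {question}\nA: Let's think step by step."

def manual_cot(question: str, worked_examples: list[str]) -> str:
    # worked_examples are hand-written question/reasoning/answer strings.
    return "\n\n".join(worked_examples) + f"\n\nQ: {question}\nA:"
```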
2) Tree of Thought (ToT): The Tree of Thought (ToT) [158] prompting technique is inspired by the concept of considering various alternative solutions or thought processes before converging on the most plausible one. ToT is based on the idea of branching out into multiple "thought trees", where each branch represents a different line of reasoning. This method allows the LLM to explore various possibilities and hypotheses, much like human cognitive processes in which multiple scenarios are considered before determining the most likely one.

A critical aspect of ToT is the evaluation of these reasoning paths. As the LLM generates different branches of thought, each is assessed for its validity and relevance to the query. This process involves real-time analysis and comparison of the branches, leading to the selection of the most coherent and logical outcome.

ToT is particularly useful in complex problem-solving scenarios where a single line of reasoning might not suffice. It allows LLMs to mimic a more human-like problem-solving approach, considering a range of possibilities before arriving at a conclusion. This technique enhances the model's ability to handle ambiguity, complexity, and nuanced tasks, making it a valuable tool in advanced AI applications.

3) Self-Consistency: Self-Consistency [159] utilizes an ensemble-based method, where the LLM is prompted to generate multiple responses to the same query. The consistency among these responses serves as an indicator of their accuracy and reliability.

The Self-Consistency approach is grounded in the principle that if an LLM generates multiple, similar responses to the same prompt, it is more likely that the response is accurate. This method involves asking the LLM to tackle a query multiple times, analyzing each response for consistency. The technique is especially useful in scenarios where factual accuracy and precision are paramount.

The consistency of responses can be measured using various methods. One common approach is to analyze the overlap in the content of the responses. Other methods may include comparing the semantic similarity of responses or employing more sophisticated techniques like BERT-scores or n-gram overlaps. These measures help in quantifying the level of agreement among the responses generated by the LLM.

Self-Consistency has significant applications in fields where the veracity of information is critical. It is particularly relevant in scenarios like fact-checking, where ensuring the accuracy of information provided by AI models is essential. By employing this technique, prompt engineers can enhance the trustworthiness of LLMs, making them more reliable for tasks that require high levels of factual accuracy.
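A common concrete instantiation of Self-Consistency is majority voting over sampled answers, sketched below; `generate` and `extract_answer` are placeholders for the sampling call and the answer-parsing step:

```python
from collections import Counter

def self_consistent_answer(generate, extract_answer, prompt, n=10):
    # Sample n reasoning paths (non-zero temperature) and vote on the answers.
    answers = [extract_answer(generate(prompt, temperature=0.7))
               for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```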
4) Reflection: Reflection [160] involves prompting LLMs to assess and potentially revise their own outputs based on reasoning about the correctness and coherence of their responses. The concept of Reflection centers on the ability of LLMs to engage in a form of self-evaluation. After generating an initial response, the model is prompted to reflect on its own output, considering factors like factual accuracy, logical consistency, and relevance. This introspective process can lead to the generation of revised or improved responses.

A key aspect of Reflection is the LLM's capacity for self-editing. By evaluating its initial response, the model can identify potential errors or areas of improvement. This iterative process of generation, reflection, and revision enables the LLM to refine its output, enhancing the overall quality and reliability of its responses.

5) Expert Prompting: Expert Prompting [161] enhances the capabilities of Large Language Models (LLMs) by simulating the responses of experts in various fields. This method involves prompting the LLMs to assume the role of an expert and respond accordingly, providing high-quality, informed answers. A key strategy within Expert Prompting is the multi-expert approach: the LLM is prompted to consider responses from multiple expert perspectives, which are then synthesized to form a comprehensive and well-rounded answer. This technique not only enhances the depth of the response but also incorporates a range of viewpoints, reflecting a more holistic understanding of the subject matter.

6) Chains: Chains refer to the method of linking multiple components in a sequence to handle complex tasks with Large Language Models (LLMs). This approach involves creating a series of interconnected steps or processes, each contributing to the final outcome. The concept of Chains is based on the idea of constructing a workflow where different stages or components are sequentially arranged. Each component in a Chain performs a specific function, and the output of one serves as the input for the next. This end-to-end arrangement allows for more complex and nuanced processing, as each stage can be tailored to handle a specific aspect of the task. Chains can vary in complexity and structure, depending on the requirements. In "PromptChainer: Chaining Large Language Model Prompts through Visual Programming" [162], the authors not only describe the main challenges in designing chains, but also describe a visual tool to support those tasks.
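A toy two-step chain is sketched below, where the output of a summarization step feeds a question-generation step; `llm` is a placeholder model call, and the prompts are illustrative:

```python
def two_step_chain(llm, document: str) -> str:
    summary = llm(f"Summarize the following document:\n{document}")
    return llm(f"Write three quiz questions about this summary:\n{summary}")
```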
7) Rails: Rails in advanced prompt engineering refer to a method of guiding and controlling the output of Large Language Models (LLMs) through predefined rules or templates. This approach is designed to ensure that the model's responses adhere to certain standards or criteria, enhancing the relevance, safety, and accuracy of the output. The concept of Rails involves setting up a framework, or a set of guidelines, that the LLM must follow while generating responses. These guidelines are typically defined using a modeling language or templates known as Canonical Forms, which standardize the way natural language sentences are structured and delivered. Rails can be designed for various purposes, depending on the specific needs of the application:

• Topical Rails: Ensure that the LLM sticks to a particular topic or domain.

• Fact-Checking Rails: Aimed at minimizing the generation of false or misleading information.

• Jailbreaking Rails: Prevent the LLM from generating responses that attempt to bypass its own operational constraints or guidelines.

8) Automatic Prompt Engineering (APE): Automatic Prompt Engineering (APE) [163] focuses on automating the process of prompt creation for Large Language Models (LLMs). APE seeks to streamline and optimize the prompt design process, leveraging the capabilities of LLMs themselves to generate and evaluate prompts. APE involves using LLMs in a self-referential manner, where the model is employed to generate, score, and refine prompts. This recursive use of LLMs enables the creation of high-quality prompts that are more likely to elicit the desired response or outcome.

The methodology of APE can be broken down into several key steps:

• Prompt Generation: The LLM generates a range of potential prompts based on a given task or objective.

• Prompt Scoring: Each generated prompt is then evaluated for its effectiveness, often using criteria like clarity, specificity, and likelihood of eliciting the desired response.

• Refinement and Iteration: Based on these evaluations, prompts are refined and iterated upon, further enhancing their quality and effectiveness.

C. Augmenting LLMs through external knowledge - RAG

One of the main limitations of pre-trained LLMs is their lack of up-to-date knowledge or access to private or use-case-specific information. This is where retrieval augmented generation (RAG) comes into the picture [164]. RAG, illustrated in figure 37, involves extracting a query from the input prompt and using that query to retrieve relevant information from an external knowledge source (e.g., a search engine or a knowledge graph, see figure 38). The relevant information is then added to the original prompt and fed to the LLM in order for the model to generate the final response. A RAG system includes three important components: Retrieval, Generation, and Augmentation [165].

Fig. 37: An example of synthesizing RAG with LLMs for a question answering application [166].

Fig. 38: An example of synthesizing a knowledge graph (KG) as a retriever with LLMs [167].
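The three RAG components map naturally onto a minimal pipeline sketch; `retriever` and `llm` are placeholders for an external knowledge source and a model call, and the prompt format is an illustrative assumption:

```python
def rag_answer(llm, retriever, question: str, k: int = 3) -> str:
    passages = retriever.search(question, top_k=k)       # Retrieval
    context = "\n".join(passages)                        # Augmentation
    prompt = (f"Answer using only the context below.\n"
              f"Context:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    return llm(prompt)                                   # Generation
```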
7) Rails: Rails in advanced prompt engineering refer to a method of guiding and controlling the output of Large Language Models (LLMs) through predefined rules or templates. This approach is designed to ensure that the model's responses adhere to certain standards or criteria, enhancing the relevance, safety, and accuracy of the output. The concept of Rails involves setting up a framework or a set of guidelines that the LLM must follow while generating responses. These guidelines are typically defined using a modeling language or templates known as Canonical Forms, which standardize the way natural language sentences are structured and delivered.

Rails can be designed for various purposes, depending on the specific needs of the application:

• Topical Rails: Ensure that the LLM sticks to a particular topic or domain.
• Fact-Checking Rails: Aimed at minimizing the generation of false or misleading information.
• Jailbreaking Rails: Prevent the LLM from generating responses that attempt to bypass its own operational constraints or guidelines.

8) Automatic Prompt Engineering (APE): Automatic Prompt Engineering (APE) [163] focuses on automating the process of prompt creation for Large Language Models (LLMs). APE seeks to streamline and optimize the prompt design process, leveraging the capabilities of LLMs themselves to generate and evaluate prompts. APE involves using LLMs in a self-referential manner, where the model is employed to generate, score, and refine prompts. This recursive use of LLMs enables the creation of high-quality prompts that are more likely to elicit the desired response or outcome.

The methodology of APE can be broken down into several key steps, sketched in code after this list:

• Prompt Generation: The LLM generates a range of potential prompts based on a given task or objective.
• Prompt Scoring: Each generated prompt is then evaluated for its effectiveness, often using criteria like clarity, specificity, and likelihood of eliciting the desired response.
• Refinement and Iteration: Based on these evaluations, prompts can be refined and iterated upon, further enhancing their quality and effectiveness.
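These three steps map naturally onto a generate-score-select loop. The sketch below scores candidate prompts by exact-match accuracy on a small labeled development set; this scoring choice, the `llm` helper, and the prompt wording are illustrative assumptions rather than the exact procedure of [163].

```python
def ape(task_description: str,
        dev_set: list[tuple[str, str]],
        n_candidates: int = 8) -> str:
    # Step 1: Prompt Generation -- ask the model for candidate instructions.
    candidates = [
        llm(f"Write one instruction that would make a model solve: {task_description}")
        for _ in range(n_candidates)
    ]

    # Step 2: Prompt Scoring -- here, exact-match accuracy on a small dev set.
    def score(prompt: str) -> float:
        hits = sum(
            llm(f"{prompt}\nInput: {x}\nOutput:").strip() == y
            for x, y in dev_set
        )
        return hits / len(dev_set)

    # Step 3: Refinement and Iteration -- keep the best candidate (a fuller
    # implementation would mutate the top candidates and loop).
    return max(candidates, key=score)
```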
C. Augmenting LLMs through external knowledge - RAG

One of the main limitations of pre-trained LLMs is their lack of up-to-date knowledge or access to private or use-case-specific information. This is where retrieval augmented generation (RAG) comes into the picture [164]. RAG, illustrated in Fig. 37, involves extracting a query from the input prompt and using that query to retrieve relevant information from an external knowledge source (e.g., a search engine or a knowledge graph, see Fig. 38). The relevant information is then added to the original prompt and fed to the LLM so that the model can generate the final response. A RAG system includes three important components: Retrieval, Generation, and Augmentation [165].

Fig. 37: An example of synthesizing RAG with LLMs for a question answering application [166].

Fig. 38: One example of synthesizing the KG as a retriever with LLMs [167].

a) RAG-aware prompting techniques: Because of the importance of RAG for building advanced LLM systems, several RAG-aware prompting techniques have been developed recently. One such technique is Forward-looking Active Retrieval Augmented Generation (FLARE).

Forward-looking Active Retrieval Augmented Generation (FLARE) [168] enhances the capabilities of Large Language Models (LLMs) by iteratively combining prediction and information retrieval. FLARE represents an evolution in the use of retrieval-augmented generation, aimed at improving the accuracy and relevance of LLM responses.

FLARE involves an iterative process in which the LLM actively predicts upcoming content and uses these predictions as queries to retrieve relevant information. This contrasts with traditional retrieval-augmented models, which typically retrieve information once and then proceed with generation; in FLARE, retrieval is dynamic and ongoing throughout the generation phase. Each sentence or segment generated by the LLM is evaluated for confidence. If the confidence level is below a certain threshold, the model uses the generated content as a query to retrieve relevant information, which is then used to regenerate or refine the sentence. This iterative process ensures that each part of the response is informed by the most relevant and current information available.
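The active retrieval loop can be sketched as follows, assuming a hypothetical `llm_with_confidence` helper that returns a generated sentence together with a confidence score (a real implementation, as in [168], would derive confidence from token probabilities) and a hypothetical `retrieve` function over some corpus.

```python
def llm_with_confidence(prompt: str) -> tuple[str, float]:
    """Stand-in: returns (next sentence, confidence in [0, 1])."""
    raise NotImplementedError

def retrieve(query: str) -> str:
    """Stand-in for a search engine or knowledge-graph lookup."""
    raise NotImplementedError

def flare_generate(question: str, threshold: float = 0.8,
                   max_sentences: int = 10) -> str:
    answer = ""
    for _ in range(max_sentences):
        sentence, confidence = llm_with_confidence(
            f"Question: {question}\nAnswer so far:{answer}\nNext sentence:"
        )
        if not sentence:
            break
        if confidence < threshold:
            # Low confidence: use the tentative sentence as a search query,
            # then regenerate the sentence with the retrieved context.
            context = retrieve(sentence)
            sentence, _ = llm_with_confidence(
                f"Question: {question}\nAnswer so far:{answer}\n"
                f"Relevant context: {context}\nNext sentence:"
            )
        answer += " " + sentence
    return answer.strip()
```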
For more details on the RAG framework and its relevant works, we refer the reader to the survey of retrieval augmented generation in [165].

D. Using External Tools

Retrieving information from an external knowledge source, as described above, is only one of the potential ways to augment an LLM. More generally, an LLM can access any number of external tools (e.g., an API to a service) to augment its functionality. In that regard, RAG can be seen as a specific instance of the broader category of so-called "tools".

Tools in this context are external functions or services that LLMs can utilize. These tools extend the range of tasks an LLM can perform, from basic information retrieval to complex interactions with external databases or APIs. In the paper "Toolformer: Language Models Can Teach Themselves to Use Tools" [169], the authors go beyond simple tool usage by training an LLM to decide what tool to use when, and even what parameters the API needs. The tools include two different search engines and a calculator; in the paper's examples, the LLM decides to call an external Q&A tool, a calculator, and a Wikipedia search engine. More recently, researchers at Berkeley have trained a new LLM called Gorilla [67] that beats GPT-4 at the use of APIs, a specific but quite general tool.

a) Tool-aware prompting techniques: Similarly to what was described for RAG, several tool-aware prompting approaches have been developed to make the usage of tools more scalable. A popular technique is the so-called Automatic Multi-step Reasoning and Tool-use (ART).

Automatic Multi-step Reasoning and Tool-use (ART) [170] is a prompt engineering technique that combines automated chain-of-thought prompting with the use of external tools. ART represents a convergence of multiple prompt engineering strategies, enhancing the ability of Large Language Models (LLMs) to handle complex tasks that require both reasoning and interaction with external data sources or tools.

ART involves a systematic approach where, given a task and input, the system first identifies similar tasks from a task library. These tasks are then used as examples in the prompt, guiding the LLM on how to approach and execute the current task. This method is particularly effective when tasks require a combination of internal reasoning and external data processing or retrieval.
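A rough sketch of the ART pattern follows: retrieve similar solved tasks from a small task library to serve as demonstrations, then execute any tool calls the model emits and splice the results back into its solution. The task library, the word-overlap matching, the `tool[arg]` call syntax, and the `TOOLS` registry are all illustrative assumptions, not the actual ART implementation.

```python
# Tiny stand-in for ART's task library: (task, worked solution with tool use).
TASK_LIBRARY = [
    ("What is 17% of 240?",
     "Compute calculator[0.17 * 240]. Answer: 40.8"),
]

# Hypothetical tool registry, reused by later sketches.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy; unsafe in general
    "search": lambda q: f"(stub) top result for {q!r}",
}

def art_solve(task: str) -> str:
    # Step 1: pick demonstrations by naive word overlap with the new task.
    demos = sorted(
        TASK_LIBRARY,
        key=lambda pair: -len(set(pair[0].lower().split()) &
                              set(task.lower().split())),
    )[:2]
    prompt = "".join(f"Task: {t}\nSolution: {s}\n\n" for t, s in demos)

    # Step 2: draft a solution, then execute tool[arg] calls and splice the
    # observations back into the text.
    draft = llm(prompt + f"Task: {task}\nSolution:")
    for name, run in TOOLS.items():
        while f"{name}[" in draft:
            arg = draft.split(f"{name}[", 1)[1].split("]", 1)[0]
            draft = draft.replace(f"{name}[{arg}]", run(arg), 1)
    return draft
```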
E. LLM Agents

The idea of AI agents has been well explored in the history of AI. An agent is typically an autonomous entity that can perceive its environment using its sensors, make a judgment based on the state it is currently in, and act accordingly based on the actions that are available to it.

In the context of LLMs, an agent refers to a system based on a specialized instantiation of an (augmented) LLM that is capable of performing specific tasks autonomously. These agents are designed to interact with users and the environment to make decisions based on the input and the intended goal of the interaction. Agents are based on LLMs equipped with the ability to access and use tools, and to make decisions based on the given input. They are designed to handle tasks that require a degree of autonomy and decision-making, typically beyond simple response generation.

The functionalities of a generic LLM-based agent include:

• Tool Access and Utilization: Agents have the capability to access external tools and services, and to utilize these resources effectively to accomplish tasks.
• Decision Making: They can make decisions based on the input, context, and the tools available to them, often employing complex reasoning processes.

As an example, an LLM that has access to a function (or an API) such as a weather API can answer any question about the weather of a specific place. In other words, it can use APIs to solve problems. Furthermore, if that LLM has access to an API that allows it to make purchases, a purchasing agent can be built that not only reads information from the external world but also acts on it [171].

Fig. 39: HuggingGPT: An agent-based approach to use tools and planning [image courtesy of [171]].

Fig. 40 shows another example of LLM-based agents, for conversational information seeking [36], where an LLM is augmented with a set of plug-and-play modules, including a working memory that tracks the dialog state, a policy that makes an execution plan for the task and selects the next system action, an action executor that performs the action selected by the policy (consolidating evidence from external knowledge, or prompting the LLM to generate responses), and a utility that assesses the alignment of the LLM's responses with user expectations or specific business requirements, and generates feedback to improve agent performance.

Fig. 40: An LLM-based agent for conversational information seeking. Courtesy of [36].

For more details on LLM-based AI agents, see the recent surveys [172], [173], [174].
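The modular agent of Fig. 40 can be caricatured in a few lines: a working memory that records the dialog state, a policy that selects the next action, and executors that carry it out. Everything below is a schematic assumption for illustration, not the implementation of [36].

```python
class DialogAgent:
    """Schematic agent: working memory + policy + action executors."""

    def __init__(self, actions: dict[str, callable]):
        self.memory: list[tuple[str, str, str]] = []  # dialog state
        self.actions = actions                        # action name -> executor

    def policy(self, user_input: str) -> str:
        # The LLM plans the next system action from the dialog state.
        return llm(
            f"Dialog so far: {self.memory}\nUser: {user_input}\n"
            f"Choose the next action from {list(self.actions)}:"
        ).strip()

    def step(self, user_input: str) -> str:
        action = self.policy(user_input)
        # Fall back to a plain LLM response if the action is unknown.
        executor = self.actions.get(action, lambda x: llm(x))
        result = executor(user_input)
        self.memory.append((user_input, action, result))  # update memory
        return result
```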
a) Prompt engineering techniques for agents: As with RAG and tools, prompt engineering techniques that specifically address the needs of LLM-based agents have been developed. Three such examples are Reasoning without Observation (ReWOO), Reason and Act (ReAct), and Dialog-Enabled Resolving Agents (DERA).

Reasoning without Observation (ReWOO) [175] aims to decouple reasoning from direct observations. ReWOO operates by enabling LLMs to formulate comprehensive reasoning plans or meta-plans without immediate reliance on external data or tools. This approach allows the agent to create a structured framework for reasoning that can be executed once the necessary data or observations are available. In ReWOO, the LLM initially develops a plan (a series of steps) that outlines how to approach and solve a given problem. This meta-planning phase is crucial, as it sets the stage for the agent to process information once it becomes available. The execution phase then involves integrating actual data or observations into the pre-specified plan, leading to coherent and contextually relevant responses. ReWOO offers significant advantages in terms of token efficiency and robustness to tool failure. It enables LLMs to handle tasks where immediate access to external data is not available, relying instead on a well-structured reasoning framework. This method is particularly advantageous in scenarios where data retrieval is costly, slow, or uncertain, allowing the LLM-based agent to maintain a high level of performance and reliability.

Reason and Act (ReAct) [176] prompts LLMs to generate not only verbal reasoning but also actionable steps, thus enhancing the model's dynamic problem-solving capabilities. ReAct is grounded in the principle of integrating reasoning with action. In this approach, the LLM is prompted to alternate between generating reasoning traces (explanations) and taking actions (steps or commands) in an interleaved manner, which allows the model to dynamically reason about a problem, and to propose and take concrete actions simultaneously.

Dialog-Enabled Resolving Agents (DERA) [177] are specialized AI agents that can engage in dialogue, resolve queries, and make decisions based on interactive exchanges. DERA is built on the idea of utilizing multiple agents within a dialog context, each with specific roles and functions. These agents can include Researchers, who gather and analyze information, and Deciders, who make final judgments based on the information provided. This division of roles allows for a well-organized and efficient approach to problem-solving and decision-making. DERA is particularly advantageous in scenarios requiring complex decision-making and problem-solving, such as medical diagnostics or customer service. The collaborative and interactive nature of DERA agents allows them to handle intricate queries with a level of depth and nuance that single-agent systems might struggle with. Moreover, this approach aligns well with human decision-making processes, making AI reasoning more relatable and trustworthy.
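The Thought/Action/Observation alternation of ReAct is easy to express as a loop. The trace format and `FINISH[answer]` convention below are illustrative rather than the exact prompts of [176]; the sketch reuses the hypothetical `llm` and `TOOLS` helpers from earlier listings.

```python
def react(question: str, max_steps: int = 5) -> str:
    trace = f"Question: {question}\n"
    for _ in range(max_steps):
        # Interleave a reasoning trace with a concrete action.
        thought = llm(trace + "Thought:").strip()
        action = llm(
            trace + f"Thought: {thought}\n"
            "Action (one of tool[arg], or FINISH[answer]):"
        ).strip()
        trace += f"Thought: {thought}\nAction: {action}\n"
        if action.startswith("FINISH[") and action.endswith("]"):
            return action[len("FINISH["):-1]
        if "[" not in action:
            continue  # malformed action; let the model try again
        name, arg = action.rstrip("]").split("[", 1)
        observation = TOOLS.get(name, lambda a: "unknown tool")(arg)
        trace += f"Observation: {observation}\n"
    return llm(trace + "Final answer:")
```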
V. POPULAR DATASETS FOR LLMS

Large language models exhibit promising accomplishments, but the main question that arises is how effectively they function and how their performance can be assessed in specific tasks or applications.

The evaluation of LLMs poses particular challenges due to the evolving landscape of their applications. The original intent behind developing LLMs was to boost the performance of NLP tasks such as translation, summarization, question-answering, and so on [178]. However, it is evident today that these models are finding utility across diverse domains, including code generation and finance. Moreover, the evaluation of LLMs encompasses several critical considerations such as fairness and bias, fact-checking, and reasoning. In this section, we outline the commonly used benchmarks for assessing LLMs. These benchmarks are categorized by the LLM capabilities they train or evaluate.

Fig. 41: Dataset applications.

Fig. 42: Datasets licensed under different licenses.

A. Datasets for Basic Tasks: language modeling/understanding/generation

This section provides an overview of the benchmarks and datasets suited to evaluating the basic abilities of LLMs.

• Natural Questions [179] is a QA dataset that consists of real anonymized, aggregated queries submitted to the Google search engine as questions. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present.

• TriviaQA [185] is designed for the QA task. This dataset comprises more than 650,000 question-answer-evidence triples. There are 95,000 question-answer pairs in this dataset, each authored by trivia enthusiasts and supported by an average of six independently sourced evidence documents. These documents are automatically acquired from Wikipedia or broader web search results. The dataset is categorized into two segments: one with authentic answers from Wikipedia and web domains, and a verified set containing the accurately answered questions along with their associated documents from both Wikipedia and the web.

• SQuAD [187] stands for "Stanford Question Answering Dataset" and is a crowdsourced reading comprehension dataset based on Wikipedia articles. It has approximately 100,000 question-answer pairs connected to more than 500 articles. The answers to these questions are typically text fragments or spans taken from the corresponding reading passages, and the questions may be unanswerable in some cases. The dataset is divided into three sets: an 80% training set, a 10% development set, and a 10% hidden test set.

• RACE [186] suits the reading comprehension task. This dataset is based on English tests completed by Chinese middle school and high school students aged 12 to 18, and it contains roughly 28,000 texts and 100,000 questions rigorously prepared by human specialists, primarily English instructors. The dataset covers a wide range of subjects that were purposefully chosen to assess students' comprehension and reasoning abilities. It is available in three subgroups: RACE-M, RACE-H, and RACE, where RACE-M refers to the middle school examinations, RACE-H denotes the high school tests, and RACE is the synthesis of RACE-M and RACE-H.

• BoolQ [188] is a yes/no question-answering dataset targeting the reading comprehension task. BoolQ includes 15,942 examples. Each example is a triplet that includes a question, a relevant paragraph, and the solution. Although the main intuition behind this dataset is reading comprehension, it can also be used for reasoning, natural language inference, and question-answering tasks.

• MultiRC [189] is another dataset that fits the reading comprehension task. MultiRC contains brief paragraphs as well as multi-sentence questions that can be answered using the information in the paragraph. The paragraphs in this dataset come from a variety of sources, including news, fiction, historical texts, Wikipedia articles, discussions on society and law, elementary school science textbooks, and 9/11 reports. Each question has many response choices, with one or more of them being correct, and answering the questions requires reasoning across several sentences. The MultiRC dataset encompasses around 6,000 multi-sentence questions gathered from over 800 paragraphs. On average, each question offers about two valid answer alternatives out of a total of five.

• MMLU [180] is intended to evaluate the knowledge gained in zero-shot and few-shot scenarios; that is, MMLU assesses both the general knowledge and the problem-solving ability of a model. It covers 57 subjects in STEM, the humanities, the social sciences, and other areas, with complexity ranging from elementary to advanced professional. It is worth mentioning that the main contribution of this dataset is for multi-task language understanding, question answering, and arithmetic reasoning.

• MBPP [181] stands for "Mostly Basic Python Problems" and provides a benchmark for evaluating the performance of models designed for code generation. The benchmark encompasses 974 short Python programs covering a wide range of topics, including fundamental programming concepts, standard library usage, and more. Each challenge comprises a task description, a code solution, and three automated test cases.

• HumanEval [182] is a dataset for the code generation task. It consists of 164 hand-crafted programming challenges, each accompanied by a function signature, docstring, code body, and multiple unit tests (a minimal sketch of this unit-test-based scoring follows the list). The main intuition behind developing this dataset is to guarantee the exclusion of its contents from the training datasets of code generation models.

• APPS [183] is designed for the code generation task, focusing on the Python programming language. The APPS dataset contains a collection of 232,444 Python programs, each with an average of 18 lines of Python code. Additionally, APPS offers access to a repository of 10,000 unique programming exercises, each with a text-based problem description. A final aspect to highlight is that it includes test cases.

• WikiSQL [184] is crafted for the code generation task and has 87,726 carefully labeled pairs of SQL queries and corresponding natural language questions from Wikipedia tables. The SQL queries comprise three subsets: test (17,284 examples), development (9,145 examples), and training (61,297 examples).
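Benchmarks like HumanEval and MBPP are scored by functional correctness: a model's completion counts as solved only if it passes the bundled unit tests (for HumanEval, the test code defines a check(candidate) function). A minimal checker in that style is sketched below; note that it executes untrusted code directly, whereas real harnesses sandbox execution.

```python
# Minimal functional-correctness scoring in the style of HumanEval/MBPP.
# WARNING: exec() runs untrusted model output; real harnesses sandbox this.
def passes_tests(completion: str, test_code: str, entry_point: str) -> bool:
    namespace: dict = {}
    try:
        exec(completion, namespace)      # define the candidate function
        exec(test_code, namespace)       # define the check(candidate) tests
        namespace["check"](namespace[entry_point])
        return True
    except Exception:
        return False

def pass_at_1(samples: list[tuple[str, str, str]]) -> float:
    """samples: (completion, test_code, entry_point) triples, one per problem."""
    return sum(passes_tests(*s) for s in samples) / len(samples)
```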
B. Datasets for Emergent: ICL, reasoning (CoT), instruction following

This section centers on the benchmarks and datasets employed to evaluate the emergent abilities of LLMs.

• GSM8K [190] is designed to evaluate a model's ability for multi-step mathematical reasoning. GSM8K includes 8.5K linguistically diverse grade school math word problems written by humans. The dataset is split into two sets: a training set with 7.5K problems and a test set with 1K problems. These problems need 2 to 8 steps to be solved, and solutions are mainly a series of elementary calculations using basic arithmetic operations.

• MATH [191] enables assessing how well models can solve math problems. The MATH dataset has 12,500 problems from high school math competitions. Each problem in the dataset has a step-by-step solution and a final answer enclosed in a box. The problems cover a wide range of topics and have different levels of complexity; there are seven subjects in total. Furthermore, the difficulty of each problem is rated based on the AoPS standards on a scale from 1 to 5, where 1 denotes the easiest problems in a subject and 5 the most difficult. In terms of formatting, all problems and solutions are presented using LaTeX and the Asymptote vector graphics language.

• HellaSwag [192] is designed to assess commonsense reasoning in LLMs. This benchmark includes 70,000 multiple-choice questions. Each question is derived from one of two domains, ActivityNet or WikiHow, and presents four answer choices regarding what might happen in the following situation. The correct answer provides an actual statement describing the upcoming event, while the three wrong answers are created to confuse machines.

C. Datasets for Augmented: using external knowledge/tools