Large Language Models: A Survey

Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, Jianfeng Gao

Abstract—Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks since the release of ChatGPT in November 2022. LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data, as predicted by scaling laws [1], [2]. The research area of LLMs, while very recent, is evolving rapidly in many different ways. In this paper, we review some of the most prominent LLMs, including three popular LLM families (GPT, LLaMA, PaLM), and discuss their characteristics, contributions, and limitations. We also give an overview of techniques developed to build and augment LLMs. We then survey popular datasets prepared for LLM training, fine-tuning, and evaluation, review widely used LLM evaluation metrics, and compare the performance of several popular LLMs on a set of representative benchmarks. Finally, we conclude the paper by discussing open challenges and future research directions.

I. INTRODUCTION

Language modeling is a long-standing research topic, dating back to the 1950s with Shannon's application of information theory to human language, where he measured how well simple n-gram language models predict or compress natural language text [3]. Since then, statistical language modeling has become fundamental to many natural language understanding and generation tasks, ranging from speech recognition and machine translation to information retrieval [4], [5], [6].

The recent advances on transformer-based large language models (LLMs), pre-trained on Web-scale text corpora, have significantly extended the capabilities of language models. For example, OpenAI's ChatGPT and GPT-4 can be used not only for natural language processing, but also as general task solvers that power Microsoft's Co-Pilot systems and can follow human instructions for complex new tasks, performing multi-step reasoning when needed. LLMs are thus becoming the basic building block for the development of general-purpose AI agents or artificial general intelligence (AGI).

As the field of LLMs is moving fast, with new findings, models and techniques being published in a matter of months or weeks [7], [8], [9], [10], [11], AI researchers and practitioners often find it challenging to figure out the best recipes to build LLM-powered AI systems for their tasks. This paper gives a timely survey of the recent advances on LLMs. We hope this survey will prove a valuable and accessible resource for students, researchers and developers.

LLMs are large-scale, pre-trained, statistical language models based on neural networks. The recent success of LLMs is an accumulation of decades of research and development of language models, which can be categorized into four waves that have different starting points and velocities: statistical language models, neural language models, pre-trained language models, and LLMs.

Statistical language models (SLMs) view text as a sequence of words, and estimate the probability of text as the product of word probabilities. The dominant form of SLMs are Markov chain models known as n-gram models, which compute the probability of a word conditioned on its immediately preceding n − 1 words. Since word probabilities are estimated using word and n-gram counts collected from text corpora, the model needs to deal with data sparsity (i.e., assigning zero probabilities to unseen words or n-grams) by using smoothing, where some probability mass of the model is reserved for unseen n-grams [12]. N-gram models are widely used in many NLP systems. However, these models are incomplete in that they cannot fully capture the diversity and variability of natural language due to data sparsity.
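To make the n-gram formulation concrete, the following minimal sketch estimates bigram probabilities from counts with add-one (Laplace) smoothing, one simple way of reserving probability mass for unseen n-grams; the toy corpus and helper names are illustrative and not taken from the survey.

```python
from collections import Counter

def train_bigram_counts(corpus):
    """Collect unigram and bigram counts from a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """P(w | w_prev) with add-one smoothing: unseen bigrams get a small, non-zero probability."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram_counts(corpus)
V = len(uni)
print(bigram_prob("the", "cat", uni, bi, V))   # seen bigram
print(bigram_prob("the", "bird", uni, bi, V))  # unseen bigram, still non-zero
```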
Early neural language models (NLMs) [13], [14], [15], [16] deal with data sparsity by mapping words to low-dimensional continuous vectors (embedding vectors) and predicting the next word based on an aggregation of the embedding vectors of its preceding words using neural networks. The embedding vectors learned by NLMs define a hidden space where the semantic similarity between vectors can be readily computed as their distance. This opens the door to computing the semantic similarity of any two inputs regardless of their forms (e.g., queries vs. documents in Web search [17], [18], sentences in different languages in machine translation [19], [20]) or modalities (e.g., image and text in image captioning [21], [22]). Early NLMs are task-specific models, in that they are trained on task-specific data and their learned hidden space is task-specific.
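As a minimal illustration of the point that similarity in the learned hidden space reduces to a vector distance, the sketch below computes the cosine similarity of two hypothetical embedding vectors; it does not depend on whether they came from a query, a document, or a caption.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for the outputs of a neural language model encoder.
query_vec = np.array([0.2, 0.7, 0.1])
doc_vec   = np.array([0.25, 0.65, 0.05])
print(cosine_similarity(query_vec, doc_vec))
```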
Pre-trained language models (PLMs), unlike early NLMs, are task-agnostic. This generality also extends to the learned hidden embedding space. The training and inference of PLMs follow the pre-training and fine-tuning paradigm, where language models with recurrent neural networks [23] or transformers [24], [25], [26] are pre-trained on Web-scale unlabeled text corpora for general tasks such as word prediction, and then fine-tuned to specific tasks using small amounts of (labeled) task-specific data. Recent surveys on PLMs include [8], [27], [28].

Large language models (LLMs) mainly refer to transformer-based neural language models¹ that contain tens to hundreds of billions of parameters and are pre-trained on massive text data, such as PaLM [31], LLaMA [32], and GPT-4 [33], as summarized in Table III.

¹ Recently, several very promising non-transformer LLMs have been proposed, such as the LLMs based on structured state space models [29], [30]. See Section VII for more details.

Fig. 1: LLM Capabilities.

Compared to PLMs, LLMs are not only much larger in model size, but also exhibit stronger language understanding and generation abilities, and, more importantly, emergent abilities that are not present in smaller-scale language models. As illustrated in Fig. 1, these emergent abilities include (1) in-context learning, where LLMs learn a new task from a small set of examples presented in the prompt at inference time, (2) instruction following, where LLMs, after instruction tuning, can follow the instructions for new types of tasks without using explicit examples, and (3) multi-step reasoning, where LLMs can solve a complex task by breaking it down into intermediate reasoning steps, as demonstrated in chain-of-thought prompting [34]. LLMs can also be augmented by using external knowledge and tools [35], [36] so that they can effectively interact with users and environments [37], and continually improve themselves using feedback data collected through interactions (e.g., via reinforcement learning from human feedback (RLHF)).
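As a concrete, hypothetical illustration of in-context learning: the prompt below specifies a classification task entirely in text, with two labeled examples supplied at inference time and no gradient update involved. The task, examples, and API call are illustrative, not from the survey.

```python
# A few-shot prompt: the labeled examples are part of the input text,
# so the model "learns" the task purely in context.
prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: The plot was gripping from start to finish. Sentiment: positive\n"
    "Review: I walked out halfway through. Sentiment: negative\n"
    "Review: A beautiful film with a moving soundtrack. Sentiment:"
)
# response = llm.generate(prompt)  # hypothetical API call; expected completion: " positive"
```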
Through advanced usage and augmentation techniques, LLMs can be deployed as so-called AI agents: artificial entities that sense their environment, make decisions, and take actions. Previous research has focused on developing agents for specific tasks and domains. The emergent abilities demonstrated by LLMs make it possible to build general-purpose AI agents based on LLMs. While LLMs are trained to produce responses in static settings, AI agents need to take actions to interact with dynamic environments. Therefore, LLM-based agents often need to augment LLMs to, e.g., obtain updated information from external knowledge bases, verify whether a system action produces the expected result, and cope with cases when things do not go as expected. We will discuss LLM-based agents in detail in Section IV.

In the rest of this paper, Section II presents an overview of the state of the art of LLMs, focusing on three LLM families (GPT, LLaMA and PaLM) and other representative models. Section III discusses how LLMs are built. Section IV discusses how LLMs are used and augmented for real-world applications. Sections V and VI review popular datasets and benchmarks for evaluating LLMs, and summarize the reported LLM evaluation results. Finally, Section VII concludes the paper by summarizing the challenges and future research directions.

Fig. 2: The paper structure.

II. LARGE LANGUAGE MODELS

In this section we start with a review of early pre-trained neural language models, as they are the base of LLMs, and then focus our discussion on three families of LLMs: GPT, LLaMA, and PaLM. Table I provides an overview of some of these models and their characteristics.

A. Early Pre-trained Neural Language Models

Language modeling using neural networks was pioneered by [38], [39], [40]. Bengio et al. [13] developed one of the first neural language models (NLMs) that are comparable to n-gram models. Then, [14] successfully applied NLMs to machine translation. The release of RNNLM (an open-source NLM toolkit) by Mikolov [41], [42] helped significantly popularize NLMs. Afterwards, NLMs based on recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) [19] and gated recurrent unit (GRU) [20], were widely used for many natural language applications, including machine translation, text generation and text classification [43].

Then, the invention of the Transformer architecture [44] marks another milestone in the development of NLMs. By applying self-attention to compute, in parallel, an "attention score" for every word in a sentence or document that models the influence each word has on the others, Transformers allow for much more parallelization than RNNs, which makes it possible to efficiently pre-train very large language models on large amounts of data on GPUs. These pre-trained language models (PLMs) can be fine-tuned for many downstream tasks.

We group early popular Transformer-based PLMs, based on their neural architectures, into three main categories: encoder-only, decoder-only, and encoder-decoder models. Comprehensive surveys of early PLMs are provided in [43], [28].

1) Encoder-only PLMs: As the name suggests, encoder-only models consist only of an encoder network. These models were originally developed for language understanding tasks, such as text classification, where the model needs to predict a class label for an input text. Representative encoder-only models include BERT and its variants, e.g., RoBERTa, ALBERT, DeBERTa, XLM, XLNet, and UNILM, as described below.

BERT (Bidirectional Encoder Representations from Transformers) [24] is one of the most widely used encoder-only language models. BERT consists of three modules: (1) an embedding module that converts input text into a sequence of embedding vectors, (2) a stack of Transformer encoders that converts the embedding vectors into contextual representation vectors, and (3) a fully connected layer that converts the representation vectors (at the final layer) to one-hot vectors. BERT is pre-trained using two objectives: masked language modeling (MLM) and next sentence prediction. The pre-trained BERT model can be fine-tuned by adding a classifier layer for many language understanding tasks, ranging from text classification and question answering to language inference. A high-level overview of the BERT framework is shown in Fig. 3. As BERT significantly improved the state of the art on a wide range of language understanding tasks when it was published, the AI community was inspired to develop many similar encoder-only language models based on BERT.

Fig. 3: Overall pre-training and fine-tuning procedures for BERT. Courtesy of [24].
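A minimal sketch of the masked-language-modeling objective described above: randomly replace a fraction of the input tokens with a [MASK] symbol and train the model to recover the originals. The masking rate and helper names are illustrative; BERT additionally replaces some selected tokens with random or unchanged tokens, which this sketch omits.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Return (corrupted_tokens, targets): targets hold the original token at masked
    positions and None elsewhere, mirroring how the MLM loss is computed only on masks."""
    corrupted, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            corrupted.append(mask_token)
            targets.append(tok)
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"]))
```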
TABLE I: High-level Overview of Popular Language Models

| Type | Model | #Parameters | Release | Base Model | Open Source | #Tokens | Training Dataset |
|---|---|---|---|---|---|---|---|
| Encoder-Only | BERT | 110M, 340M | 2018 | - | ✓ | 137B | BooksCorpus, English Wikipedia |
| Encoder-Only | RoBERTa | 355M | 2019 | - | ✓ | 2.2T | BooksCorpus, English Wikipedia, CC-NEWS, STORIES (a subset of Common Crawl), Reddit |
| Encoder-Only | ALBERT | 12M, 18M, 60M, 235M | 2019 | - | ✓ | 137B | BooksCorpus, English Wikipedia |
| Encoder-Only | DeBERTa | - | 2020 | - | ✓ | - | BooksCorpus, English Wikipedia, STORIES, Reddit content |
| Encoder-Only | XLNet | 110M, 340M | 2019 | - | ✓ | 32.89B | BooksCorpus, English Wikipedia, Giga5, Common Crawl, ClueWeb 2012-B |
| Decoder-Only | GPT-1 | 120M | 2018 | - | ✓ | 1.3B | BooksCorpus |
| Decoder-Only | GPT-2 | 1.5B | 2019 | - | ✓ | 10B | Reddit outbound links |
| Encoder-Decoder | T5 (Base) | 223M | 2019 | - | ✓ | 156B | Common Crawl |
| Encoder-Decoder | mT5 (Base) | 300M | 2020 | - | ✓ | - | New Common Crawl-based dataset in 101 languages (m Common Crawl) |
| Encoder-Decoder | BART (Base) | 139M | 2019 | - | ✓ | - | Corrupting text |
| GPT Family | GPT-3 | 125M, 350M, 760M, 1.3B, 2.7B, 6.7B, 13B, 175B | 2020 | - | × | 300B | Common Crawl (filtered), WebText2, Books1, Books2, Wikipedia |
| GPT Family | CODEX | 12B | 2021 | GPT | ✓ | - | Public GitHub software repositories |
| GPT Family | WebGPT | 760M, 13B, 175B | 2021 | GPT-3 | × | - | ELI5 |
| GPT Family | GPT-4 | 1.76T | 2023 | - | × | 13T | - |
| LLaMA Family | LLaMA1 | 7B, 13B, 33B, 65B | 2023 | - | ✓ | 1T, 1.4T | Online sources |
| LLaMA Family | LLaMA2 | 7B, 13B, 34B, 70B | 2023 | - | ✓ | 2T | Online sources |
| LLaMA Family | Alpaca | 7B | 2023 | LLaMA1 | ✓ | - | GPT-3.5 |
| LLaMA Family | Vicuna-13B | 13B | 2023 | LLaMA1 | ✓ | - | GPT-3.5 |
| LLaMA Family | Koala | 13B | 2023 | LLaMA | ✓ | - | Dialogue data |
| LLaMA Family | Mistral-7B | 7.3B | 2023 | - | ✓ | - | - |
| LLaMA Family | Code Llama | 34B | 2023 | LLaMA2 | ✓ | 500B | Publicly available code |
| LLaMA Family | LongLLaMA | 3B, 7B | 2023 | OpenLLaMA | ✓ | 1T | - |
| LLaMA Family | LLaMA-Pro-8B | 8.3B | 2024 | LLaMA2-7B | ✓ | 80B | Code and math corpora |
| LLaMA Family | TinyLlama-1.1B | 1.1B | 2024 | LLaMA1.1B | ✓ | 3T | SlimPajama, Starcoderdata |
| PaLM Family | PaLM | 8B, 62B, 540B | 2022 | - | × | 780B | Web documents, books, Wikipedia, conversations, GitHub code |
| PaLM Family | U-PaLM | 8B, 62B, 540B | 2022 | - | × | 1.3B | Web documents, books, Wikipedia, conversations, GitHub code |
| PaLM Family | PaLM-2 | 340B | 2023 | - | ✓ | 3.6T | Web documents, books, code, mathematics, conversational data |
| PaLM Family | Med-PaLM | 540B | 2022 | PaLM | × | 780B | HealthSearchQA, MedicationQA, LiveQA |
| PaLM Family | Med-PaLM 2 | - | 2023 | PaLM 2 | × | - | MedQA, MedMCQA, HealthSearchQA, LiveQA, MedicationQA |
| Other Popular LLMs | FLAN | 137B | 2021 | LaMDA-PT | ✓ | - | Web documents, code, dialog data, Wikipedia |
| Other Popular LLMs | Gopher | 280B | 2021 | - | × | 300B | MassiveText |
| Other Popular LLMs | ERNIE 4.0 | 10B | 2023 | - | × | 4TB | Chinese text |
| Other Popular LLMs | Retro | 7.5B | 2021 | - | × | 600B | MassiveText |
| Other Popular LLMs | LaMDA | 137B | 2022 | - | × | 168B | Public dialog data and web documents |
| Other Popular LLMs | ChinChilla | 70B | 2022 | - | × | 1.4T | MassiveText |
| Other Popular LLMs | Galactica-120B | 120B | 2022 | - | - | 450B | - |
| Other Popular LLMs | CodeGen | 16.1B | 2022 | - | ✓ | - | THE PILE, BIGQUERY, BIGPYTHON |
| Other Popular LLMs | BLOOM | 176B | 2022 | - | ✓ | 366B | ROOTS |
| Other Popular LLMs | Zephyr | 7.24B | 2023 | Mistral-7B | ✓ | 800B | Synthetic data |
| Other Popular LLMs | Grok-0 | 33B | 2023 | - | × | - | Online sources |
| Other Popular LLMs | ORCA-2 | 13B | 2023 | LLaMA2 | - | 2001B | - |
| Other Popular LLMs | StarCoder | 15.5B | 2023 | - | ✓ | 35B | GitHub |
| Other Popular LLMs | MPT | 7B | 2023 | - | ✓ | 1T | RedPajama, m Common Crawl, S2ORC, Common Crawl |
| Other Popular LLMs | Mixtral-8x7B | 46.7B | 2023 | - | ✓ | - | Instruction dataset |
| Other Popular LLMs | Falcon 180B | 180B | 2023 | - | ✓ | 3.5T | RefinedWeb |
| Other Popular LLMs | Gemini | 1.8B, 3.25B | 2023 | - | ✓ | - | Web documents, books, and code, image data, audio data, video data |
| Other Popular LLMs | DeepSeek-Coder | 1.3B, 6.7B, 33B | 2024 | - | ✓ | 2T | GitHub's Markdown and StackExchange |
| Other Popular LLMs | DocLLM | 1B, 7B | 2024 | - | × | 2T | IIT-CDIP Test Collection 1.0, DocBank |
RoBERTa [25] significantly improves the robustness of BERT using a set of model design choices and training strategies, such as modifying a few key hyperparameters, removing the next-sentence pre-training objective, and training with much larger mini-batches and learning rates. ALBERT [45] uses two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT: (1) splitting the embedding matrix into two smaller matrices, and (2) using repeating layers split among groups. DeBERTa (Decoding-enhanced BERT with disentangled attention) [26] improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a novel virtual adversarial training method is used for fine-tuning to improve models' generalization.

ELECTRA [46] uses a new pre-training task, known as replaced token detection (RTD), which is empirically proven to be more sample-efficient than MLM. Instead of masking the input, RTD corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained to predict whether a token in the corrupted input was replaced by a generated sample or not. RTD is more sample-efficient than MLM because the former is defined over all input tokens rather than just the small subset being masked out, as illustrated in Fig. 4.

Fig. 4: A comparison between replaced token detection and masked language modeling. Courtesy of [46].
XLMs [47] extended BERT to cross-lingual language models using two methods: (1) an unsupervised method that only relies on monolingual data, and (2) a supervised method that leverages parallel data with a new cross-lingual language model objective, as illustrated in Fig. 5. XLMs had obtained state-of-the-art results on cross-lingual classification, and on unsupervised and supervised machine translation, at the time they were proposed.

Fig. 5: Cross-lingual language model pretraining. The MLM objective is similar to BERT, but with continuous streams of text as opposed to sentence pairs. The TLM objective extends MLM to pairs of parallel sentences. To predict a masked English word, the model can attend to both the English sentence and its French translation, and is encouraged to align English and French representations. Courtesy of [47].

There are also encoder-only language models that leverage the advantages of auto-regressive (decoder) models for model training and inference. Two examples are XLNet and UNILM. XLNet [48] is based on Transformer-XL, pre-trained using a generalized autoregressive method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order. UNILM (UNIfied pre-trained Language Model) [49] is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. This is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction is conditioned on, as illustrated in Fig. 6. The pre-trained model can be fine-tuned for both natural language understanding and generation tasks.

Fig. 6: Overview of unified LM pre-training. The model parameters are shared across the LM objectives (i.e., bidirectional LM, unidirectional LM, and sequence-to-sequence LM). Courtesy of [49].
2) Decoder-only PLMs: Two of the most widely used decoder-only PLMs are GPT-1 and GPT-2, developed by OpenAI. These models lay the foundation for the more powerful LLMs that followed, i.e., GPT-3 and GPT-4.

GPT-1 [50] demonstrated for the first time that good performance over a wide range of natural language tasks can be obtained by Generative Pre-Training (GPT) of a decoder-only Transformer model on a diverse corpus of unlabeled text in a self-supervised learning fashion (i.e., next word/token prediction), followed by discriminative fine-tuning on each specific downstream task (with much fewer samples), as illustrated in Fig. 7. GPT-1 paves the way for subsequent GPT models, with each version improving upon the architecture and achieving better performance on various language tasks.

Fig. 7: High-level overview of GPT pretraining and fine-tuning steps. Courtesy of OpenAI.

GPT-2 [51] shows that language models are able to learn to perform specific natural language tasks without any explicit supervision when trained on a large WebText dataset consisting of millions of webpages. The GPT-2 model follows the model design of GPT-1 with a few modifications: layer normalization is moved to the input of each sub-block, additional layer normalization is added after the final self-attention block, the initialization is modified to account for the accumulation along the residual path by scaling the weights of residual layers, the vocabulary size is expanded to 50,257, and the context size is increased from 512 to 1024 tokens.

3) Encoder-Decoder PLMs: In [52], Raffel et al. show that almost all NLP tasks can be cast as a sequence-to-sequence generation task. Thus, an encoder-decoder language model is, by design, a unified model in that it can perform all natural language understanding and generation tasks. Representative encoder-decoder PLMs we review below are T5, mT5, MASS, and BART.

T5 [52] is a Text-to-Text Transfer Transformer model, in which transfer learning is effectively exploited for NLP via a unified framework that casts all NLP tasks as a text-to-text generation task. mT5 [53] is a multilingual variant of T5, which is pre-trained on a new Common Crawl-based dataset consisting of texts in 101 languages.

MASS (MAsked Sequence to Sequence pre-training) [54] adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence. The encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and the decoder predicts the masked fragment. In this way, MASS jointly trains the encoder and decoder for language embedding and generation, respectively.

BART [55] uses a standard sequence-to-sequence translation model architecture. It is pre-trained by corrupting text with an arbitrary noising function, and then learning to reconstruct the original text.
B. Large Language Model Families

Large language models (LLMs) mainly refer to transformer-based PLMs that contain tens to hundreds of billions of parameters. Compared to the PLMs reviewed above, LLMs are not only much larger in model size, but also exhibit stronger language understanding and generation abilities, as well as emergent abilities that are not present in smaller-scale models. In what follows, we review three LLM families: GPT, LLaMA, and PaLM, as illustrated in Fig. 8.

Fig. 8: Popular LLM Families.

1) The GPT Family: Generative Pre-trained Transformers (GPT) are a family of decoder-only Transformer-based language models, developed by OpenAI. This family consists of GPT-1, GPT-2, GPT-3, InstructGPT, ChatGPT, GPT-4, CODEX, and WebGPT. Although early GPT models, such as GPT-1 and GPT-2, are open-source, recent models, such as GPT-3 and GPT-4, are closed-source and can only be accessed via APIs. GPT-1 and GPT-2 models have been discussed in the early PLM subsection. We start with GPT-3 below.

GPT-3 [56] is a pre-trained autoregressive language model with 175 billion parameters. GPT-3 is widely considered the first LLM in that it not only is much larger than previous PLMs, but also for the first time demonstrates emergent abilities that are not observed in previous smaller PLMs. GPT-3 shows the emergent ability of in-context learning, which means GPT-3 can be applied to downstream tasks without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieved strong performance on many NLP tasks, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, and 3-digit arithmetic. Fig. 9 plots the performance of GPT-3 as a function of the number of examples in in-context prompts.

Fig. 9: GPT-3 shows that larger models make increasingly efficient use of in-context information. It shows in-context learning performance on a simple task requiring the model to remove random symbols from a word, both with and without a natural language task description. Courtesy of [56].

CODEX [57], released by OpenAI in 2021, is a general-purpose programming model that can parse natural language and generate code in response. CODEX is a descendant of GPT-3, fine-tuned for programming applications on code corpora collected from GitHub. CODEX powers Microsoft's GitHub Copilot.

WebGPT [58] is another descendant of GPT-3, fine-tuned to answer open-ended questions using a text-based web browser, facilitating users to search and navigate the web. Specifically, WebGPT is trained in three steps. The first is for WebGPT to learn to mimic human browsing behaviors using human demonstration data. Then, a reward function is learned to predict human preferences. Finally, WebGPT is refined to optimize the reward function via reinforcement learning and rejection sampling.

To enable LLMs to follow expected human instructions, InstructGPT [59] is proposed to align language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, a dataset of labeler demonstrations of the desired model behavior is collected, and GPT-3 is fine-tuned on this dataset. Then, a dataset of human-ranked model outputs is collected to further fine-tune the model using reinforcement learning. The method is known as Reinforcement Learning from Human Feedback (RLHF), as shown in Fig. 10. The resultant InstructGPT models have shown improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets.

Fig. 10: The high-level overview of RLHF. Courtesy of [59].
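The reward-modeling step of RLHF described above fits a scalar reward to human preference rankings; a common formulation (following the pairwise loss used for the InstructGPT reward model) maximizes the log-sigmoid of the score gap between a preferred and a rejected response. The sketch below shows that loss in isolation, with dummy scores standing in for a learned reward model; it is illustrative, not the survey's implementation.

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): minimized when the reward model
    scores the human-preferred response above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Dummy reward scores standing in for reward_model(prompt, response).
print(pairwise_preference_loss(1.8, 0.3))   # small loss: preferred response scored higher
print(pairwise_preference_loss(0.2, 1.1))   # large loss: ranking violated
```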
The most important milestone of LLM development is the launch of ChatGPT (Chat Generative Pre-trained Transformer) [60] on November 30, 2022. ChatGPT is a chatbot that enables users to steer a conversation to complete a wide range of tasks such as question answering, information seeking, text summarization, and more. ChatGPT is powered by GPT-3.5 (and later by GPT-4), a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

GPT-4 [33] is the latest and most powerful LLM in the GPT family. Launched in March 2023, GPT-4 is a multi-modal LLM in that it can take image and text as inputs and produce text outputs. While still less capable than humans in some of the most challenging real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers, as shown in Fig. 11. Like early GPT models, GPT-4 was first pre-trained to predict next tokens on large text corpora, and then fine-tuned with RLHF to align model behaviors with human-desired ones.

Fig. 11: GPT-4 performance on academic and professional exams, compared with GPT-3.5. Courtesy of [33].
2) The LLaMA Family: LLaMA is a collection of foundation language models released by Meta. Unlike GPT models, LLaMA models are open-source, i.e., the model weights are released to the research community under a noncommercial license. Thus, the LLaMA family grows rapidly, as these models are widely used by many research groups to develop better open-source LLMs to compete with the closed-source ones, or to develop task-specific LLMs for mission-critical applications.

The first set of LLaMA models [32] was released in February 2023, ranging from 7B to 65B parameters. These models are pre-trained on trillions of tokens collected from publicly available datasets. LLaMA uses the transformer architecture of GPT-3, with a few minor architectural modifications, including (1) using a SwiGLU activation function instead of ReLU, (2) using rotary positional embeddings instead of absolute positional embeddings, and (3) using root-mean-squared layer-normalization instead of standard layer-normalization. The open-source LLaMA-13B model outperforms the proprietary GPT-3 (175B) model on most benchmarks, making it a good baseline for LLM research.

In July 2023, Meta, in partnership with Microsoft, released the LLaMA-2 collection [61], which includes both foundation language models and chat models fine-tuned for dialog, known as LLaMA-2 Chat. The LLaMA-2 Chat models were reported to outperform other open-source models on many public benchmarks. Fig. 12 shows the training process of LLaMA-2 Chat. The process begins with pre-training LLaMA-2 using publicly available online data. Then, an initial version of LLaMA-2 Chat is built via supervised fine-tuning. Subsequently, the model is iteratively refined using RLHF, rejection sampling and proximal policy optimization. In the RLHF stage, the accumulation of human feedback for revising the reward model is crucial to prevent the reward model from being changed too much, which could hurt the stability of LLaMA model training.

Fig. 12: Training of LLaMA-2 Chat. Courtesy of [61].

Alpaca [62] is fine-tuned from the LLaMA-7B model using 52K instruction-following demonstrations generated in the style of self-instruct using GPT-3.5 (text-davinci-003). Alpaca is very cost-effective to train, especially for academic research. On the self-instruct evaluation set, Alpaca performs similarly to GPT-3.5, despite being much smaller.

The Vicuna team has developed a 13B chat model, Vicuna-13B, by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as an evaluator shows that Vicuna-13B achieves more than 90% of the quality of OpenAI's ChatGPT and Google's Bard, while outperforming other models such as LLaMA and Stanford Alpaca in more than 90% of cases. Fig. 13 shows the relative response quality of Vicuna and a few other well-known models, as judged by GPT-4. Another advantage of Vicuna-13B is its relatively limited computational demand for model training: the training cost of Vicuna-13B is merely $300.

Fig. 13: Relative response quality of Vicuna and a few other well-known models, as judged by GPT-4. Courtesy of Vicuna Team.

Like Alpaca and Vicuna, the Guanaco models [63] are also finetuned LLaMA models using instruction-following data. But the finetuning is done very efficiently using QLoRA, such that finetuning a 65B parameter model can be done on a single 48GB GPU. QLoRA back-propagates gradients through a frozen, 4-bit quantized pre-trained language model into Low Rank Adapters (LoRA). The best Guanaco model outperforms all previously released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of fine-tuning on a single GPU.
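To illustrate the Low-Rank Adapter idea that QLoRA builds on: the pre-trained weight matrix W is kept frozen, and a trainable low-rank update B·A is added on top, so only a small number of parameters receive gradients (QLoRA additionally stores the frozen weights in 4-bit precision, which this sketch omits). The numpy sketch below shows only the forward computation, with illustrative dimensions.

```python
import numpy as np

d_model, rank = 1024, 8
W_frozen = np.random.randn(d_model, d_model) * 0.02   # pre-trained weight, kept frozen
A = np.random.randn(rank, d_model) * 0.01             # trainable low-rank factor
B = np.zeros((d_model, rank))                         # trainable, initialized to zero

def lora_forward(x, scale=1.0):
    """y = x W^T + scale * x (BA)^T : the adapter adds a low-rank correction to the frozen layer."""
    return x @ W_frozen.T + scale * (x @ (B @ A).T)

x = np.random.randn(2, d_model)                        # a batch of 2 token vectors
print(lora_forward(x).shape)                           # (2, 1024)
```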
Koala [64] is yet another instruction-following language model built on LLaMA, but with a specific focus on interaction data that include user inputs and responses generated by highly capable closed-source chat models such as ChatGPT. The Koala-13B model performs competitively with state-of-the-art chat models according to human evaluation based on real-world user prompts.

Mistral-7B [65] is a 7B-parameter language model engineered for superior performance and efficiency. Mistral-7B outperforms the best open-source 13B model (LLaMA-2-13B) across all evaluated benchmarks, and the best open-source 34B model (LLaMA-34B) in reasoning, mathematics, and code generation. This model leverages grouped-query attention for faster inference, coupled with sliding window attention to effectively handle sequences of arbitrary length with a reduced inference cost.

The LLaMA family is growing rapidly, as more instruction-following models have been built on LLaMA or LLaMA-2, including Code LLaMA [66], Gorilla [67], Giraffe [68], Vigogne [69], Tulu 65B [70], Long LLaMA [71], and Stable Beluga2 [72], just to name a few.
3) The PaLM Family: The PaLM (Pathways Language Model) family was developed by Google. The first PaLM model [31] was announced in April 2022 and remained private until March 2023. It is a 540B parameter transformer-based LLM. The model is pre-trained on a high-quality text corpus consisting of 780 billion tokens that comprise a wide range of natural language tasks and use cases. PaLM is pre-trained on 6144 TPU v4 chips using the Pathways system, which enables highly efficient training across multiple TPU Pods. PaLM demonstrates continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. PaLM-540B not only outperforms state-of-the-art fine-tuned models on a suite of multi-step reasoning tasks, but is also on par with humans on the recently released BIG-bench benchmark.

The U-PaLM models of 8B, 62B, and 540B scales are continually trained on PaLM with UL2R, a method of continuing to train LLMs for a few more steps with UL2's mixture-of-denoiser objective [73]. An approximately 2x computational savings rate is reported.

U-PaLM is later instruction-finetuned as Flan-PaLM [74]. Compared to other instruction finetuning work mentioned above, Flan-PaLM's finetuning is performed using a much larger number of tasks, larger model sizes, and chain-of-thought data. As a result, Flan-PaLM substantially outperforms previous instruction-following models. For instance, Flan-PaLM-540B, which is instruction-finetuned on 1.8K tasks, outperforms PaLM-540B by a large margin (+9.4% on average). The finetuning data comprises 473 datasets, 146 task categories, and 1,836 total tasks, as illustrated in Fig. 14.

Fig. 14: Flan-PaLM finetuning consists of 473 datasets in the above task categories. Courtesy of [74].

PaLM-2 [75] is a more compute-efficient LLM with better multilingual and reasoning capabilities, compared to its predecessor PaLM. PaLM-2 is trained using a mixture of objectives. Through extensive evaluations on English, multilingual, and reasoning tasks, PaLM-2 significantly improves the model performance on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference than PaLM.

Med-PaLM [76] is a domain-specific PaLM, designed to provide high-quality answers to medical questions. Med-PaLM is finetuned on PaLM using instruction prompt tuning, a parameter-efficient method for aligning LLMs to new domains using a few exemplars. Med-PaLM obtains very encouraging results on many healthcare tasks, although it is still inferior to human clinicians. Med-PaLM 2 improves Med-PaLM via medical-domain finetuning and ensemble prompting [77]. Med-PaLM 2 scored up to 86.5% on the MedQA dataset (i.e., a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries), improving upon Med-PaLM by over 19% and setting a new state-of-the-art.
C. Other Representative LLMs

In addition to the models discussed in the previous subsections, there are other popular LLMs which do not belong to those three model families, yet they have achieved great performance and have pushed the LLMs field forward. We briefly describe these LLMs in this subsection.

FLAN: In [78], Wei et al. explored a simple method for improving the zero-shot learning abilities of language models. They showed that instruction tuning language models on a collection of datasets described via instructions substantially improves zero-shot performance on unseen tasks. They take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP datasets verbalized via natural language instruction templates. They call this instruction-tuned model FLAN. Fig. 15 provides a comparison of instruction tuning with pretrain–finetune and prompting.

Fig. 15: Comparison of instruction tuning with pretrain–finetune and prompting. Courtesy of [78].

Gopher: In [79], Rae et al. presented an analysis of Transformer-based language model performance across a wide range of model scales — from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models were evaluated on 152 diverse tasks, achieving state-of-the-art performance across the majority. The number of layers, the key/value size, and other hyper-parameters of the different model sizes are shown in Fig. 16.

Fig. 16: Model architecture details of Gopher with different numbers of parameters. Courtesy of [79].

T0: In [80], Sanh et al. developed T0, a system for easily mapping any natural language task into a human-readable prompted form. They converted a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. Then, a T0 encoder-decoder model is developed to consume textual inputs and produce target responses. The model is trained on a multitask mixture of NLP datasets partitioned into different tasks.
ERNIE 3.0: In [81], Sun et al. proposed a unified framework named ERNIE 3.0 for pre-training large-scale knowledge-enhanced models. It fuses an auto-regressive network and an auto-encoding network, so that the trained model can be easily tailored for both natural language understanding and generation tasks using zero-shot learning, few-shot learning or fine-tuning. They trained ERNIE 3.0 with 10 billion parameters on a 4TB corpus consisting of plain texts and a large-scale knowledge graph. Fig. 17 illustrates the model architecture of ERNIE 3.0.

Fig. 17: High-level model architecture of ERNIE 3.0. Courtesy of [81].

RETRO: In [82], Borgeaud et al. enhanced auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. Using a 2-trillion-token database, the Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 [83] on the Pile, despite using 25x fewer parameters. As shown in Fig. 18, Retro combines a frozen BERT retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training.

Fig. 18: Retro architecture. Left: simplified version where a sequence of length n = 12 is split into l = 3 chunks of size m = 4. For each chunk, we retrieve k = 2 neighbours of r = 5 tokens each. The retrieval pathway is shown on top. Right: details of the interactions in the CCA operator. Causality is maintained as neighbours of the first chunk only affect the last token of the first chunk and tokens from the second chunk. Courtesy of [82].

GLaM: In [84], Du et al. proposed a family of LLMs named GLaM (Generalist Language Model), which use a sparsely activated mixture-of-experts architecture to scale the model capacity while incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-, one-, and few-shot performance across 29 NLP tasks. Fig. 19 shows the high-level architecture of GLaM.

Fig. 19: GLaM model architecture. Each MoE layer (the bottom block) is interleaved with a Transformer layer (the upper block). Courtesy of [84].
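A rough sketch of the sparsely activated mixture-of-experts idea behind GLaM: a router scores the experts for each token and only the top-k (here, top-2) experts are actually evaluated, so the compute per token stays roughly constant even as the total parameter count grows. The dimensions, the softmax router, and the per-token routing shown here are illustrative simplifications, not GLaM's exact design.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

n_experts, d_model = 8, 16
router_w = np.random.randn(d_model, n_experts) * 0.1
experts = [np.random.randn(d_model, d_model) * 0.02 for _ in range(n_experts)]

def moe_layer(x, top_k=2):
    """Route a single token vector to its top-k experts and mix their outputs."""
    gate = softmax(x @ router_w)        # (n_experts,) routing probabilities
    top = np.argsort(gate)[-top_k:]     # indices of the k highest-scoring experts
    out = np.zeros_like(x)
    for i in top:                       # only k of the n_experts are computed
        out += gate[i] * (x @ experts[i])
    return out

print(moe_layer(np.random.randn(d_model)).shape)   # (16,)
```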
LaMDA: In [85], Thoppilan et al. presented LaMDA, a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. They showed that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements on the two key challenges of safety and factual grounding.

OPT: In [86], Zhang et al. presented Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which they share with researchers. The OPT models' architecture details are shown in Fig. 20.

Fig. 20: Different OPT models' architecture details. Courtesy of [86].

Chinchilla: In [2], Hoffmann et al. investigated the optimal model size and number of tokens for training a transformer language model under a given compute budget. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, they found that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size, the number of training tokens should also be doubled. They tested this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4x more data.
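A small worked example of the compute-optimal rule just described, using the standard rough approximation C ≈ 6·N·D for training FLOPs (N parameters, D tokens). Under the "scale both equally" rule, doubling the parameter count also doubles the token count and therefore quadruples the compute. The numbers and the roughly 20-tokens-per-parameter ratio below are illustrative back-of-the-envelope values, not figures from this survey.

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate of training compute: C ~ 6 * N * D."""
    return 6.0 * n_params * n_tokens

# Illustrative Chinchilla-scale run: 70B parameters on 1.4T tokens.
budget = train_flops(70e9, 1.4e12)
print(f"{budget:.3e} FLOPs")                  # ~5.9e23

# A model twice as large should also see twice the data, i.e. 4x the compute.
print(train_flops(140e9, 2.8e12) / budget)    # 4.0
print(1.4e12 / 70e9)                          # ~20 tokens per parameter
```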
Galactica: In [87], Taylor et al. introduced Galactica, a large language model that can store, combine and reason about scientific knowledge. They trained it on a large scientific corpus of papers, reference material, knowledge bases and many other sources. Galactica performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%.

CodeGen: In [88], Nijkamp et al. trained and released a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open sourced the training library JAXFORMER. They showed the utility of the trained model by demonstrating that it is competitive with the previous state-of-the-art on zero-shot Python code generation on HumanEval. They further investigated the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying sub-problems. They also constructed an open benchmark, the Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts.

AlexaTM: In [89], Soltan et al. demonstrated that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. They trained a 20 billion parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and showed that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM consists of 46 encoder layers, 32 decoder layers, 32 attention heads, and d_model = 4096.
Sparrow: In [90], Glaese et al. presented Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. They used reinforcement learning from human feedback to train their models, with two new additions to help human raters judge agent behaviour. The high-level pipeline of the Sparrow model is shown in Fig. 21.

Fig. 21: The Sparrow pipeline relies on human participation to continually expand a training set. Courtesy of [90].

Minerva: In [91], Lewkowycz et al. introduced Minerva, a large language model pretrained on general natural language data and further trained on technical content, to tackle previous LLMs' struggles with quantitative reasoning (such as solving mathematics, science, and engineering problems).

MoD: In [92], Tay et al. presented a generalized and unified perspective for self-supervision in NLP and showed how different pre-training objectives can be cast as one another, and how interpolating between different objectives can be effective. They proposed Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. This framework is known as Unifying Language Learning (UL2). An overview of the UL2 pretraining paradigm is shown in Fig. 22.

Fig. 22: An overview of UL2 pretraining paradigm. Courtesy of [92].

BLOOM: In [93], Scao et al. presented BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). An overview of the BLOOM architecture is shown in Fig. 23.

Fig. 23: An overview of BLOOM architecture. Courtesy of [93].

GLM: In [94], Zeng et al. introduced GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It was an attempt to open-source a 100B-scale model at least as good as GPT-3 (davinci) and to unveil how models of such a scale can be successfully pre-trained.

Pythia: In [95], Biderman et al. introduced Pythia, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. They provide public access to 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study.

Orca: In [96], Mukherjee et al. developed Orca, a 13-billion parameter model that learns to imitate the reasoning process of large foundation models. Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions, guided by teacher assistance from ChatGPT.
StarCoder: In [97], Li et al. introduced StarCoder and StarCoderBase. They are 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on one trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. They fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. They performed the most comprehensive evaluation of Code LLMs to date and showed that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model.

KOSMOS: In [98], Huang et al. introduced KOSMOS-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). Specifically, they trained KOSMOS-1 from scratch on web-scale multi-modal corpora, including arbitrarily interleaved text and images, image-caption pairs, and text data. Experimental results show that KOSMOS-1 achieves impressive performance on (i) language understanding, generation, and even OCR-free NLP (directly fed with document images), (ii) perception-language tasks, including multimodal dialogue, image captioning, and visual question answering, and (iii) vision tasks, such as image recognition with descriptions (specifying classification via text instructions).

Gemini: In [99], the Gemini team introduced a new family of multimodal models that exhibit promising capabilities across image, audio, video, and text understanding. The Gemini family includes three versions: Ultra for highly complex tasks, Pro for enhanced performance and deployability at scale, and Nano for on-device applications. The Gemini architecture is built on top of Transformer decoders, and is trained to support a 32k context length (via efficient attention mechanisms).

Some of the other popular LLM frameworks (or techniques used for efficient development of LLMs) include Inner-Monologue [100], Megatron-Turing NLG [101], LongFormer [102], OPT-IML [103], MeTaLM [104], Dromedary [105], Palmyra [106], Camel [107], Yalm [108], MPT [109], ORCA-2 [110], Gorilla [67], PAL [111], Claude [112], CodeGen 2 [113], Zephyr [114], Grok [115], Qwen [116], Mamba [30], Mixtral-8x7B [117], DocLLM [118], DeepSeek-Coder [119], FuseLLM-7B [120], TinyLlama-1.1B [121], and LLaMA-Pro-8B [122].

Fig. 24 provides an overview of some of the most representative LLM frameworks, and the relevant works that have contributed to the success of LLMs and helped to push their limits.

Fig. 24: Timeline of some of the most representative LLM frameworks (so far). In addition to large language models meeting our #parameters threshold, we included a few representative works which pushed the limits of language models and paved the way for their success (e.g., vanilla Transformer, BERT, GPT-1), as well as some small language models. ♣ shows entities that serve not only as models but also as approaches. ♦ shows only approaches.

III. HOW LLMS ARE BUILT

In this section, we first review the popular architectures used for LLMs, and then discuss data and modeling techniques, ranging from data preparation and tokenization to pre-training, instruction tuning, and alignment.

Once the model architecture is chosen, the major steps involved in training an LLM include: data preparation (collection, cleaning, deduping, etc.), tokenization, model pre-training (in a self-supervised learning fashion), instruction tuning, and alignment. We will explain each of them in a separate subsection below. These steps are also illustrated in Fig. 25.

Fig. 25: This figure shows the different components of LLMs.
A. Dominant LLM Architectures

The most widely used LLM architectures are encoder-only, decoder-only, and encoder-decoder. Most of them are based on the Transformer as the building block. Therefore we also review the Transformer architecture here.

1) Transformer: In a ground-breaking work [44], Vaswani et al. proposed the Transformer framework, which was originally designed for effective parallel computing using GPUs. The heart of the Transformer is the (self-)attention mechanism, which can capture long-term contextual information much more effectively using GPUs than the recurrence and convolution mechanisms. Fig. 26 provides a high-level overview of how the Transformer works. In this section we provide an overview of the main elements and variants; see [44], [123] for more details.

The Transformer language model architecture, originally proposed for machine translation, consists of an encoder and a decoder. The encoder is composed of a stack of N = 6 identical Transformer layers. Each layer has two sub-layers. The first one is a multi-head self-attention layer, and the other one is a simple position-wise fully connected feed-forward network. The decoder is composed of a stack of 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder has a third sub-layer, which performs multi-head attention over the output of the encoder stack. The attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. Instead of performing a single attention function with d_model-dimensional keys, values and queries, it is found to be beneficial to linearly project the queries, keys and values h times, with different learned linear projections, to d_k, d_k and d_v dimensions, respectively. Positional encoding is incorporated to fuse information about the relative or absolute position of the tokens in the sequence.

Fig. 26: High-level overview of the Transformer model. Courtesy of [44].
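To make the prose above concrete, here is a minimal numpy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, for a single head; multi-head attention applies h such attentions to different learned projections of the same inputs and concatenates the results. The shapes are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each output is a weighted sum of the values,
    weighted by the compatibility of the query with every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (n_queries, n_keys) attention scores
    return softmax(scores, axis=-1) @ V    # (n_queries, d_v)

n_tokens, d_k, d_v = 5, 64, 64
Q = np.random.randn(n_tokens, d_k)
K = np.random.randn(n_tokens, d_k)
V = np.random.randn(n_tokens, d_v)
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 64)
```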
B. Data Cleaning

Data quality is crucial to the performance of language models trained on it. Data cleaning techniques such as filtering and deduplication are shown to have a big impact on model performance.

As an example, in Falcon40B [124], Penedo et al. showed that properly filtered and deduplicated web data alone can lead to powerful models, even significantly outperforming state-of-the-art models trained on The Pile. Despite extensive filtering, they were able to obtain five trillion tokens from CommonCrawl. They also released an extract of 600 billion tokens from their RefinedWeb dataset, and 1.3/7.5B parameter language models trained on it. Fig 27 shows the refinement process of CommonCrawl data in this work.

Fig. 27: Subsequent stages of Macrodata Refinement remove nearly 90% of the documents originally in CommonCrawl. Courtesy of [124].

1) Data Filtering: Data filtering aims to enhance the quality of training data and the effectiveness of the trained LLMs. Common data filtering techniques include:

Removing Noise: refers to eliminating irrelevant or noisy data that might impact the model's ability to generalize well. As an example, one can think of removing false information from the training data, to lower the chance of the model generating false responses. Two mainstream approaches for quality filtering are classifier-based and heuristic-based frameworks.

Handling Outliers: Identifying and handling outliers or anomalies in the data to prevent them from disproportionately influencing the model.

Addressing Imbalances: Balancing the distribution of classes or categories in the dataset to avoid biases and ensure fair representation. This is especially useful for responsible model training and evaluation.

Text Preprocessing: Cleaning and standardizing text data by removing stop words, punctuation, or other elements that may not contribute significantly to the model's learning.

Dealing with Ambiguities: Resolving or excluding ambiguous or contradictory data that might confuse the model during training. This can help the model to provide more definite and reliable answers.
2) Deduplication: De-duplication refers to the process of removing duplicate instances or repeated occurrences of the same data in a dataset. Duplicate data points can introduce biases in the model training process and reduce diversity, as the model may learn from the same examples multiple times, potentially leading to overfitting on those particular instances. Some works [125] have shown that de-duplication improves a model's ability to generalize to new, unseen data.

The de-duplication process is particularly important when dealing with large datasets, as duplicates can unintentionally inflate the importance of certain patterns or characteristics. This is especially relevant in NLP tasks, where diverse and representative training data is crucial for building robust language models.

The specific de-duplication method can vary based on the nature of the data and the requirements of the particular language model being trained. It may involve comparing entire data points or specific features to identify and eliminate duplicates. At the document level, existing works mainly rely on the overlap ratio of high-level features (e.g., n-gram overlap) between documents to detect duplicate samples.
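As a concrete illustration of document-level de-duplication based on n-gram overlap, the following minimal sketch flags near-duplicate pairs by Jaccard similarity over word 3-grams. The threshold and the use of exact n-gram sets (rather than MinHash or other scalable approximations) are simplifying assumptions for illustration:

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(docs, n=3, threshold=0.8):
    # Return index pairs of documents whose n-gram overlap exceeds the threshold.
    grams = [ngrams(d, n) for d in docs]
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(grams[i], grams[j]) >= threshold:
                pairs.append((i, j))
    return pairs

docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy dog !",
    "an entirely different sentence about language models",
]
print(near_duplicates(docs))   # [(0, 1)]

In practice, web-scale corpora rely on approximate methods such as MinHash or suffix-array matching, since exact pairwise comparison does not scale to billions of documents.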
C. Tokenizations

Tokenization refers to the process of converting a sequence of text into smaller parts, known as tokens. While the simplest tokenization tool simply chops text into tokens based on white space, most tokenization tools rely on a word dictionary. However, out-of-vocabulary (OOV) words are a problem in this case because the tokenizer only knows the words in its dictionary. To increase the coverage of dictionaries, popular tokenizers used for LLMs are based on sub-words, which can be combined to form a large number of words, including words unseen in the training data or words in different languages. In what follows, we describe three popular tokenizers.

1) BytePairEncoding: BytePairEncoding is originally a type of data compression algorithm that uses frequent patterns at the byte level to compress the data. By definition, this algorithm mainly tries to keep frequent words in their original form and break down ones that are not common. This simple paradigm keeps the vocabulary not very large, but also good enough to represent common words at the same time. Morphological forms of frequent words can also be represented very well if the suffix or prefix is also commonly present in the training data of the algorithm.

2) WordPieceEncoding: This algorithm is mainly used for very well-known models such as BERT and Electra. At the beginning of training, the algorithm takes all the alphabet from the training data to make sure that nothing will be left as UNK (unknown) from the training dataset. This case happens when the model is given an input that cannot be tokenized by the tokenizer, mostly when some characters are not tokenizable by it. Similar to BytePairEncoding, it tries to maximize the likelihood of putting all tokens in the vocabulary based on their frequency.

3) SentencePieceEncoding: Although both tokenizers described before are strong and have many advantages compared to white-space tokenization, they still take the assumption that words are always separated by white space for granted. This assumption is not always true; in fact, in some languages words can be corrupted by many noisy elements such as unwanted spaces or even invented words. SentencePieceEncoding tries to address this issue.
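The core of BytePairEncoding described above can be illustrated with a minimal sketch that repeatedly merges the most frequent adjacent symbol pair in a toy corpus. Real tokenizers operate on bytes, handle special tokens, and learn far larger vocabularies, so this is illustrative only:

from collections import Counter

def learn_bpe(words, num_merges=10):
    # Each word is represented as a tuple of symbols (characters to start with), with a count.
    vocab = Counter({tuple(w): c for w, c in words.items()})
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += count
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)   # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])   # merge the pair into one symbol
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += count
        vocab = new_vocab
    return merges

corpus = {"low": 5, "lower": 2, "newest": 6, "widest": 3}
print(learn_bpe(corpus, num_merges=5))   # learned merges, e.g. ('e', 's'), ('es', 't'), ...

Frequent substrings such as "est" end up as single tokens, while rare words fall back to smaller pieces, which is exactly the behavior the text describes.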
D. Positional Encoding

1) Absolute Positional Embeddings: Absolute Positional Embeddings (APE) [44] have been used in the original Transformer model to preserve the information of sequence order. The positional information of words is therefore added to the input embeddings at the bottom of both the encoder and decoder stacks. There are various options for positional encodings, either learned or fixed. In the vanilla Transformer, sine and cosine functions are employed for this purpose. The main drawback of using APE in Transformers is the restriction to a certain number of tokens. Additionally, APE fails to account for the relative distances between tokens.

2) Relative Positional Embeddings: Relative Positional Embeddings (RPE) [126] involve extending self-attention to take into account the pairwise links between input elements. RPE is added to the model at two levels: first as an additional component to the keys, and subsequently as a sub-component of the values matrix. This approach looks at the input as a fully connected graph with labels and directed edges. In the case of linear sequences, edges can capture information about the relative position differences between input elements. A clipping distance, represented as k (with 2 ≤ k ≤ n − 4), specifies the maximum limit on relative locations. This allows the model to make reasonable predictions for sequence lengths that are not part of the training data.
3) Rotary Position Embeddings: Rotary Positional Embedding (RoPE) [127] tackles problems with the existing approaches. Learned absolute positional encodings can lack generalizability and meaningfulness, particularly when sentences are short. Moreover, current methods like T5's positional embedding face challenges with constructing a full attention matrix between positions. RoPE uses a rotation matrix to encode the absolute position of words and simultaneously includes explicit relative position details in self-attention. RoPE brings useful features like flexibility with sentence lengths, a decrease in word dependency as relative distances increase, and the ability to improve linear self-attention with relative position encoding. GPT-NeoX-20B, PaLM, CODEGEN, and LLaMA are among the models that take advantage of RoPE in their architectures.

4) Relative Positional Bias: The concept behind this type of positional embedding is to facilitate extrapolation during inference for sequences longer than those encountered in training. In [128] Press et al. proposed Attention with Linear Biases (ALiBi). Instead of simply adding positional embeddings to word embeddings, they introduced a bias to the attention scores of query-key pairs, imposing a penalty proportional to their distance. ALiBi is leveraged in the BLOOM model.

Fig. 28: Various positional encodings are employed in LLMs: (a) Absolute Positional Embeddings [129], (b) Relative Positional Embeddings, (c) Rotary Positional Embedding [127], (d) Relative Positional Bias [128].
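The rotation idea behind RoPE can be sketched in a few lines: each pair of embedding dimensions is rotated by an angle proportional to the token position, so dot products between rotated queries and keys depend only on relative positions. The base of 10000 and the pairing of dimension i with dimension i + d/2 follow one common implementation convention; everything else here is a simplified illustration rather than a drop-in implementation:

import numpy as np

def rope(x, base=10000.0):
    # x: (seq_len, d) with d even; returns the same array with RoPE applied.
    seq_len, d = x.shape
    half = d // 2
    inv_freq = base ** (-np.arange(half) / half)        # one frequency per dimension pair
    angles = np.outer(np.arange(seq_len), inv_freq)      # (seq_len, half): position * frequency
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                    # split dimensions into pairs
    return np.concatenate([x1 * cos - x2 * sin,          # 2-D rotation applied pair-wise
                           x1 * sin + x2 * cos], axis=-1)

q = np.random.default_rng(0).normal(size=(6, 8))
k = np.random.default_rng(1).normal(size=(6, 8))
scores = rope(q) @ rope(k).T    # attention scores now depend on relative positions
print(scores.shape)             # (6, 6)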
E. Model Pre-training

Pre-training is the very first step in the large language model training pipeline, and it helps LLMs acquire fundamental language understanding capabilities, which can be useful in a wide range of language-related tasks. During pre-training, the LLM is trained on a massive amount of (usually) unlabeled text, usually in a self-supervised manner. There are different approaches used for pre-training, such as next sentence prediction [24]; the two most common ones are next token prediction (autoregressive language modeling) and masked language modeling.

In the autoregressive language modeling framework, given a sequence of n tokens x_1, ..., x_n, the model tries to predict the next token x_{n+1} (and sometimes the next sequence of tokens) in an auto-regressive fashion. One popular loss function in this case is the log-likelihood of predicted tokens, as shown in Eq. (1):

L_{ALM}(x) = \sum_{i=1}^{N} p(x_{i+n} | x_i, ..., x_{i+n-1})    (1)

Given the auto-regressive nature of this framework, decoder-only models are naturally better suited to learn how to accomplish these tasks.

In masked language modeling, some words are masked in a sequence and the model is trained to predict the masked words based on the surrounding context. Sometimes this approach is also referred to as denoising autoencoding. If we denote the masked/corrupted samples in the sequence x as x̃, then the training objective of this approach can be written as:

L_{MLM}(x) = \sum_{i=1}^{N} p(x̃ | x \setminus x̃)    (2)

More recently, Mixture of Experts (MoE) [130], [131] have become very popular in the LLM space too. MoEs enable models to be pre-trained with much less compute, which means one can dramatically scale up the model or dataset size with the same compute budget as a dense model. MoE consists of two main elements: Sparse MoE layers, which are used instead of dense feed-forward network (FFN) layers and have a certain number of "experts" (e.g., 8), where each expert is a neural network (in practice the experts are FFNs, but they can also be more complex networks); and a gate network or router, which determines which tokens are sent to which expert. It is worth noting that one can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pre-trained at the same time as the rest of the network. Fig 29 provides an illustration of a Switch Transformer encoder block, which is used in MoE.

Fig. 29: Illustration of a Switch Transformer encoder block. The dense feed-forward network (FFN) layer present in the Transformer is replaced with a sparse Switch FFN layer (light blue). Courtesy of [131].
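The next-token prediction objective in Eq. (1) amounts, in practice, to a cross-entropy loss between the model's output distribution at each position and the token that actually follows. The sketch below computes it from raw logits with NumPy; the tiny vocabulary and random logits are illustrative stand-ins for a real model and corpus:

import numpy as np

def next_token_loss(logits, token_ids):
    # logits: (seq_len, vocab_size); token_ids: (seq_len,) integer-encoded text.
    # Position t predicts token t+1, so the last position has no target.
    logits, targets = logits[:-1], token_ids[1:]
    logits = logits - logits.max(axis=-1, keepdims=True)                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]                   # negative log-likelihood per position
    return nll.mean()

rng = np.random.default_rng(0)
vocab_size, seq_len = 50, 12
logits = rng.normal(size=(seq_len, vocab_size))     # stand-in for a decoder-only model's output
tokens = rng.integers(0, vocab_size, size=seq_len)  # stand-in for a tokenized training sequence
print(round(float(next_token_loss(logits, tokens)), 3))

The masked language modeling objective in Eq. (2) is computed the same way, except that the loss is only evaluated at the masked positions and the model may attend to context on both sides.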
F. Fine-tuning and Instruction Tuning

Early language models such as BERT, trained using self-supervision as explained in section III-E, were not able to perform specific tasks. In order for the foundation model to be useful, it needed to be fine-tuned to a specific task with labeled data (so-called supervised fine-tuning, or SFT for short). For example, in the original BERT paper [24], the model was fine-tuned to 11 different tasks. While more recent LLMs no longer require fine-tuning to be used, they can still benefit from task- or data-specific fine-tuning. For example, OpenAI reports that the much smaller GPT-3.5 Turbo model can outperform GPT-4 when fine-tuned with task-specific data (see https://platform.openai.com/docs/guides/fine-tuning).

Fine-tuning does not need to be performed on a single task though, and there are different approaches to multi-task fine-tuning (see e.g. Mahabi et al. [132]). Fine-tuning to one or more tasks is known to improve results and reduce the complexity of prompt engineering, and it can serve as an alternative to retrieval augmented generation. Furthermore, there are other reasons why it might be advisable to fine-tune. For example, one might want to fine-tune to expose the model to new or proprietary data that it has not been exposed to during pre-training.

An important reason to fine-tune LLMs is to align the responses to the expectations humans will have when providing instructions through prompts. This is the so-called instruction tuning [133]. We dive into the details of how to design and engineer prompts in section IV-B, but in the context of instruction tuning, it is important to understand that the instruction is a prompt that specifies the task that the LLM should accomplish. Instruction tuning datasets such as Natural Instructions [134] include not only the task definition but other components such as positive/negative examples or things to avoid.

The specific approach and instruction datasets used to instruction-tune an LLM vary, but, generally speaking, instruction-tuned models outperform the original foundation models they are based on. For example, InstructGPT [59] outperforms GPT-3 on most benchmarks. The same is true for Alpaca [62] when compared to LLaMA.

Self-Instruct [135], proposed by Wang et al., is also a popular approach along this line, in which they introduced a framework for improving the instruction-following capabilities of pre-trained language models by bootstrapping their own generations. Their pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to fine-tune the original model.

G. Alignment

AI alignment is the process of steering AI systems towards human goals, preferences, and principles. LLMs, pre-trained for word prediction, often exhibit unintended behaviors. For example, they might generate content that is toxic, harmful, misleading, or biased.

Instruction tuning, discussed above, gets LLMs a step closer to being aligned. However, in many cases it is important to include further steps to improve the alignment of the model and avoid unintended behaviors (according to very recent research by Ethayarajh et al. [136], further alignment besides SFT mainly improves models of at least 7B parameters; for smaller models, SFT is sufficient). We review the most popular approaches to alignment in this subsection.

RLHF (reinforcement learning from human feedback) and RLAIF (reinforcement learning from AI feedback) are two popular approaches. RLHF uses a reward model to learn alignment from human feedback. This reward model, after being tuned, is able to rate different outputs and score them according to the alignment preferences given by humans. The reward model gives feedback to the original LLM, and this feedback is used to tune the LLM further [137]. Reinforcement learning from AI feedback, on the other hand, directly connects a pretrained and well-aligned model to the LLM and helps it to learn from larger and more aligned models [138].
In another recent work known as DPO [139], Rafailov et al. discussed that RLHF is a complex and often unstable procedure, and tried to address this with a new approach. They leveraged a mapping between reward functions and optimal policies to show that this constrained reward maximization problem can be optimized exactly with a single stage of policy training, essentially solving a classification problem on the human preference data. The resulting algorithm, which they called Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning. They observed that fine-tuning with DPO exceeds RLHF's ability to control the sentiment of generations and improves response quality in summarization. Fig 30 shows a high-level comparison between DPO and RLHF.

Fig. 30: DPO optimizes for human preferences while avoiding reinforcement learning. Existing methods for fine-tuning language models with human feedback first fit a reward model to a dataset of prompts and human preferences over pairs of responses, and then use RL to find a policy that maximizes the learned reward. In contrast, DPO directly optimizes for the policy best satisfying the preferences with a simple classification objective, without an explicit reward function or RL. Courtesy of [139].
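As a rough illustration of why DPO reduces to a classification problem, the sketch below computes the commonly stated DPO objective from per-sequence log-probabilities under the policy being tuned and a frozen reference model. The log-probability values and the choice of beta are placeholders, and this is a simplified reading of the objective rather than the authors' full implementation:

import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit "rewards" are the log-ratios between the policy and the frozen reference model.
    reward_chosen = beta * (logp_chosen - ref_logp_chosen)
    reward_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Binary classification: push the preferred response's implicit reward above the rejected one's.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log(sigmoid(margin))

# Placeholder log-probabilities for a preferred (y_w) and dispreferred (y_l) response.
print(round(dpo_loss(-12.3, -15.9, -13.0, -14.8), 4))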
Even more recently, Ethayarajh et al. proposed a new alignment approach called Kahneman-Tversky Optimization (KTO) [136]. Unlike existing state-of-the-art approaches, KTO does not require paired preference data (x, y_w, y_l); it only needs (x, y) and knowledge of whether y is desirable or undesirable. KTO-aligned models are shown to be as good as or better than DPO-aligned models at scales from 1B to 30B, despite not using paired preferences. KTO is also far easier to use in the real world than preference optimization methods, as the kind of data it needs is far more abundant. As an example, every retail company has a lot of customer interaction data and whether that interaction was successful (e.g., purchase made) or unsuccessful (e.g., no purchase made). However, they have little to no counterfactual data (i.e., what would have made an unsuccessful customer interaction y_l into a successful one y_w). Fig 31 shows a high-level comparison between KTO and the other alignment approaches discussed above.

Fig. 31: LLM alignment involves supervised finetuning followed by optimizing a human-centered loss (HALO). However, the paired preferences that existing approaches need are hard to obtain. In contrast, KTO uses a far more abundant kind of data, making it much easier to use in the real world. Courtesy of [136].

H. Decoding Strategies

Decoding refers to the process of text generation using pre-trained LLMs. Given an input prompt, the tokenizer translates each token in the input text into a corresponding token ID. Then, the language model uses these token IDs as input and predicts the next most likely token (or a sequence of tokens). Finally, the model generates logits, which are converted to probabilities using a softmax function. Different decoding strategies have been proposed; some of the most popular ones are greedy search, beam search, and sampling techniques such as top-k and top-p (nucleus) sampling.

1) Greedy Search: Greedy search takes the most probable token at each step as the next token in the sequence, discarding all other potential options. As you can imagine, this is a simple approach that can lose a lot of temporal consistency and coherency. It only considers the most probable token at each step, without considering the overall effect on the sequence. This property makes it fast, but it also means that it can miss out on better sequences that might have appeared with slightly less probable next tokens.

2) Beam Search: Unlike greedy search, which only considers the next most probable token, beam search takes into account the N most likely tokens, where N denotes the number of beams. This procedure is repeated until a predefined maximum sequence length is reached or an end-of-sequence token appears. At this point, the sequence of tokens (AKA the "beam") with the highest overall score is chosen as the output. For example, for a beam size of 2 and a maximum length of 5, beam search needs to keep track of 2^5 = 32 possible sequences. So it is more computationally intensive than greedy search.

3) Top-k Sampling: Top-k sampling is a technique that uses the probability distribution generated by the language model to select a token randomly from the k most likely options.

Suppose we have 6 tokens (A, B, C, D, E, F), k = 2, P(A) = 30%, P(B) = 20%, and P(C) = P(D) = P(E) = P(F) = 12.5%. In top-k sampling, tokens C, D, E, and F are disregarded, and after renormalization the model outputs A 60% of the time and B 40% of the time. This approach ensures that we prioritize the most probable tokens while introducing an element of randomness in the selection process.

The randomness is usually introduced via the concept of temperature. The temperature T is a parameter (typically between 0 and 1 in this formulation) that affects the probabilities generated by the softmax function. In practice, it simply consists of dividing the input logits by the temperature value:

softmax(x_i) = e^{x_i / T} / \sum_j e^{x_j / T}    (3)

A low temperature sharpens the probability distribution, making the most likely tokens even more influential, and is commonly used in text generation to control the level of "creativity" of the output, while a temperature close to 1 leaves the distribution, and hence the chance of sampling less likely tokens, largely unchanged. Top-k is a creative way of sampling, and can be used along with beam search. The sequence chosen by top-k sampling may not be the sequence with the highest probability in beam search. But it is important to remember that the highest scores do not always lead to more realistic or meaningful sequences.
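The following minimal NumPy sketch ties these pieces together: it applies a temperature to raw logits, keeps only the k most likely tokens, renormalizes, and samples. The sample_top_p helper anticipates the nucleus sampling variant described in the next subsection; the toy logits and default parameters are illustrative assumptions:

import numpy as np

def sample_top_k(logits, k=2, temperature=0.7, rng=np.random.default_rng(0)):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                          # softmax with temperature, as in Eq. (3)
    top = np.argsort(probs)[::-1][:k]             # keep the k most likely tokens
    p = probs[top] / probs[top].sum()             # renormalize over the kept tokens
    return rng.choice(top, p=p)

def sample_top_p(logits, p=0.9, temperature=0.7, rng=np.random.default_rng(0)):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    nucleus = order[: np.searchsorted(cumulative, p) + 1]   # smallest prefix whose mass exceeds p
    q = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=q)

logits = np.array([2.0, 1.5, 0.2, 0.1, 0.1, 0.0])   # toy scores for tokens A..F
print(sample_top_k(logits), sample_top_p(logits))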
4) Top-p Sampling: Top-p sampling, also known as nucleus sampling, takes a slightly different approach from top-k sampling. Instead of selecting the top k most probable tokens, nucleus sampling chooses a cutoff value p such that the sum of the probabilities of the selected tokens exceeds p. This forms a "nucleus" of tokens from which to randomly choose the next token. In other words, in top-p sampling the language model examines the most probable tokens in descending order and keeps adding them to the list until the sum of probabilities surpasses the threshold p. As you can imagine, this can be better, especially for scenarios in which the top-k tokens do not have a large probability mass. Unlike top-k sampling, the number of tokens included in nucleus sampling is not fixed. This variability often results in a more diverse and creative output, making nucleus sampling popular for text generation related tasks.

I. Cost-Effective Training/Inference/Adaptation/Compression

In this part, we review some of the popular approaches used for more cost-friendly (and compute-friendly) training and usage of LLMs.

1) Optimized Training: There are many frameworks developed for optimized training of LLMs; here we introduce some of the prominent ones.

ZeRO: In [140], Rajbhandari et al. developed a novel solution, Zero Redundancy Optimizer (ZeRO), to optimize memory, vastly improving the training speed of LLMs while increasing the model size that can be efficiently trained. ZeRO eliminates memory redundancies in data- and model-parallel training while retaining low communication volume and high computational granularity, allowing one to scale the model size in proportion to the number of devices with sustained high efficiency.

RWKV: In [141], Peng et al. proposed a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs. Their approach leverages a linear attention mechanism and allows them to formulate the model as either a Transformer or an RNN, which parallelizes computations during training and maintains constant computational and memory complexity during inference, leading to the first non-transformer architecture to be scaled to tens of billions of parameters. The RWKV architecture is shown in Fig 32, and a time complexity comparison of RWKV with different Transformers is provided in Fig 33.

Fig. 32: RWKV architecture. Courtesy of [141].

Fig. 33: Time complexity comparison of RWKV with different Transformers. Here T denotes the sequence length, d the feature dimension, and c is MEGA's chunk size of quadratic attention. Courtesy of [141].
2) Low-Rank Adaptation (LoRA): Low-Rank Adaptation is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It is based on the crucial insight that the difference between the fine-tuned weights for a specialized task and the initial pre-trained weights often exhibits "low intrinsic rank", meaning that it can be approximated well by a low-rank matrix [142].

Training with LoRA is much faster and more memory-efficient, and it produces smaller model weights (a few hundred MBs) that are easier to store and share. One property of low-rank matrices is that they can be represented as the product of two smaller matrices. This realization leads to the hypothesis that the delta between fine-tuned weights and initial pre-trained weights can be represented as the matrix product of two much smaller matrices. By focusing on updating these two smaller matrices rather than the entire original weight matrix, computational efficiency can be substantially improved.

Specifically, for a pre-trained weight matrix W_0 ∈ R^{d×k}, LoRA constrains its update by representing the latter with a low-rank decomposition W_0 + ∆W = W_0 + BA, where B ∈ R^{d×r}, A ∈ R^{r×k}, and the rank r ≪ min(d, k). During training, W_0 is frozen and does not receive gradient updates, while A and B contain the trainable parameters. It is worth mentioning that both W_0 and ∆W = BA are multiplied with the same input, and their respective output vectors are summed coordinate-wise. For h = W_0 x, the modified forward pass yields h = W_0 x + ∆W x = W_0 x + BAx. Usually a random Gaussian initialization is used for A, and zero initialization for B, so ∆W = BA is zero at the beginning of training. ∆W x is then scaled by α/r, where α is a constant in r. This reparametrization is illustrated in Figure 34.

Fig. 34: An illustration of LoRA reparametrization. Only A and B are trained during this process. Courtesy of [142].

It is worth mentioning that LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. In the Transformer architecture, there are four weight matrices in the self-attention module (W_q, W_k, W_v, W_o), and two in the MLP module. Most of the time, LoRA is focused on adapting the attention weights only for downstream tasks, and freezes the MLP modules, so they are not trained in downstream tasks, both for simplicity and parameter-efficiency.
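The forward pass above can be sketched directly in NumPy: the frozen weight W_0 is applied as usual, and the trainable low-rank pair (A, B) adds a scaled correction. The dimensions, rank, and alpha below are placeholder choices for illustration:

import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 16, 16, 4, 8

W0 = rng.normal(size=(d, k))                 # frozen pre-trained weight
A = rng.normal(scale=0.01, size=(r, k))      # trainable, Gaussian-initialized
B = np.zeros((d, r))                         # trainable, zero-initialized, so delta W = BA starts at 0

def lora_forward(x):
    # x: (k,) input vector; only A and B would receive gradients during fine-tuning.
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(k,))
print(np.allclose(lora_forward(x), W0 @ x))   # True before training, since B is zero

Because only A and B (r·(d + k) values) are updated, the number of trainable parameters is a small fraction of the d·k entries of the original matrix.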
3) Knowledge Distillation: Knowledge distillation is the process of learning from a larger model [143]. Earlier releases of best-performing models have proven that this approach is very useful even when it is used in an API distillation setting. It is also referred to as an approach to distill the knowledge of not just a single model but in fact multiple models into a smaller one. Creating smaller models by this approach yields smaller model sizes that can be used even on edge devices. Knowledge distillation, as shown in Fig 35, illustrates a general setup of this training scheme.

Fig. 35: A generic knowledge distillation framework with student and teacher. Courtesy of [144].

Knowledge can be transferred by different forms of learning: response distillation, feature distillation, and API distillation. Response distillation is concerned only with the outputs of the teacher model and tries to teach the student model how to perform exactly, or at least similarly (in the sense of prediction), as the teacher. Feature distillation uses not only the last layer but intermediate layers as well, to create a better inner representation for the student model. This helps the smaller model to have a similar representation as the teacher model.

API distillation is the process of using an API (typically from an LLM provider such as OpenAI) to train smaller models. In the case of LLMs, it is used to train the model from the direct output of the larger model, which makes it very similar to response distillation. Many concerns are raised by this type of distillation because, in cases where the model itself is not openly available, a (usually) paid API is exposed to end users. Moreover, while users pay for each call, how the predictions can be used is limited; for example, OpenAI prohibits usage of its API to create LLMs that will later be used to compete with it. The main value in such a case is the training data.

4) Quantization: Deep learning, at its core, is a set of mathematical functions applied to matrices, with a specific precision for the model weights. Reducing the precision of the weights can be used to reduce the size of the model and also make it faster; for example, Int-8 operations are faster than Float-32 operations. This process, which is called quantization, can be applied in different phases. The main approaches for model quantization can be categorized as post-training quantization and quantization-aware training. Post-training quantization is concerned with quantizing trained models and comes in two well-known flavors: dynamic and static. Dynamic post-training quantization computes the range of quantization at runtime and is slower compared to static. Quantization-aware training adds quantization criteria into training, so that a quantized model is trained and optimized during the training process. This approach ensures that the end model will have good performance and also does not need to be quantized after training.
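A minimal sketch of symmetric, per-tensor post-training quantization to Int-8 is shown below: weights are mapped to integers with a single scale factor and mapped back at use time. Real systems typically use per-channel scales, calibration data, and activation quantization, so this is illustrative only:

import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0               # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(q.dtype, q.nbytes / w.nbytes, round(float(error), 5))   # int8, 0.25x the memory, small reconstruction error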
IV. HOW LLMS ARE USED AND AUGMENTED

Once the LLMs are trained, we can use them to generate desired outputs for a variety of tasks. LLMs can be used directly through basic prompting. However, in order to exploit their full potential or to address some of their shortcomings, we need to augment the models through some external means. In this section we first provide a brief overview of the main shortcomings of LLMs, with a deeper look at the issue of hallucination. We then describe how prompting and some augmentation approaches can not only address those limitations but also be used to augment the capabilities of LLMs, going as far as turning an LLM into a full-blown AI agent with the ability to interface with the external world.

Fig. 36: How LLMs Are Used and Augmented.

A. LLM limitations

It is important to remember that LLMs are trained to predict a token. While fine-tuning and alignment improve their performance and add different dimensions to their abilities, there are still some important limitations that come up, particularly if they are used naively. Some of them include the following:

• They don't have state/memory. LLMs on their own cannot remember even what was sent to them in the previous prompt. That is an important limitation for many of the use cases that require some form of state.

• They are stochastic/probabilistic. If you send the same prompt to an LLM several times, you are likely to get different responses. While there are parameters, in particular the temperature, to limit the variability in the response, this is an inherent property of their training that can create issues.

• They have stale information and, on their own, don't have access to external data. An LLM on its own does not even know about the current time or day, and does not have access to any information that was not present in its training set.

• They are generally very large. This means that many costly GPU machines are needed for training and serving. In some cases, the largest models have poor SLAs, particularly in terms of latency.

• They hallucinate. LLMs do not have a notion of "truth" and they have usually been trained on a mix of good and bad content. They can produce very plausible but untruthful answers.

While the previous limitations can all become important for some applications, it is worth diving a bit into the last one, hallucinations, since it has gathered a lot of interest over the past few months and it has also sparked many of the prompt approaches and LLM augmentation methods we will later describe.

Hallucination: In the realm of Large Language Models (LLMs), the phenomenon of "hallucinations" has garnered significant attention. Defined in the literature, notably in the "Survey of Hallucination in Natural Language Generation" paper [145], hallucination in an LLM is characterized as "the generation of content that is nonsensical or unfaithful to the provided source." This terminology, although rooted in psychological parlance, has been appropriated within the field of artificial intelligence.

Hallucinations in LLMs can be broadly categorized into two types:

1) Intrinsic Hallucinations: These directly conflict with the source material, introducing factual inaccuracies or logical inconsistencies.

2) Extrinsic Hallucinations: These, while not contradicting, are unverifiable against the source, encompassing speculative or unconfirmable elements.

The definition of 'source' in LLM contexts varies with the task. In dialogue-based tasks, it refers to 'world knowledge', whereas in text summarization, it pertains to the input text itself. This distinction plays a crucial role in evaluating and interpreting hallucinations. The impact of hallucinations is also highly context-dependent. For instance, in creative endeavors like poem writing, hallucinations might be deemed acceptable or even beneficial.

LLMs, trained on diverse datasets including the internet, books, and Wikipedia, generate text based on probabilistic models without an inherent understanding of truth or falsity. Recent advancements like instruction tuning and Reinforcement Learning from Human Feedback (RLHF) have attempted to steer LLMs towards more factual outputs, but the fundamental probabilistic nature and its inherent limitations remain. A recent study, "Sources of Hallucination by Large Language Models on Inference Tasks" [146], highlights two key aspects contributing to hallucinations in LLMs: the veracity prior and the relative frequency heuristic, underscoring the complexities inherent in LLM training and output generation.

Effective automated measurement of hallucinations in LLMs requires a combination of statistical and model-based metrics.
Statistical Metrics:

• Metrics like ROUGE [147] and BLEU [148] are common for assessing text similarity, focusing on intrinsic hallucinations.

• Advanced metrics such as PARENT [149], PARENT-T [150], and Knowledge F1 [151] are utilized when structured knowledge sources are available. These metrics, while effective, have limitations in capturing syntactic and semantic nuances.

Model-Based Metrics:

• IE-Based Metrics: Utilize Information Extraction models to simplify knowledge into relational tuples, then compare these with the source.

• QA-Based Metrics: Assess the overlap between generated content and the source through a question-answering framework (see [152]).

• NLI-Based Metrics: Use Natural Language Inference datasets to evaluate the truthfulness of a generated hypothesis based on a given premise (see [153]).

• Faithfulness Classification Metrics: Offer a refined assessment by creating task-specific datasets for a nuanced evaluation (see [154]).

Despite advances in automated metrics, human judgment remains a vital piece. It typically involves two methodologies:
1) Scoring: Human evaluators rate the level of hallucination within a predefined scale.

2) Comparative Analysis: Evaluators compare generated content against baseline or ground-truth references, adding an essential layer of subjective assessment.

FactScore [155] is a recent example of a metric that can be used both for human and model-based evaluation. The metric breaks an LLM generation into "atomic facts". The final score is computed as the sum of the accuracy of each atomic fact, giving each of them equal weight. Accuracy is a binary number that simply states whether the atomic fact is supported by the source. The authors implement different automation strategies that use LLMs to estimate this metric.

Finally, mitigating hallucinations in LLMs is a multifaceted challenge, requiring tailored strategies to suit various applications. Those include:

• Product Design and User Interaction Strategies, such as use-case design, structuring the input/output, or providing mechanisms for user feedback.

• Data Management and Continuous Improvement. Maintaining and analyzing a tracking set of hallucinations is essential for ongoing model improvement.

• Prompt Engineering and Metaprompt Design. Many of the advanced prompt techniques described in IV-B, such as Retrieval Augmented Generation, directly address hallucination risks.

• Model Selection and Configuration for Hallucination Mitigation. For example, larger models with lower temperature settings usually perform better. Also, techniques such as RLHF or domain-specific fine-tuning can mitigate hallucination risks.

B. Using LLMs: Prompt Design and Engineering

A prompt in generative AI models is the textual input provided by users to guide the model's output. This can range from simple questions to detailed descriptions or specific tasks. Prompts generally consist of instructions, questions, input data, and examples. In practice, to elicit a desired response from an AI model, a prompt must contain either instructions or questions, with the other elements being optional. Advanced prompts involve more complex structures, such as "chain of thought" prompting, where the model is guided to follow a logical reasoning process to arrive at an answer.
Prompt engineering is a rapidly evolving discipline that shapes the interactions and outputs of LLMs and other generative AI models. The essence of prompt engineering lies in crafting the optimal prompt to achieve a specific goal with a generative model. This process is not only about instructing the model but also involves some understanding of the model's capabilities and limitations, and the context within which it operates.

Prompt engineering transcends the mere construction of prompts; it requires a blend of domain knowledge, understanding of the AI model, and a methodical approach to tailor prompts for different contexts. This might involve creating templates that can be programmatically modified based on a given dataset or context. For example, generating personalized responses based on user data might use a template that is dynamically filled with relevant user information.

Furthermore, prompt engineering is an iterative and exploratory process, akin to traditional machine learning practices such as model evaluation or hyperparameter tuning. The rapid growth of this field suggests its potential to revolutionize certain aspects of machine learning, moving beyond traditional methods like feature or architecture engineering. On the other hand, traditional engineering practices such as version control and regression testing need to be adapted to this new paradigm, just like they were adapted to other machine learning approaches [156].

In the following paragraphs we detail some of the most interesting and popular prompt engineering approaches.

1) Chain of Thought (CoT): The Chain of Thought (CoT) technique, initially described in the paper "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" [34] by Google researchers, represents a pivotal advancement in prompt engineering for Large Language Models (LLMs). This approach hinges on the understanding that LLMs, while proficient in token prediction, are not inherently designed for explicit reasoning. CoT addresses this by guiding the model through essential reasoning steps.

CoT is based on making the implicit reasoning process of LLMs explicit. By outlining the steps required for reasoning, the model is directed closer to a logical and reasoned output, especially in scenarios demanding more than simple information retrieval or pattern recognition.

CoT prompting manifests in two primary forms:

1) Zero-Shot CoT: This form involves instructing the LLM to "think step by step", prompting it to deconstruct the problem and articulate each stage of reasoning.

2) Manual CoT: A more complex variant, it requires providing step-by-step reasoning examples as templates for the model. While yielding more effective results, it poses challenges in scalability and maintenance.

Manual CoT is more effective than zero-shot. However, the effectiveness of this example-based CoT depends on the choice of diverse examples, and constructing such examples of step-by-step reasoning by hand is hard and error prone. That is where automatic CoT [157] comes into play.
2) Tree of Thought (ToT): The Tree of Thought (ToT) [158] prompting technique is inspired by the concept of considering various alternative solutions or thought processes before converging on the most plausible one. ToT is based on the idea of branching out into multiple "thought trees" where each branch represents a different line of reasoning. This method allows the LLM to explore various possibilities and hypotheses, much like human cognitive processes where multiple scenarios are considered before determining the most likely one.

A critical aspect of ToT is the evaluation of these reasoning paths. As the LLM generates different branches of thought, each is assessed for its validity and relevance to the query. This process involves real-time analysis and comparison of the branches, leading to a selection of the most coherent and logical outcome.

ToT is particularly useful in complex problem-solving scenarios where a single line of reasoning might not suffice. It allows LLMs to mimic a more human-like problem-solving approach, considering a range of possibilities before arriving at a conclusion. This technique enhances the model's ability to handle ambiguity, complexity, and nuanced tasks, making it a valuable tool in advanced AI applications.

3) Self-Consistency: Self-Consistency [159] utilizes an ensemble-based method, where the LLM is prompted to generate multiple responses to the same query. The consistency among these responses serves as an indicator of their accuracy and reliability.

The Self-Consistency approach is grounded in the principle that if an LLM generates multiple, similar responses to the same prompt, it is more likely that the response is accurate. This method involves asking the LLM to tackle a query multiple times, each time analyzing the responses for consistency. This technique is especially useful in scenarios where factual accuracy and precision are paramount.

The consistency of responses can be measured using various methods. One common approach is to analyze the overlap in the content of the responses. Other methods may include comparing the semantic similarity of responses or employing more sophisticated techniques like BERT-scores or n-gram overlaps. These measures help in quantifying the level of agreement among the responses generated by the LLM.

Self-Consistency has significant applications in fields where the veracity of information is critical. It is particularly relevant in scenarios like fact-checking, where ensuring the accuracy of information provided by AI models is essential. By employing this technique, prompt engineers can enhance the trustworthiness of LLMs, making them more reliable for tasks that require high levels of factual accuracy.
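A simple way to operationalize self-consistency is to sample several chain-of-thought answers and take a majority vote over the final answers. The sketch below assumes a hypothetical generate(prompt, temperature) function that wraps whatever LLM API is being used; the prompt wording and number of samples are illustrative choices:

from collections import Counter

def generate(prompt, temperature=0.7):
    # Hypothetical wrapper around an LLM call; replace with a real API or local model.
    raise NotImplementedError

def self_consistent_answer(question, n_samples=5):
    prompt = f"{question}\nLet's think step by step, then give the final answer after 'Answer:'."
    answers = []
    for _ in range(n_samples):
        completion = generate(prompt, temperature=0.7)       # sampling keeps the reasoning paths diverse
        answers.append(completion.split("Answer:")[-1].strip())
    counts = Counter(answers)
    best, votes = counts.most_common(1)[0]
    return best, votes / n_samples                           # the answer and its agreement ratio

# Example (requires a real generate() implementation):
# print(self_consistent_answer("If a train travels 60 km in 1.5 hours, what is its average speed?"))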
4) Reflection: Reflection [160] involves prompting LLMs to assess and potentially revise their own outputs based on reasoning about the correctness and coherence of their responses. The concept of Reflection centers on the ability of LLMs to engage in a form of self-evaluation. After generating an initial response, the model is prompted to reflect on its own output, considering factors like factual accuracy, logical consistency, and relevance. This introspective process can lead to the generation of revised or improved responses.

A key aspect of Reflection is the LLM's capacity for self-editing. By evaluating its initial response, the model can identify potential errors or areas of improvement. This iterative process of generation, reflection, and revision enables the LLM to refine its output, enhancing the overall quality and reliability of its responses.

5) ExpertPrompting: ExpertPrompting [161] enhances the capabilities of Large Language Models (LLMs) by simulating the responses of experts in various fields. This method involves prompting the LLMs to assume the role of an expert and respond accordingly, providing high-quality, informed answers. A key strategy within ExpertPrompting is the multi-expert approach: the LLM is prompted to consider responses from multiple expert perspectives, which are then synthesized to form a comprehensive and well-rounded answer. This technique not only enhances the depth of the response but also incorporates a range of viewpoints, reflecting a more holistic understanding of the subject matter.

6) Chains: Chains refer to the method of linking multiple components in a sequence to handle complex tasks with Large Language Models (LLMs). This approach involves creating a series of interconnected steps or processes, each contributing to the final outcome. The concept of Chains is based on the idea of constructing a workflow where different stages or components are sequentially arranged. Each component in a Chain performs a specific function, and the output of one serves as the input for the next. This end-to-end arrangement allows for more complex and nuanced processing, as each stage can be tailored to handle a specific aspect of the task. Chains can vary in complexity and structure, depending on the requirements. In "PromptChainer: Chaining Large Language Model Prompts through Visual Programming" [162], the authors not only describe the main challenges in designing chains, but also describe a visual tool to support those tasks.
7) Rails: Rails in advanced prompt engineering refer to a method of guiding and controlling the output of Large Language Models (LLMs) through predefined rules or templates. This approach is designed to ensure that the model's responses adhere to certain standards or criteria, enhancing the relevance, safety, and accuracy of the output. The concept of Rails involves setting up a framework or a set of guidelines that the LLM must follow while generating responses. These guidelines are typically defined using a modeling language or templates known as Canonical Forms, which standardize the way natural language sentences are structured and delivered.

Rails can be designed for various purposes, depending on the specific needs of the application:

• Topical Rails: Ensure that the LLM sticks to a particular topic or domain.

• Fact-Checking Rails: Aimed at minimizing the generation of false or misleading information.

• Jailbreaking Rails: Prevent the LLM from generating responses that attempt to bypass its own operational constraints or guidelines.

8) Automatic Prompt Engineering (APE): Automatic Prompt Engineering (APE) [163] focuses on automating the process of prompt creation for Large Language Models (LLMs). APE seeks to streamline and optimize the prompt design process, leveraging the capabilities of LLMs themselves to generate and evaluate prompts. APE involves using LLMs in a self-referential manner, where the model is employed to generate, score, and refine prompts. This recursive use of LLMs enables the creation of high-quality prompts that are more likely to elicit the desired response or outcome.

The methodology of APE can be broken down into several key steps:

• Prompt Generation: The LLM generates a range of potential prompts based on a given task or objective.

• Prompt Scoring: Each generated prompt is then evaluated for its effectiveness, often using criteria like clarity, specificity, and likelihood of eliciting the desired response.

• Refinement and Iteration: Based on these evaluations, prompts can be refined and iterated upon, further enhancing their quality and effectiveness.
s queries to retrieve relevant information. This method con- • Topical Rails: Ensure that the LLM sticks to a trastswithtraditionalretrieval-augmentedmodelsthattypically particular topic or domain. retrieve information once and then proceed with generation. In • Fact-Checking Rails: Aimed at minimizing the gen- FLARE, this process is dynamic and ongoing throughout the eration of false or misleading information. generation phase. In FLARE, each sentence or segment gener- ated by the LLM is evaluated for confidence. If the confidence • Jailbreaking Rails: Prevent the LLM from generating level is below a certain threshold, the model uses the generated responses that attempt to bypass its own operational content as a query to retrieve relevant information, which is constraints or guidelines. then used to regenerate or refine the sentence. This iterative Fig. 37: An example of synthesizing RAG with LLMs for question answering application [166]. examples, the LLM decides to call an external Q&A tool, a calculator, and a Wikipedia Search Engine More recently, researchers at Berkeley have trained a new LLM called Gorilla [67] that beats GPT-4 at the use of APIs, a specific but quite general tool. a) Tool-aware prompting techniques: Similarly to what was described with RAG, several tool-aware prompting ap- proaches have been developed to make usage of tools more scalable.ApopulartechniqueisthesocalledAutomaticMulti- step Reasoning and Tool-use (ART). Fig. 38: This is one example of synthesizing the KG as a Automatic Multi-step Reasoning and Tool-use (ART) [170] retriever with LLMs [167]. is a prompt engineering technique that combines automated chain of thought prompting with the use of external tools. ART represents a convergence of multiple prompt engineering process ensures that each part of the response is informed by strategies, enhancing the ability of Large Language Models the most relevant and current information available. (LLMs) to handle complex tasks that require both reasoning and interaction with external data sources or tools. FormoredetailsonRAGframeworkanditsrelevantworks, ART involves a systematic approach where, given a task we refer the readers to this survey of retrieval augmented and input, the system first identifies similar tasks from a task generations [165]. library. These tasks are then used as examples in the prompt, D. Using External Tools
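A minimal sketch of the confidence-triggered loop described for FLARE, assuming a hypothetical llm_with_confidence() helper that returns a candidate sentence together with an average token probability, and a retrieval backend as in the RAG sketch above; the threshold, prompt format, and stopping rule are illustrative assumptions rather than the exact settings of [168].

```python
def llm_with_confidence(prompt: str) -> tuple[str, float]:
    """Hypothetical helper: returns (candidate next sentence, average token probability)."""
    raise NotImplementedError

def flare_answer(question: str, store, threshold: float = 0.6, max_sentences: int = 10) -> str:
    answer = ""
    for _ in range(max_sentences):
        # Actively predict the next sentence of the answer, together with its confidence.
        sentence, confidence = llm_with_confidence(
            f"Question: {question}\nAnswer so far: {answer}\nNext sentence:")
        if confidence < threshold:
            # Low confidence: use the tentative sentence as a retrieval query, then
            # regenerate the sentence grounded in the retrieved evidence.
            evidence = "\n".join(doc.text for doc in store.search(sentence, k=3))
            sentence, _ = llm_with_confidence(
                f"Evidence:\n{evidence}\nQuestion: {question}\n"
                f"Answer so far: {answer}\nNext sentence:")
        if not sentence.strip():
            break  # treat an empty sentence as the end of the answer
        answer = (answer + " " + sentence).strip()
    return answer
```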
or tools. FormoredetailsonRAGframeworkanditsrelevantworks, ART involves a systematic approach where, given a task we refer the readers to this survey of retrieval augmented and input, the system first identifies similar tasks from a task generations [165]. library. These tasks are then used as examples in the prompt, D. Using External Tools guiding the LLM on how to approach and execute the current task. This method is particularly effective when tasks require a Retrieving information from an external knowledge source combination of internal reasoning and external data processing asdescribedaboveisonlyoneofthepotentialwaystoaugment or retrieval. an LLM. More generally, an LLM can access any number of external tools (e.g. an API to a service) to augment its E. LLM Agents functionality. In that regards, RAG can be seen as a specific The idea of AI agents has been well-explored in the history instance of the broader category of the so called ”tools”. of AI. An agent is typically an autonomous entity that can Tools in this context are external functions or services that perceive the environment using its sensors, make a judgment LLMs can utilize. These tools extend the range of tasks an based on the state it currently is, and accordingly act based on LLM can perform, from basic information retrieval to complex the actions that are available to it. interactions with external databases or APIs. In the context of LLMs, an agent refers to a system based In the paper ”Toolformer: Language Models Can Teach on a specialized instantiation of an (augmented) LLM that Themselves to Use Tools” [169], the authors go beyond simple is capable of performing specific tasks autonomously. These tool usage by training an LLM to decide what tool to use agents are designed to interact with users and environment to when, and even what parameters the API needs. Tools include make decisions based on the input and the intended goal of two different search engines, or a calculator. In the following the interaction. Agents are based on LLMs equipped with theability to access and use tools, and to make decisions based on or uncertain, allowing the LLM-based agent to maintain a high the given input. They are designed to handle tasks that require level of performance and reliability. a degree of autonomy and decision-making, typically beyond Reason and Act (ReAct)[176] prompts LLMs to generate simple response generation. not only verbal reasoning but also actionable steps, thus The functionalities of a generic LLM-based agent include: enhancing the model’s dynamic problem-solving capabilities. ReAct is grounded in the principle of integrating reasoning • Tool Access and Utilization: Agents have the capabil- with action. In this approach, the LLM is prompted to alternate ity to access external tools and services, and to utilize between generating reasoning traces (explanations) and taking these resources effectively to accomplish tasks. actions (steps or commands) in an interleaved manner. This • Decision Making: They can make decisions based on approachallowsthemodeltodynamicallyreasonaboutaprob- the input, context, and the tools available to them, lem, and pr
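To make the idea of tool-augmented LLMs concrete, here is a minimal sketch in which the model is shown a tool registry, asked to emit a JSON tool call, and then given the tool's output to compose its final answer. The llm() helper, the JSON convention, and the toy tools are assumptions for illustration; this is not the Toolformer training procedure of [169].

```python
import json

def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

# A registry of callable tools the model is allowed to use (toy implementations).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only: never eval untrusted input
    "wiki_search": lambda query: "top snippets for: " + query,         # stand-in for a real search API
}

def answer_with_tools(question: str) -> str:
    tool_list = "\n".join(f"- {name}" for name in TOOLS)
    # Ask the model whether a tool is needed, and with which input, as JSON.
    decision = json.loads(llm(
        f"You may call one of these tools:\n{tool_list}\n"
        f'Reply as JSON: {{"tool": <name or null>, "input": <string>}}\n'
        f"Question: {question}"))
    if decision.get("tool") in TOOLS:
        observation = TOOLS[decision["tool"]](decision["input"])
        # Feed the tool output back so the model can compose the final answer.
        return llm(f"Question: {question}\nTool result: {observation}\nFinal answer:")
    return llm(f"Question: {question}\nFinal answer:")
```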
ernate ity to access external tools and services, and to utilize between generating reasoning traces (explanations) and taking these resources effectively to accomplish tasks. actions (steps or commands) in an interleaved manner. This • Decision Making: They can make decisions based on approachallowsthemodeltodynamicallyreasonaboutaprob- the input, context, and the tools available to them, lem, and propose and take concrete actions simultaneously. often employing complex reasoning processes. Dialog-Enabled Resolving Agents (DERA) [177] are spe- cialized AI agents that can engage in dialogue, resolve queries, As an example, an LLM that has access to a function (or and make decisions based on interactive exchanges. DERA an API) such as weather API, can answer any question related is developed based on the idea of utilizing multiple agents to the weather of the specific place. In other words, it can use within a dialog context, each with specific roles and functions. APIs to solve problems. Furthermore, if that LLM has access These agents can include Researchers, who gather and analyze to an API that allows to make purchases, a purchasing agent information, and Deciders, who make final judgments based can be built to not only have capabilities to read information on the information provided. This division of roles allows for from the external world, but also act on it [171]. a well-organized and efficient approach to problem-solving Fig. 40 shows another example of LLM-based agents for and decision-making. DERA is particularly advantageous in conversational information seeking [36], where an LLM is scenarios requiring complex decision-making and problem- augmented with a set of plug-and-play modules, including solving, such as those in medical diagnostics or customer ser- a working memory that tracks the dialog state, a policy that vice. The collaborative and interactive nature of DERA agents makes an execution plan for the task and selects next system allows them to handle intricate queries with a level of depth action, an action executor that performs an action selected by and nuance that single-agent systems might struggle with. the policy (consolidating evidence from external knowledge, Moreover, this approach aligns well with human decision- or prompting the LLM to generate responses), and a utility making processes, making AI reasoning more relatable and that accesses the alignment of the LLM’s responses with user trustworthy. expectations or specific business requirements, and generate V. POPULAR DATASETS FOR LLMS feedback to improve agent performance. FormoredetailsonLLM-basedAIagentsseerecentsurvey Large language models exhibit promising accomplish- [172], [173], [174]. ments, but the main question that arises is how effectively they function and how their performance can be assessed in a) Prompt engineering techniques for agents: Like specific tasks or applications. RAG and Tools, prompt engineering techniques that specif- The evaluation of LLMs poses particular challenges due ically address the needs of LLM-based agents have been to the evolving landscape of their applications. The original developed. Three such examples are Reasoning without Ob-
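A rough sketch of the interleaved thought, action, observation loop that ReAct prompts for; the textual action format, the finish convention, and the step limit are simplifying assumptions, not the exact protocol of [176].

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def react(question: str, tools: dict, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # One step = one reasoning trace plus one action, in a simple textual convention.
        step = llm(transcript +
                   "Respond with 'Thought: ... Action: tool_name[input]' "
                   "or 'Thought: ... Action: finish[final answer]'.")
        transcript += step + "\n"
        action = step.split("Action:", 1)[1].strip() if "Action:" in step else ""
        if action.startswith("finish["):
            return action[len("finish["):].rstrip("]")
        if "[" not in action:
            continue  # malformed step; ask the model again on the next iteration
        # Execute the requested tool and append the observation before the next step.
        tool_name, tool_input = action.rstrip("]").split("[", 1)
        observation = tools.get(tool_name, lambda _: "unknown tool")(tool_input)
        transcript += f"Observation: {observation}\n"
    return transcript  # no final answer produced; return the full trace
```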
they function and how their performance can be assessed in a) Prompt engineering techniques for agents: Like specific tasks or applications. RAG and Tools, prompt engineering techniques that specif- The evaluation of LLMs poses particular challenges due ically address the needs of LLM-based agents have been to the evolving landscape of their applications. The original developed. Three such examples are Reasoning without Ob- intent behind developing LLMs was to boost the performance servation (ReWOO), Reason and Act (ReAct), and Dialog- of NLP tasks such as translation, summarization, question- Enabled Resolving Agents (DERA). answering, and so on [178]. However, it is evident today Reasoning without Observation (ReWOO) [175] aims to that these models are finding utility across diverse domains decouplereasoningfromdirectobservations.ReWOOoperates including code generation and finance. Moreover, the eval- byenablingLLMstoformulatecomprehensivereasoningplans uation of LLMs encompasses several critical considerations or meta-plans without immediate reliance on external data such as fairness and bias, fact-checking, and reasoning. In or tools. This approach allows the agent to create a struc- this section, we outline the commonly used benchmarks for tured framework for reasoning that can be executed once the assessing LLMs. These benchmarks are categorized based on necessary data or observations are available. In ReWOO, the training or evaluating the LLM Capabilities. LLM initially develops a plan (a series of steps) that outlines A. Datasets for Basic Tasks: language model- how to approach and solve a given problem. This meta- ing/understanding/generation planning phase is crucial as it sets the stage for the agent to process information once it becomes available. The execution This section provides an overview of the benchmarks and phase then involves integrating actual data or observations into datasets suited to evaluate the basic abilities of LLMs. the pre-specified plan, leading to coherent and contextually relevant responses. ReWOO offers significant advantages in • Natural Questions [179] is a QA dataset that consists terms of token efficiency and robustness to tool failure. It of real anonymized, aggregated queries submitted to enables LLMs to handle tasks where immediate access to the Google search engine as questions. An annotator external data is not available, relying instead on a well- is presented with a question along with a Wikipedia structured reasoning framework. This method is particularly page from the top 5 search results, and annotates a advantageous in scenarios where data retrieval is costly, slow, longanswer(typicallyaparagraph)andashortanswer Fig. 39: HuggingGPT: An agent-based approach to use tools and planning [image courtesy of [171]] task description, a code solution, and three automated test cases. • HumanEval [182] is a dataset for code generation task. This dataset consists of 164 hand-crafted pro-
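The plan-then-execute decoupling described for ReWOO might look roughly like the sketch below: the model first writes a tool-using plan with evidence placeholders, a worker executes the steps and fills in the evidence once data is available, and a final call solves the task from the completed plan. The plan syntax and the #E placeholder convention are assumptions for illustration, not the exact scheme of [175].

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def rewoo(task: str, tools: dict) -> str:
    # 1) Planner: write the full tool-using plan up front, before seeing any observations.
    plan = llm("Write a numbered plan for the task. Each step is 'tool_name: input', and "
               "later steps may reference earlier evidence as #E1, #E2, ...\nTask: " + task)
    # 2) Worker: execute the steps once the tools/data are available, filling in evidence.
    evidence: dict[str, str] = {}
    steps = [line for line in plan.splitlines() if ":" in line]
    for i, line in enumerate(steps, start=1):
        tool_name, tool_input = line.split(":", 1)
        tool_name = tool_name.split(".")[-1].strip()          # drop the "1." numbering
        for ref, value in evidence.items():                   # substitute earlier evidence
            tool_input = tool_input.replace(ref, value)
        evidence[f"#E{i}"] = tools.get(tool_name, lambda _: "")(tool_input.strip())
    # 3) Solver: produce the final answer from the plan plus all gathered evidence.
    return llm(f"Task: {task}\nPlan:\n{plan}\nEvidence:\n{evidence}\nAnswer:")
```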
iption, a code solution, and three automated test cases. • HumanEval [182] is a dataset for code generation task. This dataset consists of 164 hand-crafted pro- gramming challenges. Each challenge is accompanied byafunctionsignature,docstring,codebody,andmul- tiple unit tests. The main intuition behind developing thisdatasetistoguaranteetheexclusionofitscontents from training datasets for code generation models. • APPS [183] is designed for code generation task focusing on the Python programming language. The APPS dataset contains a collection of232 ,444 Python programs. Each program in the dataset has an average Fig. 40: A LLM-based agent for conversational information of 18 lines of Python code. Additionally, APPS offers seeking. Courtesy of [36]. access to a repository of 10 ,000 unique programming exercises, each with text-based problem descriptions. The final aspect to highlight is that the it includes test cases. (one or more entities) if present on the page, or marks • WikiSQL[184]iscraftedforcodegenerationtaskand null if no long/short answer is present. it has 87,726 carefully labeled pairs of SQL queries • MMLU [180] is intended to evaluate the knowl- and corresponding natural language questions from edge gained in zero-shot and few-shot scenarios. That Wikipedia tables. The SQL queries comprise three means that MMLU assesses both the general knowl- subsets: test sets (17 ,284 examples), development edge and problem-solving ability of a model. It covers (9,145 examples), and training (61 ,297 examples). 57 subjects in STEM, humanities, social sciences, • TriviaQA [185] is designed for QA task. This and other areas. The benchmark varies in complexity, dataset comprises more than 650 ,000 question- ranging from elementary to advanced professional. answer-evidence triples. There are 95 ,000 question- It is worth mentioning that the main contribution of answerpairsinthisdataset,eachauthoredbytriviaen- this dataset is for multi-task language understanding, thusiasts and supported by an average of six indepen- question answering, and arithmetic reasoning.
anging from elementary to advanced professional. answer-evidence triples. There are 95 ,000 question- It is worth mentioning that the main contribution of answerpairsinthisdataset,eachauthoredbytriviaen- this dataset is for multi-task language understanding, thusiasts and supported by an average of six indepen- question answering, and arithmetic reasoning. dently sourced evidence documents. These documents • MBPP [181] stands for “Mostly Basic Python Prob- are automatically acquired from Wikipedia or broader lems” and provides a benchmark for evaluating the web search results. The dataset is categorized into performance of models designed for code generation. two segments, including those with authentic answers The benchmark encompasses 974 short Python pro- from Wikipedia and web domains, and verified sets grams including a wide range of topics, including embody the accurately answered questions along with fundamental programming concepts and standard li- their associated documents from both Wikipedia and brary usage, and more. Each challenge comprises a online. Fig. 41: Dataset applications. • RACE [186] suits for reading comprehension task. is the synthesis of RACE-M and RACE-H. This dataset is based on English tests completed by Chinese students from middle school and high school, • SQuAD [187] stands for “Stanford Question Answer- aged 12 to 18 , and it contains roughly 28 ,000 texts ing Dataset” and is a crowdsourced reading compre- and 100 ,000 questions rigorously prepared by human hension dataset based on Wikipedia articles. It has specialists, primarily English instructors. This dataset approximately 100 ,000 question-answer pairs con- contains a wide range of subjects that were purpose- nected to more than 500 articles. The answers to fully chosen to assess students’ comprehension and these questions are typically text fragments or spans reasoning abilities. This dataset is available in three taken from the corresponding reading passages. The subgroups: RACE-M, RACE-H, and RACE. RACE- questions may be unanswerable in some cases. The M refers to the middle school examinations, whereas dataset is divided into three sets: an 80% training set, RACE-H denotes the high school tests. Finally, RACE a 10% development set, and a 10% hidden test set. Fig. 42: Datasets licensed under different licenses. • BoolQ [188] is a yes/no question-answering dataset • GSM8K [190] is designed to evaluate the model’s where the goal is reading comprehension task. BoolQ abilityformulti-stepmathematicalreasoning.GSM8K includes 15 ,942 examples. Each example is a triplet includes 8.5K linguistically diverse grade school math that includes a question, a relevant paragraph, and word problems written by humans. The dataset is split the solution. Although the main intuition behind into two sets: a training set with 7.5K problems, this dataset is for reading comprehension
lreasoning.GSM8K includes 15 ,942 examples. Each example is a triplet includes 8.5K linguistically diverse grade school math that includes a question, a relevant paragraph, and word problems written by humans. The dataset is split the solution. Although the main intuition behind into two sets: a training set with 7.5K problems, this dataset is for reading comprehension, it can be and a test set with 1K problems. These problems used for reasoning, natural language inference, and need 2 to 8 steps to be solved. Solutions mainly question-answering tasks. are a series of elementary calculations using basic • MultiRC [189] is another dataset that fits reading arithmetic operations. comprehension task. MultiRC contains brief para- • MATH [191] enables to assess how well models can graphs as well as multi-sentence questions that can solve math problems. MATH dataset hast 12 , 500 be answered using the information in the paragraph. problems from high school math competitions. Each The paragraphs in this dataset come from a variety problem in the dataset has a step-by-step solution and of sources, including news, fiction, historical texts, a final answer enclosed in a box. The problems cover Wikipedia articles, discussions on society and law, a wide range of topics and have different levels of elementary school science textbooks, and 9/11 re- complexity. There are seven subjects in total. Further- ports. Each question has many response choices, with more, the difficulty of each problem is rated based one or more of them being correct. Answering the on the AoPS standards on a scale from ′1′ to ′5′. A questions requires reasoning across several sentences. ′1′ shows the easiest problems in a subject, while ′5′ MultiRC dataset encompasses around 6,000 multi- represents the most difficult. In terms of formatting, sentencequestionsgatheredfromover800paragraphs. all problems and solutions are presented using LATEX On average, each question offers about two valid and the Asymptote vector graphics language. answer alternatives out of a total of five. • HellaSwag [192] is designed to assess commonsense reasoning in LLMs. This benchmark includes 70 ,000 B. Datasets for Emergent: ICL, reasoning (CoT), instruction multiple-choice questions. Each question is derived following from one of two domains: ActivityNet or WikiHow, and presents four answer choices regarding what This section centers on the benchmarks and datasets em- might happen in the following situation. The correct ployed to evaluate the emergent abilities of LLMs. answer provides an actual statement describing the upcoming event, but the three wrong answers are C. Datasets for Augmented: using external knowledge/tools created to confuse machines.
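As a rough illustration of how accuracy is typically computed on GSM8K-style word problems, the snippet below extracts the last number mentioned in a model's answer and compares it against the reference; real evaluation harnesses normalize answers more carefully.

```python
import re

def last_number(text: str):
    # GSM8K-style answers end with a final numeric value; take the last number mentioned.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def gsm8k_accuracy(predictions: list[str], references: list[str]) -> float:
    correct = sum(last_number(p) == last_number(r)
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example: one prediction matches the reference answer, one does not.
print(gsm8k_accuracy(["... so the answer is 42.", "I think 7"],
                     ["The answer is 42", "#### 8"]))  # 0.5
```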
our answer choices regarding what This section centers on the benchmarks and datasets em- might happen in the following situation. The correct ployed to evaluate the emergent abilities of LLMs. answer provides an actual statement describing the upcoming event, but the three wrong answers are C. Datasets for Augmented: using external knowledge/tools created to confuse machines. This section focuses on datasets designed for the aug- • AI2 Reasoning Challenge (ARC) [193] is used mented abilities of LLMs. for commonsense reasoning. This benchmark encom- • HotpotQA [198] is designed to cover a diverse and passes 7,787 science examination questions. These explainable question-answering dataset that necessi- questions are in English, and most of them are set tates multi-hop reasoning. This dataset is derived from up in a multiple-choice format. The questions have the English Wikipedia. It consists of roughly 113 ,000 been divided into two groups: a Challenge Set with questions. Each question in the dataset comes with 2,590 difficult questions and an Easy Set with 5,197 two paragraphs, called gold paragraphs, from two questions. Each collection has also been pre-divided Wikipedia articles. Also, there is a list of sentences into Train, Development, and Test subsets. in those paragraphs that crowdworkers have picked as • PIQA [194] is intended to evaluate the language important for answering the question. representations on their knowledge of physical com- • ToolQA [199] is a question answering benchmark monsense. In this dataset, the focus is on everyday to evaluate LLMs’ ability to use external tools for situations with a preference for uncommon solutions. answering questions. The central task is a multiple-choice question answer- • GPT4Tools serves as an instructional dataset, gener- ing, where a question (q) is provided along with two ated by instructing advanced teachers (such as Chat- potential solutions (s1,s2) . Then, the best solution is GPT), with instructions conditioned on visual content chosen by whether a model or a human. For each and tool descriptions. This process results in the question, only one of the solutions is the correct generation of instructions related to the use of tools. answer. There are three versions of this dataset. The first • SIQA[195]providesaframeworkforevaluatingmod- version comprises 71,000 instruction-following data els’ ability for commonsense reasoning about social points utilized to fine-tune the GPT4Tools model. The situations. SIQA dataset has 38 ,000 multiple-choice next version consists of manually cleaned instruction questions designed to assess emotional and social data used for validation, covering instructions related intelligence in everyday circumstances. This dataset to the tools from the first version. The last version is covers a wide variety of social scenarios. In SIQA, cleaned instruction data used for testing and includes the potential answer
next version consists of manually cleaned instruction questions designed to assess emotional and social data used for validation, covering instructions related intelligence in everyday circumstances. This dataset to the tools from the first version. The last version is covers a wide variety of social scenarios. In SIQA, cleaned instruction data used for testing and includes the potential answers is a mixture of human-selected instructions related to some tools that are not present responses and machine-generated ones that have been in the first version. filtered through adversarial processes. VI. PROMINENT LLMS’ PERFORMANCE ON • OpenBookQA (OBQA) [196] is a new kind of BENCHMARKS question-answering dataset where answering its ques- In this section we first provide an overview of some of tions requires additional common and commonsense popular metrics used for evaluating the performance of LLMs knowledge not contained in the book and rich text under different scenarios. We then look at the performance comprehension. This dataset includes around 6,000 of prominent large language models on some of the popular multiple-choice questions. Each question is linked to datasets and benchmarks. one core fact, as well as an additional collection of over 6000 facts. The questions were developed A. Popular Metrics for Evaluating LLMs using a multi-stage crowdsourcing and expert filter- ing procedure. OpenBookQA questions are difficult Evaluating the performance of generative language models because they need multi-hop reasoning with limited depends on the underlying task they are going to be used for. background. Tasks that are mostly about selecting a choice out of given • TruthfulQA [197] is designed specifically to eval- ones (such as sentiment analysis), can be seen as simple as uate the truthfulness of language models in gen- classification and their performance can be evaluated using erating answers to questions. This dataset includes classification metrics. Metrics such as accuracy, precision, 817 questions, written by authors, from 38 different recall, F1, etc are applicable in this case. It is also important to categories, including health, law, finance, and politics. note that the answers generated by the model for specific tasks These questions are purposefully designed to chal- suchasmulti-choicequestionansweringarealwayseitherTrue lengehumanresponders,astheymaycontaincommon or False. If the answer is not in a set of options, it can be seen misunderstandings that lead to incorrect answers. as False as well. However,sometasksthatarepurelyopen-endedtextgener- • OPT-IML Bench [103] is a comprehensive bench- ationcannotbeevaluatedinthesamewayasforcategorization. mark for Instruction Meta-Learning. It covers 2000 Different metrics are required for the specific purpose of the NLP tasks from 8 existing benchmarks. The OPT-IML evaluation. Code generation is a very different case in open- Bench consists of a training set with 17.9 M examples, ended generative evaluations. The generated code must pass a dev set wi
- ationcannotbeevaluatedinthesamewayasforcategorization. mark for Instruction Meta-Learning. It covers 2000 Different metrics are required for the specific purpose of the NLP tasks from 8 existing benchmarks. The OPT-IML evaluation. Code generation is a very different case in open- Bench consists of a training set with 17.9 M examples, ended generative evaluations. The generated code must pass a dev set with 145K samples, and a test set with 321K the test suite but on the other hand, it is also important samples. to understand if a model is capable of generating different TABLE II: LLM Datasets Overview. Benchmark Name Evaluation Metric Leaderboard Source paperswithcode HumanEval PASS@k Link Link Link MBPP PASS@k, Accuracy - Link Link APPS PASS@k, Accuracy - Link Link WikiSQL Accuracy - Link Link CoNaLa BLEU Link Link CodeParrot PASS@k - Link - HellaSwag Accuracy Link Link Link AI2 ReasoningAccuracy Link Link Link Challenge (ARC) BoolQ Accuracy - Link Link MultiRC F1-score, Accuracy - Link Link CNN/Daily Mail [200] Accuracy - Link - SQuAD F1-score, EM Link Link Link RACE Accuracy - Link Link CNN/Daily Mail [201] ROUGE - Link Link Drop F1-score, EM Link Link Link QuAC F1-score, HEQ-Q, HEQ-D Link Link Link TriviaQA EM, F1-score, Accuracy Link Link Link Natural Questions EM, F1-score, Accuracy Link Link Link StrategyQA Accuracy, Recall@10, SARI Link Link Link CoQA F1-score Link Link Link XSum ROUGE - Link Link SAMSum ROUGE -
StrategyQA Accuracy, Recall@10, SARI Link Link Link CoQA F1-score Link Link Link XSum ROUGE - Link Link SAMSum ROUGE - - Link WikiSum ROUGE - Link - DialogSum ROUGE - Link Link TruthfulQA MC1 , MC2, % true, % info, BLEURT Link Link Link MMLU Accuracy Link Link Link GSM8K Accuracy Link Link Link PIQA Accuracy Link Link Link SIQA Accuracy Link Link Link OpenBookQA (OBQA) Accuracy Link Link Link HotpotQA EM, F1-score, Joint EM, Joint F1-score, Link Link Link MATH Accuracy - Link Link CommonsenseQA Accuracy Link Link Link Natural Instructions ROUGE-L, Human Link Link Link BIG-bench Accuracy, Average - Link Link ToolTalk Successrate,Precision,Recall,Incorrect - Link Link action rate, Percent of failing error types MetaTool Accuracy, Precision, Recall, F1-score - Link Link Successful Rate of Thought, Successful GPT4Tools Rate of Action, Successful Rate of Ar- - Link Link guments, Success Rate Correctness, ROUGE, Error(API Hallu- API-Bank cination, Has Exception, Invalid Input- Link Link Parameters, False API Call Format, API Call, Miss Input Parameters) Alpaca-CoT - - Link Link solutions as a code, what is the probability of selecting the correct one among them. Pass@k is a very good metric in this EM = M case. It works in this manner that given a problem, different N (5) solutions as code are generated. They are tested for correctness Human equivalence score (HEQ) on the other hand, is an using diffe
rent functionality tests. Afterward, from the n generated solutions, with c of them being correct, equation 4 provides the final value:

\text{pass@}k := \mathbb{E}_{\text{Problems}}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right] \qquad (4)

Exact match (EM) is another metric that is mostly concerned with exact matches to (pre-defined) answers. It counts a prediction as correct if it exactly matches one of the desired reference texts, token by token. In some cases it coincides with accuracy; equation 5 gives the mathematical definition, where M is the total number of correct answers and N is the total number of questions [202]:

\text{EM} = \frac{M}{N} \qquad (5)

The human equivalence score (HEQ), on the other hand, is an alternative to the F1 score [203]. HEQ-Q represents the precision of individual questions, wherein an answer is deemed correct if the model's F1 score surpasses the average human F1 score. Likewise, HEQ-D denotes the precision of each dialogue; a dialogue is deemed accurate when all questions within it meet the criteria of HEQ [182].

Evaluation of other generative tasks such as machine translation is based on metrics such as ROUGE and BLEU. These scores work well when there is a reference text as ground truth (such as a translation) and a hypothesis generated by the model, in our case the LLM. They are mostly used when the goal is to measure the similarity of the answer and the ground truth computationally, which in practice means that nothing more than n-grams is used. Metrics such as BERTScore are also useful in these cases, but they are themselves error-prone because another model is used to judge. Even today, evaluating purely generated content is very hard: no completely fitting metric has been found, and existing metrics either look for simplistic features such as n-grams and skip-grams, or are models with unknown accuracy and precision [204].

Generative evaluation metrics are another type of evaluation metric for LLMs that use a second LLM to evaluate the answer. Depending on the task, however, evaluation may or may not be possible in this way.

TABLE III: LLM categories and respective definitions.
Classification | Category | Description
Size | Small | Number of parameters ≤ 1B
Size | Medium | 1B < number of parameters ≤ 10B
Size | Large | 10B < number of parameters ≤ 100B
Size | Very Large | 100B < number of parameters
Type | Foundation model | Pretrained language model
Type | Instruction model | Pretrained and instruction fine-tuned language model
Type | Chat model | Pretrained, instruction fine-tuned, and chat fine-tuned language model
Origin | Original model | An original model released as either a foundation, instruction, or chat model
Origin | Tuned model | Fine-tuned version of an original model
Availability | Publicly available | Model and weights are available, with or without request
Availability | Publicly unavailable | Model and weights are not publicly available

TABLE IV: Different LLM categorization.
Model | Size | #Params (B) | Type | Availability | Origin
Davinci-002 | Very Large | 175 | Instruction | Unavailable | Tuned
Davinci-003 | Very Large | 175 | Instruction | Unavailable | Tuned
GPT 3.5-turbo | Large | 20 | Chat | Unavailable | Tuned
Falcon 7B | Medium | 7 | Foundation | Public | Original
Alpaca | Large | 13 | Chat | Public | Tuned
Pythia 7B | Medium | 7 | Foundation | Public | Original
Pythia 12B | Large | 12 | Foundation | Public | Original
LLAMA 7B | Medium | 7 | Chat | Public | Original
LLAMA 2 7B | Medium | 7 | Chat | Public | Tuned
LLAMA 2 7B | Medium | 7 | Foundation | Public | Original
Vicuna 13B | Large | 13 | Foundation | Public | Tuned
Vicuna 7B | Medium | 7 | Foundation | Public | Tuned
Claude | Large | 93 | Chat | Unavailable | Original
Claude 2 | Very Large | 137 | Chat | Unavailable | Original

Another classification for the LLMs we use is their primary use case. We consider each LLM to be either a Foundation model (pretrained language model with no instruction fine-tuning and no chat fine-tuning), an Instruction model (pretrained language model with only instruction fine-tuning), or a Chat model (pretrained language model with instruction and chat fine-tuning). Apart from the categorizations described above, another category is required to distinguish between original models and tuned ones; original models are those that have been released as a foundation model or a fine-tuned one.
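The two estimators in equations (4) and (5) above are small enough to implement directly; the pass@k function below follows the standard unbiased estimator, computed as a product for numerical stability.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: 1 - C(n-c, k) / C(n, k),
    given n generated samples of which c passed the unit tests."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

def exact_match(predictions: list[str], references: list[str]) -> float:
    """EM = M / N: fraction of predictions that exactly match their reference."""
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

# Example: 10 samples per problem, 3 of them correct.
print(round(pass_at_k(10, 3, 1), 3))  # 0.3
print(round(pass_at_k(10, 3, 5), 3))  # 0.917
```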
art from all the catego- Generative evaluation metrics are also another type of eval- rization described, another category is required to distinguish uation metric for LLMs that use another LLM for evaluating between original models and tuned ones. Original models are the answer. However, depending on the task itself, evaluation those that have been released as a foundation model or a fine- can be possible in this way or not. Another dependency tuned one. Tuned models are those that grasped the original that makes generative evaluation error-prone is reliance on model and tuned it with different datasets or even different the prompt itself. RAGAS is one of the good examples that training approaches. It is also good to note that original models incorporate the usage of generative evaluation. are usually foundation models that have been fine-tuned on specific datasets or even different approaches. Availability of Various benchmarks and leaderboards have been proposed the model weights regardless of the license is another category to address the most challenging question in the world of in our classification. Models that have their weights publicly large language models: Which one is better? However not available (even through request) are noted as Public models a simple answer can address this question. The answer de- while others are noted as Private. Table III shows all of these pends on various aspects of large language models. Section V definitions and abbreviations used in the rest of the article. shows the categorical presentation of different tasks and the Figure 43 illustrate these visually. most important datasets in each category. We will follow the According to the provided categorizations, we can catego- same categorization and provide a comparison based on each rize and label each notable LLM as shown in table IV. As can category. After providing comparison for each category, we be seen from this table, models categorized as very large are will provide a broad overview of aggregated performance by also unavailable as well. averaging the reported performance metric on different tasks. Evaluating different LLMs can be seen also from different B. LLMs’ Performance on Different Tasks perspectives. For example, a LLM with a drastically fewer Commonsense reasoning is one of the important capabili- number of parameters is not completely comparable to one ties each model can obtain. This capability denotes the ability with a larger number of parameters. From this perspective, we of the model to use prior knowledge in combination with will categorize LLMs in four categories as well: small (less reasoning skills. In the case of HellaSwag for example, finding than or equal to 1 billion parameters), medium (between 1 and the continuation of text is challenging because the given text 10 billion), large (between 10 and 100 billion), and very large contains a partial part of the story while the given choices (more than 100 billion). Another classification for the LLMs as continuation are tricky to select, and without having prior Fig. 43: LLM categorizations. knowledge about the world it is not possible. This specific kind From the results presented in Table V it is cl
rge (between 10 and 100 billion), and very large contains a partial part of the story while the given choices (more than 100 billion). Another classification for the LLMs as continuation are tricky to select, and without having prior Fig. 43: LLM categorizations. knowledge about the world it is not possible. This specific kind From the results presented in Table V it is clear that GPT-4 of reasoning deserves high attention because it is related to achieves best results for HellaSwag while Davinci-003 is best utilizing previous knowledge with open text-described scenes model for OBQA. It is also good to note that results for OBQA or facts. As can be seen from table V not just Unavailable are not reported for all of the models and possibly davinci-003 models but also Public ones can achieve good results on is not the best model achieving highest results on OBQA. various tests. TABLE V: Commonsense reasoning comparison. Not all models report their performance on all datasets, and Model OBQA HellaSwag because of that, the number of models for which performance Davinci-003 51 83.4 is reported in different tables varies. Falcon 7B 44.4 76.3 Alpaca 43.4 73.9 Pythia 7B 37.2 64 Pythia 12B 43.2 68.1 LLAMA 7B 42.4 73 Dolly 6B 41.2 67.6 Dolly 12B 40.4 71 TABLE VI: Symbolic reasoning comparison. Alpaca 7B 43.4 73.9 Alpaca Lora 7B 42.6 74 Model Cobjects Penguins GPT-J 6.7B 38.2 66.2 GPT-NeoX 26 33.56 LLama 7B 42.4 73 OPT 66B 31.2 28.08 LLama 13B 42.2 76.2 Bloomberg GPT 34.8 37.67 Pythia 6.7B 37.2 64 BLOOM 176B 36.8 40.41 Pythia 12B 38 67.3 PaLM 540B 38 44.5 StableLM Tuned 33.4 53.6 Gopher-280B 49.2 40.6 Koala 13B 42.8 72.6 Chinchilla-70B 59.7 48.7 Mosaic mpt-7B 42.6 76.3 PaLM 2 61.2 65.8 LLAMA 2 70B - 87.33 LLAMA 65B - 86.09 Falcon 40B - 85.3 Falcon 180B - 88.86 MPT Instruct 30B - 84.31 MPT Instruct 7B - 77.91 World knowledge is mostly about general knowledge ques- Yi 6B - 76.42 tions, for example, in Wikifact dataset questions such as ”Who Yi 34B - 85.69 GPT-4 - 95.3
Falcon 180B - 88.86 MPT Instruct 30B - 84.31 MPT Instruct 7B - 77.91 World knowledge is mostly about general knowledge ques- Yi 6B - 76.42 tions, for example, in Wikifact dataset questions such as ”Who Yi 34B - 85.69 GPT-4 - 95.3 is the author of a specific well-known book” can be found and Gemini Ultra - 87.8 references are also provided. Table VII shows the results. TABLE VII: World knowledge comparison. TABLE IX: Arithmetic reasoning comparison. Model TriviaQA NaturalQ WebQ ARC Model GSM8k MATH BLOOM - - - 32.9 Gemini Ultra 94.4 53.2 BLOOM 176B - - - 50.85 GPT-4 87.1 42.5 Bloomberg GPT - - - 48.63 Gemini Pro 86.5 32.6 Chinchilla - 35.5 - - ToRA 70B 84.3 49.7 Codex + REPLUG 76.8 44.7 - - MathCoder-L-70B 83.9 - GAL 120B - - - 67.9 MetaMath 70B 82.3 26 GLaM 62B/64E 75.8 32.5 15.5 50.3 MuggleMATH 70B 82.3 - Gopher - 28.2 - - MathCoder-CL-34B 81.7 45.2 GPT-3 175B 71.2 29.9 41.5 85.2 ToRA-Code 34B 80.7 50.8 GPT-4 - - - 96.4 MetaMath-Mistral-7B 77.7 - GPT-NeoX - - - 45.39 Arithmo2-Mistral-7B 76.4 - LLaMA 13B - - - 52.7 ToRA-Code 13B 75.8 48.1 LLaMA 2 70B 85 33 - - Arithmo-Mistral-7B 74.7 - LLaMA 33B - 24.9 - 57.8 MathCoder-CL-13B 74.1 35.9 LLaMA 65B 72.6 39.9 - - MuggleMATH 13B 74 - LLaMA 7B - - - 47.6 CodeT5+ 73.8 - Mistral 7B 69.9 28.8 - 55.5 KwaiYiiMath 13B 73.3 - Neo-6B - 13.7 - - ToRA-Code 7B 72.6 44.6 OPT - - - 31.1 MathCoder-L-13B 72.6 29.9 OPT 66B - - - 44.54 MetaMath 13B 71 22.5 OPT-175B - -
Neo-6B - 13.7 - - ToRA-Code 7B 72.6 44.6 OPT - - - 31.1 MathCoder-L-13B 72.6 29.9 OPT 66B - - - 44.54 MetaMath 13B 71 22.5 OPT-175B - - - 43.94 LLaMA 65B 69.7 10.6 OPT-175B - - - 25.6 MuggleMATH 7B 68.4 - PaLM 2-L 86.1 37.5 28.2 95.1 MathCoder-CL-7B 67.8 23.3 PaLM 2-M 81.7 32 26.9 64.9 MetaMath 7B 66.4 19.4 PaLM 2-S 75.2 25.3 21.8 59.6 RFT 70B 64.8 - PaLM-540B 81.4 39.6 43.5 87.1 MathCoder-L-7B 64.2 - phi-1.5-web 1.3B - - - 44.9 Orca 2-13B 59.14 - SparseGPT - - - 38.99 U-PaLM 58.5 - SparseGPT - - - 39.85 PaLM-540B 58.1 8.8 SparseGPT - - - 41.3 LLaMA 2 70B 56.8 - RFT 13B 55.3 - LLaMA 33B 53.1 7.1 Mistral 7B 52.2 13.1 RFT 7B 51.2 - Forsomespecificuse-casemodels,itishighlydemandedto LLaMA 65B 50.9 20.5 have coding and code-generation capability. Table VIII shows Orca 2-7B 47.23 - the results of different models on coding capability. Text-davinci-002 40.7 19.1 LLaMA 33B 35.6 3.9 GPT-Neo-2.7B 19.5 - LLaMA 7B 18.1 2.9 TABLE VIII: Coding capability comparison. PaLM 540B 17.9 8.8 LLaMA 13B 17.8 3.9 Model HumanEval LLaMA 7B 11 2.9 Gemini Ultra 74.4 GPT-Neo-125M 7.5 - Gemini Pro 67.7
8.8 LLaMA 13B 17.8 3.9 Model HumanEval LLaMA 7B 11 2.9 Gemini Ultra 74.4 GPT-Neo-125M 7.5 - Gemini Pro 67.7 PaLM 8B 4.1 1.5 GPT-4 67 GPT-2 - 5.4 WizardCoder 15B 57.3 GPT-3 175B - 5.2 phi-1 1.3B 50.6 PaLM 62B - 4.4 Code Llama 48.8 GPT-3-13B - 3 GPT-3.5 48.1 LLaMA 7B 11 2.9 OctoCoder 46.2 PaLM 8B - 1.5 phi-1-small 45 PaLM 2-S 37.6 InstructCodeT5+ 16B 35 Large language models in some cases are hallucinating an- Mistral 7B 30.5 swers simply because they are next-token prediction machines. LLaMA 2 29.9 Hallucination is one of the important factors in measuring phi-1-base 29 Codex-12B 28.81 how much a large language model is trustworthy and reliable. PaLM 540B 26.2 Measuring hallucination on the other hand is also not easy as it CodeT5+ 2B 24.2 seems because each fact can be written in different styles and LLaMA 65B 23.7 even the smallest changes in writing make it hard to detect. LLaMA 33B 21.7 PaLM 62B 15.9 It is fair to assume if any particular LLM is more capable LLaMA 13B 15.8 to detect hallucination of false information in text, it is also LaMDA 137B 14 more trustworthy. HaluEval is one of the datasets that aims to MIM-350M 13.7 LLaMA 7B 10.5 measurehallucinationinthisfield[205].Evaluationcanalsobe PaLM 8B 3.6 performed by another model judging the response with regard to the actual answer [206]. Table X shows the evaluation of different models based on these datasets. Arithmetic reasoning is another challenging reasoning ca- VII. CHALLENGES AND FUTURE DIRECTIONS pability to achieve. GSM8K for example contains grade school mathematical questions with respect to t
to the actual answer [206]. Table X shows the evaluation of different models based on these datasets. Arithmetic reasoning is another challenging reasoning ca- VII. CHALLENGES AND FUTURE DIRECTIONS pability to achieve. GSM8K for example contains grade school mathematical questions with respect to their answers. Table IX As we have seen in the previous sections, large language provides an insight for different model comparisons. models have achieved impressive results in the past 1-2 years. TABLE X: Hallucination evaluation Model HHEM HaluEval QA HaluEval Dialogue HaluEval Sum. HaluEval General GPT 4 97 - - - - GPT 4 Turbo 97 - - - - GPT 3.5 Turbo 96.5 62.59 72.4 58.53 79.44 Davinci002 - 60.05 60.81 47.77 80.42 Davinci003 - 49.65 68.37 48.07 80.4 GPT-3 - 49.21 50.02 51.23 72.72 Google Gemini Pro 95.2 - - - - Llama 2 70B 94.9 - - - - Llama 2 7B 94.4 49.6 43.99 49.55 20.46 Llama 2 13B 94.1 - - - - Cohere-Chat 92.5 - - - - Cohere 91.5 - - - - Claude 2 91.5 69.78 64.73 57.75 75 Claude 1 67.6 64.83 53.76 73.88 Microsoft Phi 2 91.5 - - - - Google Palm 2 (beta) 91.4 - - - - Mixtral 8x7B 90.7 - - - - Amazon Titan Express 90.6 - - - - Mistral 7B 90.6 - - - - Google Palm 2 Chat (beta) 90 - - - - Google Palm 2 87.9 - - - - Google Palm 2 Chat 72.8 - - - - ChatGLM - 47.93 44.41 48.57 30.92 Falcon - 39.66 29.08 42.71 18.98 Vicuna - 60.34 46.35 45.62 19.48
- Google Palm 2 Chat 72.8 - - - - ChatGLM - 47.93 44.41 48.57 30.92 Falcon - 39.66 29.08 42.71 18.98 Vicuna - 60.34 46.35 45.62 19.48 Alpaca - 6.68 17.55 20.63 9.54 At the same time this is still a new and extremely active GRU, seq2seq, but Transformers have been the dominant research area where the pace of innovation is increasing rather approach since its inception. As described earlier, attention is thanslowingdown.Asinanyotherevolvingareathough,there the main mechanism driving transformers. More recently, there are still numerous challenges ahead. Here we briefly mention has been promising research in alternative approaches that are some of the challenges and main active areas which are known being labelled as post-attention. so far. It is worth noting that LLM challenges are discussed An important class of such class of post-attention models in details in a work by Kaddour et al. [207]. are the so called State Space Models (SSMs). While the notion A. Smaller and more efficient Language Models of State Space Models has a long history in machine learning, it should be noted that in the context of language models, SSM This is a survey on large language models, and there is usually used in reference to the newer Structure State Space has been an initial push towards ”larger is better” that has Model architecture or S4 for short (see Gu et al. [29]). Some clearly been rewarded with ever larger models like GPT- recent models in this category are Mamba [30], Hyena [210], 4 getting better accuracy and performance in benchmarks. and Striped Hyena [211]. However, those large models are costly and inefficient in While all of those models are very competitive in terms of several dimensions (e.g. high latency). In response to all of performance in leaderboards and efficiency, they also address this, there is a current research trend to come up with Small an important challenge in more traditional attention-based Language Models (SLMs) as a cost-effective alternative to architectures: the lack of support for larger context windows. LLMs, particularly when used on specific tasks that might not require the full generality of larger models. Prominent works Having a good answer to many prompts requires context. in this direction include Phi-1 [208], Phi-1.5 [209], and Phi-2 For example, the response to ”Recommend some good movies from Microsoft. for me” requires a lot of context about ”me” as well as what More generally, we should expect many research efforts in movies are available and which ones I have not watched. this area of how to train smaller and more efficient models. Context length is especially important for RAG, where large Techniques such as parameter-efficient fine-tuning (PEFT), portions of text might be retrieved and injected into the prompt teacher/student, and other forms of distillation – see section for generation (see
ct many research efforts in movies are available and which ones I have not watched. this area of how to train smaller and more efficient models. Context length is especially important for RAG, where large Techniques such as parameter-efficient fine-tuning (PEFT), portions of text might be retrieved and injected into the prompt teacher/student, and other forms of distillation – see section for generation (see section IV-C. III-I – will continue to be used to build a smaller model out The longer the context length, the more tokens we can of larger ones. squeeze into the context. The more information the model has access to, the better its response will be. But on the other B. New Post-attention Architectural Paradigms hand, with very long context, it would be hard for the model Transformerblockshavebeenacrucialandconstantpartof to remember everything and efficiently process all the informa- most of current LLM frameworks, and it’s a big question mark tion. Attention-based models are highly inefficient for longer how much longer this architecture will be in vogue, and what contexts and that is why we should expect more research in will be the next big architectural break-through in the field of different mechanisms that enable processing longer contexts deeplearning(andNLP).SinceAlexNetin2012,wehaveseen and generally come up with more efficient architectures. many architectures go in and out of fashion, including LSTM, That being said, new architectures might not only proposealternatives for the attention mechanism but rather rethink the LLM-based systems are already starting to replace ma- whole Transformer architecture. As an early example of this, chine learning systems that were until recently using other Monarch Mixer [212] proposes a new architecture that uses approaches. As a clear example of this, LLMs are now being the same sub-quadratic primitive that achieves high hardware deployed to better understand people preference and interests, efficiency on GPUs – Monarch matrices – along both sequence and provide more personalized interactions, whether in cus- length and model dimension. tomer service, content recommendation, or other applications. On the other end of the spectrum, it is worth mentioning This involves better understanding of user preferences, and that there are some attention-compatible architectural mecha- analyzing their past interactions and using them as the context. nisms that have been recently gaining steam and proving their We will continue to see research in the application and usage value in creating better and more powerful LLMs. Probably of LLMs for not only personalization and recommendations, the best example of such mechanism is Mixture of Experts but many other application areas using other machine learning (MoE). MoEs have been around in machine learning for years, techniques. even before the Deep Learning Era [213], but they have been Finally, another important area of research we expect to gaining popularity since then, and particularly in the context gather increased attention is that of LLM-based agents and of Transformer models and LLMs. multi-agent systems [172], [173], [174]. The development of In LLMs, MoEs allow t
achine learning for years, techniques. even before the Deep Learning Era [213], but they have been Finally, another important area of research we expect to gaining popularity since then, and particularly in the context gather increased attention is that of LLM-based agents and of Transformer models and LLMs. multi-agent systems [172], [173], [174]. The development of In LLMs, MoEs allow to train an extremely large model LLM systems with access to external tools and decision- than is then only partially instantiated during inference making capabilities is both exciting and challenging. We will when some of the experts are turned off wherever the gat- see continued research and progress in this important area that ing/weighting function has a low weight assigned to them. As some argue could lead to Artificial General Intelligence (AGI). an example, the GLaM model has 1.2 trillion parameters, but during inference only 2 out of the 64 experts are used [84]. E. Security and Ethical/Responsible AI MoEs are nowadays an important component of the so- Ensuring the robustness and security of LLMs against called frontier LLMs (i.e. the most advanced and capable adversarial attacks and other vulnerabilities is a critical area models). GPT-4 itself is rumored to be based on a MoE of research [219]. As LLMs are increasingly deployed in real- architecture, and some of the best performing LLMs such as world applications, they need to be protected from potential Mixtral [117], are basically an MoE version of pre-existing threats, to prevent them being used to manipulate people or LLMs. spread mis-information. Finally, it is important to note that MoEs can be used as a Addressing ethical concerns and biases in LLMs is another component of any architecture regardless of whether it is based active area of research. Efforts are being made to ensure that on attention or not. In fact, MoEs have also been applied to LLMs are fair, unbiased, and capable of handling sensitive SSM-based LLMs like Mamba citepioro2024moemamba. We information responsibly. As LLMs are being used more and should continue to see MoE-driven improvements in the future more by a large number of people on a daily basis, making regardless of the underlying architecture. sure they are unbiased and behave responsibly is crucial. C. Multi-modal Models VIII. CONCLUSION Future LLMs are expected to be multi-modal and handle a variety of data types, such as text, images, and videos, This paper present a survey of LLMs developed in the audio, in a unified manner. This opens up possibilities for past few years. We first provide an overview of early pre- more diverse applications in fields like question answering, trained language models (e.g., as BERT), then review three content generation, creative arts, and healthcare, robotics, and popular LLM families (GPT, LLaMA, PaLM), and other beyond. There are already several prominent multi-modal representative LLMs. We then survey methods and techniques LLMs out there, including: LLAVA [214], LLAVA-Plus [215], of building, augmenting, and using LLMs. We review popular GPT-4 [33], Qwen-vl [116], Next-GPT [216], but the trend is LLM datasets and benchmarks, and compare performance of ex
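To illustrate the sparse gating mechanism described above, where only a few experts are active for each token (as in GLaM's 2-of-64 routing), here is a toy top-k mixture-of-experts layer in PyTorch; it is a sketch of the mechanism only, not the routing used by any particular production model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)   # gating/weighting function
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (tokens, d_model)
        scores = self.gate(x)                               # (tokens, n_experts)
        top_w, top_idx = scores.topk(self.k, dim=-1)        # keep only the top-k experts per token
        top_w = F.softmax(top_w, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: 16 tokens with hidden size 64; only 2 of the 8 experts run for each token.
y = TopKMoE(64, n_experts=8, k=2)(torch.randn(16, 64))
print(y.shape)  # torch.Size([16, 64])
```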
robotics, and popular LLM families (GPT, LLaMA, PaLM), and other beyond. There are already several prominent multi-modal representative LLMs. We then survey methods and techniques LLMs out there, including: LLAVA [214], LLAVA-Plus [215], of building, augmenting, and using LLMs. We review popular GPT-4 [33], Qwen-vl [116], Next-GPT [216], but the trend is LLM datasets and benchmarks, and compare performance of expected to be continued. Evaluation of these models also is a a set of prominent models on public benchmarks. Finally, we new research topic, especially conversational generative vision present open challenges and future research directions. models [217]. Multi-modal LLMs can unlock huge potentials in a variety of tasks, and there has already been a descent REFERENCES progress in this direction, which needs a dedicated paper to discuss all its details. [1]J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, “Scaling laws D. Improved LLM Usage and Augmentation techniques for neural language models,” arXiv preprint arXiv:2001.08361, 2020. [2]J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, As we described in sectionIV, many of the shortcomings E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark and limitations of LLMs such as hallucination can be ad- et al., “Training compute-optimal large language models,” arXiv dressed through advanced prompt engineering, use of tools, preprint arXiv:2203.15556, 2022. or other augmentation techniques. We should expect not only [3]C.E.Shannon,“Predictionandentropyofprintedenglish,” Bellsystem technical journal, vol. 30, no. 1, pp. 50–64, 1951. continued, but accelerated research in this area. It is worth [4]F. Jelinek, Statistical methods for speech recognition. MIT press, mentioning that, in the specific case of software engineering, 1998. some works ([218]) tried to automatically eliminate this issue [5]C. Manning and H. Schutze, Foundations of statistical natural lan- from the overall software engineering workflow guage processing. MIT press, 1999. [6]C. D. Manning, An introduction to information retrieval. Cambridge models for natural language processing: A survey,” Science China university press, 2009. Technological Sciences, vol. 63, no. 10, pp. 1872–1897, 2020. [7]W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, [29]A. Gu, K. Goel, and C. R ´e, “Efficiently modeling long sequences with B. Zhang, J. Zhang, Z. Dong et al., “A survey of large language structured state spaces,” 2022. models,” arXiv preprint arXiv:2303.18223, 2023. [30]A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with [8]C. Zhou, Q. Li, C. Li, J. Yu, Y. Liu, G. Wang, K. Zhang, C. Ji, Q. Yan, selective state spaces,” arXiv preprint arXiv:2312.00752, 2023. L. He et al., “A comprehensive survey on pretrained foundation mod- [31]A. Chowdhery, S.
structured state spaces,” 2022. models,” arXiv preprint arXiv:2303.18223, 2023. [30]A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with [8]C. Zhou, Q. Li, C. Li, J. Yu, Y. Liu, G. Wang, K. Zhang, C. Ji, Q. Yan, selective state spaces,” arXiv preprint arXiv:2312.00752, 2023. L. He et al., “A comprehensive survey on pretrained foundation mod- [31]A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, els: A history from bert to chatgpt,” arXiv preprint arXiv:2302.09419, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., 2023. “Palm: Scaling language modeling with pathways,” arXiv preprint [9]P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, “Pre- arXiv:2204.02311, 2022. train, prompt, and predict: A systematic survey of prompting methods [32]H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, in natural language processing,” ACM Computing Surveys, vol. 55, T. Lacroix, B. Rozi`ere, N. Goyal, E. Hambro, F. Azhar et al., “Llama: no. 9, pp. 1–35, 2023. Open and efficient foundation language models,” arXiv preprint [10]Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, arXiv:2302.13971, 2023. J. Xu, and Z. Sui, “A survey for in-context learning,” arXiv preprint [33]OpenAI, “GPT-4 Technical Report,” https://arxiv.org/pdf/2303. arXiv:2301.00234, 2022. 08774v3.pdf, 2023. [11]J. Huang and K. C.-C. Chang, “Towards reasoning in large language [34]J. Wei, X. Wang, D. Schuurmans, M. Bosma, b. ichter, models: A survey,” arXiv preprint arXiv:2212.10403, 2022. F. Xia, E. Chi, Q. V. Le, and D. Zhou, “Chain-of-thought [12]S. F. Chen and J. Goodman, “An empirical study of smoothing prompting elicits reasoning in large language models,” in techniques for language modeling,” Computer Speech & Language, Advances in Neural Information Processing Systems, S. Koyejo, vol. 13, no. 4, pp. 359–394, 1999. S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, [13]Y. Bengio, R. Ducharme, and P. Vincent, “A neural probabilistic Eds., vol. 35. Curran Associates, Inc., 2022, pp. 24824–24837. language model,” Advances in neural information processing systems, [Online]. Available: https://proceedings.neurips.cc/paper files/paper/ vol. 13, 2000. 2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf [14]H. Schwenk, D. D ´echelotte, and J.-L. Gauvain, “Continuous space [35]G. Mialon, R. Dess `ı, M. Lomeli, C. Nalmpantis, R. Pasunuru, language models for statistical machine translation,” in Proceedings R. Raileanu, B. Rozi`ere, T. Schick, J. Dwivedi-Yu, A. Celikyil- of the COLING/ACL 2006 Main Conference Poster Sessions, 2006, maz et al., “Augmented language models: a survey,” arXiv preprint pp. 723–730.
uous space [35]G. Mialon, R. Dess `ı, M. Lomeli, C. Nalmpantis, R. Pasunuru, language models for statistical machine translation,” in Proceedings R. Raileanu, B. Rozi`ere, T. Schick, J. Dwivedi-Yu, A. Celikyil- of the COLING/ACL 2006 Main Conference Poster Sessions, 2006, maz et al., “Augmented language models: a survey,” arXiv preprint pp. 723–730. arXiv:2302.07842, 2023. [15]T. Mikolov, M. Karafi ´at, L. Burget, J. Cernock`y, and S. Khudanpur, [36]B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, “Recurrent neural network based language model.” in Interspeech, L. Liden, Z. Yu, W. Chen, and J. Gao, “Check your facts and try vol. 2, no. 3. Makuhari, 2010, pp. 1045–1048. again: Improving large language models with external knowledge and [16]A. Graves, “Generating sequences with recurrent neural networks,” automated feedback,” arXiv preprint arXiv:2302.12813, 2023. arXiv preprint arXiv:1308.0850, 2013. [37]S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, [17]P.-S.Huang,X.He,J.Gao,L.Deng,A.Acero,andL.Heck,“Learning “React: Synergizing reasoning and acting in language models,” arXiv deep structured semantic models for web search using clickthrough preprint arXiv:2210.03629, 2022. data,” in Proceedings of the 22nd ACM international conference on [38]D. E. Rumelhart, G. E. Hinton, R. J. Williams et al., “Learning internal Information & Knowledge Management, 2013, pp. 2333–2338. representations by error propagation,” 1985. [18]J. Gao, C. Xiong, P. Bennett, and N. Craswell, Neural Approaches to [39]J. L. Elman, “Finding structure in time,” Cognitive science, vol. 14, Conversational Information Retrieval. Springer Nature, 2023, vol. 44. no. 2, pp. 179–211, 1990. [19]I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning [40]M. V. Mahoney, “Fast text compression with neural networks.” in with neural networks,” Advances in neural information processing FLAIRS conference, 2000, pp. 230–234. systems, vol. 27, 2014. [41]T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. ˇCernock`y, “Strate- [20]K. Cho, B. Van Merri ¨enboer, D. Bahdanau, and Y. Bengio, “On gies for training large scale neural network language models,” in 2011 the properties of neural machine translation: Encoder-decoder ap- IEEE Workshop on Automatic Speech Recognition & Understanding. proaches,” arXiv preprint arXiv:1409.1259, 2014. IEEE, 2011, pp. 196–201. [21]H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Doll ´ar, [42]tmikolov. rnnlm. [Online]. Available: https://www.fit.vutbr.cz/ J. Gao, X. He, M. Mitchell, J. C. Platt et al., “From captions to ∼ imikolov/rnnlm/ visual concepts and back,” in Proceedings of the IEEE conference [43]S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, on computer vision and pattern recognition, 2015, pp. 1473–1482. and
[42]tmikolov. rnnlm. [Online]. Available: https://www.fit.vutbr.cz/ J. Gao, X. He, M. Mitchell, J. C. Platt et al., “From captions to ∼ imikolov/rnnlm/ visual concepts and back,” in Proceedings of the IEEE conference [43]S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, on computer vision and pattern recognition, 2015, pp. 1473–1482. and J. Gao, “Deep learning–based text classification: a comprehensive [22]O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: review,” ACM computing surveys (CSUR), vol. 54, no. 3, pp. 1–40, A neural image caption generator,” in Proceedings of the IEEE 2021. conference on computer vision and pattern recognition, 2015, pp. [44]A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. 3156–3164. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” [23]M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, Advances in neural information processing systems, vol. 30, 2017. and L. Zettlemoyer, “Deep contextualized word representations. corr [45]Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, abs/1802.05365 (2018),” arXiv preprint arXiv:1802.05365, 2018. “Albert: A lite bert for self-supervised learning of language represen- [24]J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training tations,” arXiv preprint arXiv:1909.11942, 2019. of deep bidirectional transformers for language understanding,” arXiv [46]K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, “Electra: Pre- preprint arXiv:1810.04805, 2018. training text encoders as discriminators rather than generators,” arXiv [25]Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, preprint arXiv:2003.10555, 2020. L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert [47]G. Lample and A. Conneau, “Cross-lingual language model pretrain- pretraining approach,” arXiv preprint arXiv:1907.11692, 2019. ing,” arXiv preprint arXiv:1901.07291, 2019. [26]P. He, X. Liu, J. Gao, and W. Chen, “Deberta: Decoding-enhanced bert [48]Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and with disentangled attention,” arXiv preprint arXiv:2006.03654, 2020. Q. V. Le, “Xlnet: Generalized autoregressive pretraining for language [27]X. Han, Z. Zhang, N. Ding, Y. Gu, X. Liu, Y. Huo, J. Qiu, Y. Yao, understanding,” Advances in neural information processing systems, A. Zhang, L. Zhang et al., “Pre-trained models: Past, present and vol. 32, 2019. future,” AI Open, vol. 2, pp. 225–250, 2021. [49]L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, [28]X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, “Pre-trained M. Zhou, and H.-W. Hon, “Unified language model pre-training for natural language understanding and generation,” Advances in neural [70]Y. Wang, H. Ivison, P. Dasigi, J. Hessel, T. Khot, K. R. Chandu, information processing systems, vol. 32, 2019. D.Wadden,K.MacMillan,N.A.Smith,I.
, Y. Wang, J. Gao, [28]X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, “Pre-trained M. Zhou, and H.-W. Hon, “Unified language model pre-training for natural language understanding and generation,” Advances in neural [70]Y. Wang, H. Ivison, P. Dasigi, J. Hessel, T. Khot, K. R. Chandu, information processing systems, vol. 32, 2019. D.Wadden,K.MacMillan,N.A.Smith,I.Beltagyetal.,“Howfarcan [50]A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., “Improv- camelsgo?exploringthestateofinstructiontuningonopenresources,” ing language understanding by generative pre-training,” 2018. arXiv preprint arXiv:2306.04751, 2023. [51]A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., [71]S. Tworkowski, K. Staniszewski, M. Pacek, Y. Wu, H. Michalewski, “Language models are unsupervised multitask learners,” OpenAI blog, and P. Miło´s, “Focused transformer: Contrastive training for context vol. 1, no. 8, p. 9, 2019. scaling,” arXiv preprint arXiv:2307.03170, 2023. [52]C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, [72]D. Mahan, R. Carlow, L. Castricato, N. Cooper, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning and C. Laforte, “Stable beluga models.” [Online]. with a unified text-to-text transformer,” The Journal of Machine Available: [https://huggingface.co/stabilityai/StableBeluga2](https:// Learning Research, vol. 21, no. 1, pp. 5485–5551, 2020. huggingface.co/stabilityai/StableBeluga2) [53]L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, [73]Y. Tay, J. Wei, H. W. Chung, V. Q. Tran, D. R. So, S. Shakeri, X. Gar- A. Barua, and C. Raffel, “mt5: A massively multilingual pre-trained cia, H. S. Zheng, J. Rao, A. Chowdhery et al., “Transcending scaling text-to-text transformer,” arXiv preprint arXiv:2010.11934, 2020. laws with 0.1% extra compute,” arXiv preprint arXiv:2210.11399, [54]K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu, “Mass: Masked 2022. sequence to sequence pre-training for language generation,” arXiv [74]H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, preprint arXiv:1905.02450, 2019. Y. Li, X. Wang, M. Dehghani, S. Brahma et al., “Scaling instruction- [55]M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, finetuned language models,” arXiv preprint arXiv:2210.11416, 2022. V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to- [75]R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, sequence pre-training for natural language generation, translation, and S. Shakeri, E. Taropa, P. Bailey, Z. Chen et al., “Palm 2 technical comprehension,” arXiv preprint arXiv:1910.13461, 2019. report,” arXiv preprint arXiv:2305.10403, 2023. [56]T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, [76]K. Singhal, S. Azizi, T. Tu, S. S. Mahdavi, J. Wei, H. W. Chung, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language mod- N. Scales, A. Tanwani, H. Cole-Lewi
y, Z. Chen et al., “Palm 2 technical comprehension,” arXiv preprint arXiv:1910.13461, 2019. report,” arXiv preprint arXiv:2305.10403, 2023. [56]T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, [76]K. Singhal, S. Azizi, T. Tu, S. S. Mahdavi, J. Wei, H. W. Chung, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language mod- N. Scales, A. Tanwani, H. Cole-Lewis, S. Pfohl et al., “Large language els are few-shot learners,” Advances in neural information processing models encode clinical knowledge,” arXiv preprint arXiv:2212.13138, systems, vol. 33, pp. 1877–1901, 2020. 2022. [57]M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Ka- [77]K. Singhal, T. Tu, J. Gottweis, R. Sayres, E. Wulczyn, L. Hou, plan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., K. Clark, S. Pfohl, H. Cole-Lewis, D. Neal et al., “Towards expert- “Evaluating large language models trained on code,” arXiv preprint level medical question answering with large language models,” arXiv arXiv:2107.03374, 2021. preprint arXiv:2305.09617, 2023. [58]R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, [78]J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, C. Hesse, S. Jain, V. Kosaraju, W. Saunders et al., “Webgpt: Browser- A. M. Dai, and Q. V. Le, “Finetuned language models are zero-shot assisted question-answering with human feedback,” arXiv preprint learners,” arXiv preprint arXiv:2109.01652, 2021. arXiv:2112.09332, 2021. [79]J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, [59]L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, J.Aslanides,S.Henderson,R.Ring,S.Youngetal.,“Scalinglanguage C. Zhang, S. Agarwal, K. Slama, A. Ray et al., “Training language models: Methods, analysis & insights from training gopher,” arXiv models to follow instructions with human feedback,” Advances in preprint arXiv:2112.11446, 2021. Neural Information Processing Systems, vol. 35, pp. 27730–27744, [80]V. Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, 2022. A. Chaffin, A. Stiegler, T. L. Scao, A. Raja et al., “Multi- [60]OpenAI. (2022) Introducing chatgpt. [Online]. Available: https: task prompted training enables zero-shot task generalization,” arXiv //openai.com/blog/chatgpt preprint arXiv:2110.08207, 2021. [61]H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, [81]Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang, J. Liu, X. Chen, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al., “Llama Y. Zhao, Y. Lu et al., “Ernie 3.0: Large-scale knowledge enhanced pre- 2: Open foundation and fine-tuned chat models,” arXiv preprint training for language understanding and generation,” arXiv preprint arXiv:2307.09288, 2023. arXiv:2107.02137, 2021. [62]R.Taori,I.Gulrajani,T.Zhang,Y.Dubois
S. Batra, P. Bhargava, S. Bhosale et al., “Llama Y. Zhao, Y. Lu et al., “Ernie 3.0: Large-scale knowledge enhanced pre- 2: Open foundation and fine-tuned chat models,” arXiv preprint training for language understanding and generation,” arXiv preprint arXiv:2307.09288, 2023. arXiv:2107.02137, 2021. [62]R.Taori,I.Gulrajani,T.Zhang,Y.Dubois,X.Li,C.Guestrin,P.Liang, [82]S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Mil- and T. B. Hashimoto, “Alpaca: A strong, replicable instruction- lican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark following model,” Stanford Center for Research on Foundation Mod- et al., “Improving language models by retrieving from trillions of els. https://crfm. stanford. edu/2023/03/13/alpaca. html, vol. 3, no. 6, tokens,” in International conference on machine learning. PMLR, p. 7, 2023. 2022, pp. 2206–2240. [63]T.Dettmers,A.Pagnoni,A.Holtzman,andL.Zettlemoyer,“Qlora:Ef- [83]O. Lieber, O. Sharir, B. Lenz, and Y. Shoham, “Jurassic-1: Technical ficient finetuning of quantized llms,” arXiv preprint arXiv:2305.14314, details and evaluation,” White Paper. AI21 Labs, vol. 1, p. 9, 2021. 2023. [84]N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, [64]X. Geng, A. Gudibande, H. Liu, E. Wallace, P. Abbeel, S. Levine, Y. Zhou, A. W. Yu, O. Firat et al., “Glam: Efficient scaling of and D. Song, “Koala: A dialogue model for academic research,” Blog languagemodelswithmixture-of-experts,”in InternationalConference post, April, vol. 1, 2023. on Machine Learning. PMLR, 2022, pp. 5547–5569. [65]A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, [85]R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.- D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier et al., T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du et al., “Lamda: Language “Mistral 7b,” arXiv preprint arXiv:2310.06825, 2023. models for dialog applications,” arXiv preprint arXiv:2201.08239, 2022. [66]B.Roziere,J.Gehring,F.Gloeckle,S.Sootla,I.Gat,X.E.Tan,Y.Adi, [86]S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, J.Liu,T.Remez,J.Rapinetal.,“Codellama:Openfoundationmodels C. Dewan, M. Diab, X. Li, X. V. Lin et al., “Opt: Open pre-trained for code,” arXiv preprint arXiv:2308.12950, 2023. transformerlanguagemodels,” arXivpreprintarXiv:2205.01068,2022. [67]S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez, “Gorilla: Large [87]R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Sar- language model connected with massive apis,” 2023. avia, A. Poulton, V. Kerkez, and R. Stojnic, “Galactica: A large [68]A. Pal, D. Karkhanis, M. Roberts, S. Dooley, A. Sundararajan, and language model for science,” arXiv preprint arXiv:2211.09085, 2022.
, and J. E. Gonzalez, “Gorilla: Large [87]R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Sar- language model connected with massive apis,” 2023. avia, A. Poulton, V. Kerkez, and R. Stojnic, “Galactica: A large [68]A. Pal, D. Karkhanis, M. Roberts, S. Dooley, A. Sundararajan, and language model for science,” arXiv preprint arXiv:2211.09085, 2022. S. Naidu, “Giraffe: Adventures in expanding context lengths in llms,” [88]E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, arXiv preprint arXiv:2308.10882, 2023. S. Savarese, and C. Xiong, “Codegen: An open large language [69]B. Huang, “Vigogne: French instruction-following and chat models,” model for code with multi-turn program synthesis,” arXiv preprint https://github.com/bofenghuang/vigogne, 2023. arXiv:2203.13474, 2022. [89]S. Soltan, S. Ananthakrishnan, J. FitzGerald, R. Gupta, W. Hamza, [110]A. Mitra, L. D. Corro, S. Mahajan, A. Codas, C. Simoes, S. Agarwal, H. Khan, C. Peris, S. Rawls, A. Rosenbaum, A. Rumshisky et al., X. Chen, A. Razdaibiedina, E. Jones, K. Aggarwal, H. Palangi, “Alexatm 20b: Few-shot learning using a large-scale multilingual G. Zheng, C. Rosset, H. Khanpour, and A. Awadallah, “Orca 2: seq2seq model,” arXiv preprint arXiv:2208.01448, 2022. Teaching small language models how to reason,” 2023. [90]A. Glaese, N. McAleese, M. Trebacz, J. Aslanides, V. Firoiu, [111]L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and T. Ewalds, M. Rauh, L. Weidinger, M. Chadwick, P. Thacker et al., G. Neubig, “Pal: Program-aided language models,” in International “Improving alignment of dialogue agents via targeted human judge- Conference on Machine Learning. PMLR, 2023, pp. 10764–10799. ments,” arXiv preprint arXiv:2209.14375, 2022. [112]Anthropic. claude. [Online]. Available: https://www.anthropic.com/ [91]A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, news/introducing-claude V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo et al., [113]E. Nijkamp, H. Hayashi, C. Xiong, S. Savarese, and Y. Zhou, “Solving quantitative reasoning problems with language models,” “Codegen2: Lessons for training llms on programming and natural Advances in Neural Information Processing Systems, vol. 35, pp. languages,” arXiv preprint arXiv:2305.02309, 2023. 3843–3857, 2022. [92]Y. Tay, M. Dehghani, V. Q. Tran, X. Garcia, D. Bahri, T. Schuster, [114]L. Tunstall, E. Beeching, N. Lambert, N. Rajani, K. Rasul, Y. Belkada, H. S. Zheng, N. Houlsby, and D. Metzler, “Unifying language learning S. Huang, L. von Werra, C. Fourrier, N. Habib et al., “Zephyr: Direct paradigms,” arXiv preprint arXiv:2205.05131, 2022. distillation of lm alignment,” arXiv preprint arXiv:2310.16944, 2023. [93]T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ili ´c, D. Hesslow, [115]X. team. Grok. [Online]. Available: https://grok.x.ai/ R.Castagn´e,A.S.Luccioni,F.Yvon,M.Gall´eetal.,“Bloom:A176b- [116]J. Bai, S. Bai, S. Ya
rier, N. Habib et al., “Zephyr: Direct paradigms,” arXiv preprint arXiv:2205.05131, 2022. distillation of lm alignment,” arXiv preprint arXiv:2310.16944, 2023. [93]T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ili ´c, D. Hesslow, [115]X. team. Grok. [Online]. Available: https://grok.x.ai/ R.Castagn´e,A.S.Luccioni,F.Yvon,M.Gall´eetal.,“Bloom:A176b- [116]J. Bai, S. Bai, S. Yang, S. Wang, S. Tan, P. Wang, J. Lin, C. Zhou, parameter open-access multilingual language model,” arXiv preprint and J. Zhou, “Qwen-vl: A frontier large vision-language model with arXiv:2211.05100, 2022. versatile abilities,” arXiv preprint arXiv:2308.12966, 2023. [94]A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, [117]mixtral. mixtral. [Online]. Available: https://mistral.ai/news/ W. Zheng, X. Xia et al., “Glm-130b: An open bilingual pre-trained mixtral-of-experts/ model,” arXiv preprint arXiv:2210.02414, 2022. [118]D. Wang, N. Raman, M. Sibue, Z. Ma, P. Babkin, S. Kaur, Y. Pei, [95]S. Biderman, H. Schoelkopf, Q. G. Anthony, H. Bradley, K. O’Brien, A. Nourbakhsh, and X. Liu, “Docllm: A layout-aware generative E. Hallahan, M. A. Khan, S. Purohit, U. S. Prashanth, E. Raff et al., language model for multimodal document understanding,” 2023. “Pythia: A suite for analyzing large language models across train- [119]D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, ing and scaling,” in International Conference on Machine Learning. Y. Wu, Y. K. Li, F. Luo, Y. Xiong, and W. Liang, “Deepseek-coder: PMLR, 2023, pp. 2397–2430. When the large language model meets programming – the rise of code [96]S. Mukherjee, A. Mitra, G. Jawahar, S. Agarwal, H. Palangi, and intelligence,” 2024. A. Awadallah, “Orca: Progressive learning from complex explanation [120]F. Wan, X. Huang, D. Cai, X. Quan, W. Bi, and S. Shi, “Knowledge traces of gpt-4,” arXiv preprint arXiv:2306.02707, 2023. fusion of large language models,” 2024. [97]R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim et al., “Starcoder: may the source [121]P. Zhang, G. Zeng, T. Wang, and W. Lu, “Tinyllama: An open-source be with you!” arXiv preprint arXiv:2305.06161, 2023. small language model,” 2024. [98]S. Huang, L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, [122]C. Wu, Y. Gan, Y. Ge, Z. Lu, J. Wang, Y. Feng, P. Luo, and Y. Shan, L. Cui, O. K. Mohammed, Q. Liu et al., “Language is not all you “Llama pro: Progressive llama with block expansion,” 2024. need: Aligning perception with language models,” arXiv preprint [123]X. Amatriain, A. Sankar, J. Bing, P. K. Bodigutla, T. J. Hazen, and arXiv:2302.14045, 2023. M. Kazi, “Transformer models: an introduction and catalog,” 2023. [99]G. Team, R. Anil, S. Borgeaud, Y. Wu, J.-B. Alayrac, J. Yu, R. Soricut, [124]G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, J. Schalkwyk, A. M. Dai, A. Hauth et al., “Gemini: a family of highly
Amatriain, A. Sankar, J. Bing, P. K. Bodigutla, T. J. Hazen, and arXiv:2302.14045, 2023. M. Kazi, “Transformer models: an introduction and catalog,” 2023. [99]G. Team, R. Anil, S. Borgeaud, Y. Wu, J.-B. Alayrac, J. Yu, R. Soricut, [124]G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, J. Schalkwyk, A. M. Dai, A. Hauth et al., “Gemini: a family of highly H. Alobeidli, B. Pannier, E. Almazrouei, and J. Launay, “The refined- capable multimodal models,” arXiv preprint arXiv:2312.11805, 2023. web dataset for falcon llm: outperforming curated corpora with web [100]W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, data, and web data only,” arXiv preprint arXiv:2306.01116, 2023. J. Tompson, I. Mordatch, Y. Chebotar et al., “Inner monologue: [125]D. Hernandez, T. Brown, T. Conerly, N. DasSarma, D. Drain, S. El- Embodied reasoning through planning with language models,” arXiv Showk, N. Elhage, Z. Hatfield-Dodds, T. Henighan, T. Hume et al., preprint arXiv:2207.05608, 2022. “Scaling laws and interpretability of learning from repeated data,” [101]S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, arXiv preprint arXiv:2205.10487, 2022. J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti [126]P. Shaw, J. Uszkoreit, and A. Vaswani, “Self-attention with relative et al., “Using deepspeed and megatron to train megatron-turing position representations,” arXiv preprint arXiv:1803.02155, 2018. nlg 530b, a large-scale generative language model,” arXiv preprint arXiv:2201.11990, 2022. [127]J. Su, Y. Lu, S. Pan, B. Wen, and Y. Liu, “Roformer: En- [102]I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long- hanced transformer with rotary position embedding,” arXiv preprint document transformer,” arXiv preprint arXiv:2004.05150, 2020. arXiv:2104.09864, 2021. [103]S. Iyer, X. V. Lin, R. Pasunuru, T. Mihaylov, D. Simig, P. Yu, K. Shus- [128]O. Press, N. A. Smith, and M. Lewis, “Train short, test long: Attention ter, T. Wang, Q. Liu, P. S. Koura et al., “Opt-iml: Scaling language with linear biases enables input length extrapolation,” arXiv preprint model instruction meta learning through the lens of generalization,” arXiv:2108.12409, 2021. arXiv preprint arXiv:2212.12017, 2022. [129]G. Ke, D. He, and T.-Y. Liu, “Rethinking positional encoding in [104]Y. Hao, H. Song, L. Dong, S. Huang, Z. Chi, W. Wang, S. Ma, language pre-training,” arXiv preprint arXiv:2006.15595, 2020. and F. Wei, “Language models are general-purpose interfaces,” arXiv [130]N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, preprint arXiv:2206.06336, 2022. and J. Dean, “Outrageously large neural networks: The sparsely-gated [105]Z. Sun, Y. Shen, Q. Zhou, H. Zhang, Z. Chen, D. Cox, Y. Yang, mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017. and C. Gan, “Principle-driven self-alignment of language mod- [131]W. Fedus, B. Zoph, and N
s, Q. Le, G. Hinton, preprint arXiv:2206.06336, 2022. and J. Dean, “Outrageously large neural networks: The sparsely-gated [105]Z. Sun, Y. Shen, Q. Zhou, H. Zhang, Z. Chen, D. Cox, Y. Yang, mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017. and C. Gan, “Principle-driven self-alignment of language mod- [131]W. Fedus, B. Zoph, and N. Shazeer, “Switch transformers: Scaling els from scratch with minimal human supervision,” arXiv preprint to trillion parameter models with simple and efficient sparsity,” The arXiv:2305.03047, 2023. Journal of Machine Learning Research, vol. 23, no. 1, pp. 5232–5270, [106]W. E. team, “Palmyra-base Parameter Autoregressive Language 2022. Model,” https://dev.writer.com, 2023. [132]R. K. Mahabadi, S. Ruder, M. Dehghani, and J. Henderson, [107]——, “Camel-5b instructgpt,” https://dev.writer.com, 2023. “Parameter-efficient multi-task fine-tuning for transformers via shared [108]Yandex. Yalm. [Online]. Available: https://github.com/yandex/ hypernetworks,” 2021. YaLM-100B [133]S. Zhang, L. Dong, X. Li, S. Zhang, X. Sun, S. Wang, J. Li, R. Hu, [109]M. Team et al., “Introducing mpt-7b: a new standard for open-source, T. Zhang, F. Wu, and G. Wang, “Instruction tuning for large language commercially usable llms,” 2023. models: A survey,” 2023.[134]S. Mishra, D. Khashabi, C. Baral, and H. Hajishirzi, “Cross-task and O. Abend, “q 2 : Evaluating factual consistency in knowledge- generalization via natural language crowdsourcing instructions,” arXiv grounded dialogues via question generation and question answering,” preprint arXiv:2104.08773, 2021. in Proceedings of the 2021 Conference on Empirical Methods in [135]Y. Wang, Y. Kordi, S. Mishra, A. Liu, N. A. Smith, D. Khashabi, Natural Language Processing, M.-F. Moens, X. Huang, L. Specia, and H. Hajishirzi, “Self-instruct: Aligning language model with self and S. W.-t. Yih, Eds. Online and Punta Cana, Dominican Republic: generated instructions,” arXiv preprint arXiv:2212.10560, 2022. Association for Computational Linguistics, Nov. 2021, pp. 7856–7870. [136]K. Ethayarajh, W. Xu, D. Jurafsky, and D. Kiela. Kto. [Online]. [Online]. Available: https://aclanthology.org/2021.emnlp-main.619 Available: https://github.com/ContextualAI/HALOs/blob/main/assets/ [153]N. Dziri, H. Rashkin, T. Linzen, and D. Reitter, “Evaluating attribution report.pdf in dialogue systems: The BEGIN benchmark,” Transactions of the [137]P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and Association for Computational Linguistics, vol. 10, pp. 1066–1083, D. Amodei, “Deep reinforcement learning from human preferences,” 2022. [Online]. Available: https://aclanthology.org/2022.tacl-1.62 Advances in neural information processing systems, vol. 30, 2017.
dialogue systems: The BEGIN benchmark,” Transactions of the [137]P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and Association for Computational Linguistics, vol. 10, pp. 1066–1083, D. Amodei, “Deep reinforcement learning from human preferences,” 2022. [Online]. Available: https://aclanthology.org/2022.tacl-1.62 Advances in neural information processing systems, vol. 30, 2017. [154]S. Santhanam, B. Hedayatnia, S. Gella, A. Padmakumar, S. Kim, [138]H. Lee, S. Phatale, H. Mansoor, K. Lu, T. Mesnard, C. Bishop, V. Car- Y. Liu, and D. Z. Hakkani-T¨ur, “Rome was built in 1776: A case study bune, and A. Rastogi, “Rlaif: Scaling reinforcement learning from on factual correctness in knowledge-grounded response generation,” human feedback with ai feedback,” arXiv preprint arXiv:2309.00267, ArXiv, vol. abs/2110.05456, 2021. 2023. [155]S.Min,K.Krishna,X.Lyu,M.Lewis,W.tauYih,P.W.Koh,M.Iyyer, [139]R. Rafailov, A. Sharma, E. Mitchell, S. Ermon, C. D. Manning, and L. Zettlemoyer, and H. Hajishirzi, “Factscore: Fine-grained atomic C. Finn, “Direct preference optimization: Your language model is evaluation of factual precision in long form text generation,” 2023. secretly a reward model,” arXiv preprint arXiv:2305.18290, 2023. [156]D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, [140]S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, “Zero: Memory V. Chaudhary, and M. Young, “Machine learning: The high interest optimizations toward training trillion parameter models,” in SC20: In- credit card of technical debt,” in SE4ML: Software Engineering for ternational Conference for High Performance Computing, Networking, Machine Learning (NIPS 2014 Workshop), 2014. Storage and Analysis. IEEE, 2020, pp. 1–16. [157]Z.Zhang,A.Zhang,M.Li,andA.Smola,“Automaticchainofthought [141]B. Peng, E. Alcaide, Q. Anthony, A. Albalak, S. Arcadinho, H. Cao, prompting in large language models,” 2022. X. Cheng, M. Chung, M. Grella, K. K. GV et al., “Rwkv: Reinventing [158]S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and rnns for the transformer era,” arXiv preprint arXiv:2305.13048, 2023. K. Narasimhan, “Tree of thoughts: Deliberate problem solving with [142]E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, large language models,” 2023. and W. Chen, “Lora: Low-rank adaptation of large language models,” [159]P. Manakul, A. Liusie, and M. J. F. Gales, “Selfcheckgpt: Zero- arXiv preprint arXiv:2106.09685, 2021. resource black-box hallucination detection for generative large lan- [143]G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a guage models,” 2023. neural network,” arXiv preprint arXiv:1503.02531, 2015. [160]N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, [144]J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distillation: and S. Yao, “Reflexion: Language agents with verbal reinforcement A survey,” International Journal of Computer Vision, vol. 129, pp.
guage models,” 2023. neural network,” arXiv preprint arXiv:1503.02531, 2015. [160]N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, [144]J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distillation: and S. Yao, “Reflexion: Language agents with verbal reinforcement A survey,” International Journal of Computer Vision, vol. 129, pp. learning,” 2023. 1789–1819, 2021. [161]S. J. Zhang, S. Florin, A. N. Lee, E. Niknafs, A. Marginean, A. Wang, [145]Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. K.Tyser,Z.Chin,Y.Hicke,N.Singh,M.Udell,Y.Kim,T.Buonassisi, Bang, A. Madotto, and P. Fung, “Survey of hallucination in natural A. Solar-Lezama, and I. Drori, “Exploring the mit mathematics and language generation,” ACM Comput. Surv., vol. 55, no. 12, mar 2023. eecs curriculum using large language models,” 2023. [Online]. Available: https://doi.org/10.1145/3571730 [162]T. Wu, E. Jiang, A. Donsbach, J. Gray, A. Molina, M. Terry, and C. J. [146]N. McKenna, T. Li, L. Cheng, M. J. Hosseini, M. Johnson, and Cai, “Promptchainer: Chaining large language model prompts through M. Steedman, “Sources of hallucination by large language models on visual programming,” 2022. inference tasks,” 2023. [163]Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, and [147]C.-Y. Lin, “ROUGE: A package for automatic evaluation of J. Ba, “Large language models are human-level prompt engineers,” summaries,” in Text Summarization Branches Out. Barcelona, Spain: 2023. Association for Computational Linguistics, Jul. 2004, pp. 74–81. [164]P. S. H. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, [Online]. Available: https://aclanthology.org/W04-1013 N. Goyal, H. K¨uttler, M. Lewis, W. Yih, T. Rockt¨aschel, S. Riedel, and D. Kiela, “Retrieval-augmented generation for knowledge-intensive [148]K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for NLP tasks,” CoRR, vol. abs/2005.11401, 2020. [Online]. Available: automatic evaluation of machine translation,” in Proceedings of the https://arxiv.org/abs/2005.11401 40th Annual Meeting of the Association for Computational Linguistics, [165]Y. Gao, Y. Xiong, X. Gao, K. Jia, J. Pan, Y. Bi, Y. Dai, J. Sun, and P. Isabelle, E. Charniak, and D. Lin, Eds. Philadelphia, Pennsylvania, H. Wang, “Retrieval-augmented generation for large language models: USA: Association for Computational Linguistics, Jul. 2002, pp. 311– A survey,” arXiv preprint arXiv:2312.10997, 2023. 318. [Online]. Available: https://aclanthology.org/P02-1040 [166]A. W. Services. (Year of publication, e.g., 2023) Question answering [149]B. Dhingra, M. Faruqui, A. Parikh, M.-W. Chang, D. Das, and using retrieval augmented generation with foundation models in W. Cohen, “Handling divergent reference texts when evaluating amazon sagemaker jumpstart. Acce
23. 318. [Online]. Available: https://aclanthology.org/P02-1040 [166]A. W. Services. (Year of publication, e.g., 2023) Question answering [149]B. Dhingra, M. Faruqui, A. Parikh, M.-W. Chang, D. Das, and using retrieval augmented generation with foundation models in W. Cohen, “Handling divergent reference texts when evaluating amazon sagemaker jumpstart. Accessed: Date of access, e.g., table-to-text generation,” in Proceedings of the 57th Annual Meeting December 5, 2023. [Online]. Available: https://shorturl.at/dSV47 of the Association for Computational Linguistics, A. Korhonen, [167]S.Pan,L.Luo,Y.Wang,C.Chen,J.Wang,andX.Wu,“Unifyinglarge D. Traum, and L. M`arquez, Eds. Florence, Italy: Association language models and knowledge graphs: A roadmap,” arXiv preprint for Computational Linguistics, Jul. 2019, pp. 4884–4895. [Online]. arXiv:2306.08302, 2023. Available: https://aclanthology.org/P19-1483 [150]Z. Wang, X. Wang, B. An, D. Yu, and C. Chen, “Towards faithful [168]Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J. Dwivedi-Yu, Y. Yang, neural table-to-text generation with content-matching constraints,” J. Callan, and G. Neubig, “Active retrieval augmented generation,” in Proceedings of the 58th Annual Meeting of the Association 2023. for Computational Linguistics, D. Jurafsky, J. Chai, N. Schluter, [169]T. Schick, J. Dwivedi-Yu, R. Dess `ı, R. Raileanu, M. Lomeli, L. Zettle- and J. Tetreault, Eds. Online: Association for Computational moyer, N. Cancedda, and T. Scialom, “Toolformer: Language models Linguistics, Jul. 2020, pp. 1072–1086. [Online]. Available: https: can teach themselves to use tools,” 2023. //aclanthology.org/2020.acl-main.101 [170]B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, [151]H. Song, W.-N. Zhang, J. Hu, and T. Liu, “Generating persona consis- and M. T. Ribeiro, “Art: Automatic multi-step reasoning and tool-use tent dialogues by exploiting natural language inference,” Proceedings for large language models,” 2023. of the AAAI Conference on Artificial Intelligence, vol. 34, no. 05, pp. [171]Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang, “Hugginggpt: 8878–8885, Apr. 2020. Solving ai tasks with chatgpt and its friends in huggingface,” arXiv [152]O. Honovich, L. Choshen, R. Aharoni, E. Neeman, I. Szpektor, preprint arXiv:2303.17580, 2023.[172]Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, [189]D. Khashabi, S. Chaturvedi, M. Roth, S. Upadhyay, and D. Roth, S. Jin, E. Zhou et al., “The rise and potential of large language model “Looking beyond the surface:a challenge set for reading compre- based agents: A survey,” arXiv preprint arXiv:2309.07864, 2023. hension over multiple sentences,” in Proceedings of North American [173]L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, Chapter of the Association for Computational Linguistics (NAACL), J. Tang, X. Chen, Y. Lin et al., “A survey on large language model 2018. based autono
reading compre- based agents: A survey,” arXiv preprint arXiv:2309.07864, 2023. hension over multiple sentences,” in Proceedings of North American [173]L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, Chapter of the Association for Computational Linguistics (NAACL), J. Tang, X. Chen, Y. Lin et al., “A survey on large language model 2018. based autonomous agents,” arXiv preprint arXiv:2308.11432, 2023. [190]K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, [174]Z. Durante, Q. Huang, N. Wake, R. Gong, J. S. Park, B. Sarkar, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and R. Taori, Y. Noda, D. Terzopoulos, Y. Choi, K. Ikeuchi, H. Vo, L. Fei- J. Schulman, “Training verifiers to solve math word problems,” Fei, and J. Gao, “Agent ai: Surveying the horizons of multimodal CoRR, vol. abs/2110.14168, 2021. [Online]. Available: https: interaction,” arXiv preprint arXiv:2401.03568, 2024. //arxiv.org/abs/2110.14168 [175]B. Xu, Z. Peng, B. Lei, S. Mukherjee, Y. Liu, and D. Xu, “Rewoo: [191]D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, Decoupling reasoning from observations for efficient augmented lan- D. Song, and J. Steinhardt, “Measuring mathematical problem solving guage models,” 2023. with the MATH dataset,” CoRR, vol. abs/2103.03874, 2021. [Online]. [176]S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, Available: https://arxiv.org/abs/2103.03874 “React: Synergizing reasoning and acting in language models,” 2023. [192]R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, “Hellaswag: [177]V. Nair, E. Schumacher, G. Tso, and A. Kannan, “Dera: Enhanc- Can a machine really finish your sentence?” 2019. ing large language model completions with dialog-enabled resolving [193]P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, agents,” 2023. and O. Tafjord, “Think you have solved question answering? try [178]Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu, H. Chen, X. Yi, arc, the AI2 reasoning challenge,” CoRR, vol. abs/1803.05457, 2018. C. Wang, Y. Wang, W. Ye, Y. Zhang, Y. Chang, P. S. Yu, Q. Yang, [Online]. Available: http://arxiv.org/abs/1803.05457 and X. Xie, “A survey on evaluation of large language models,” 2023. [194]Y. Bisk, R. Zellers, R. L. Bras, J. Gao, and Y. Choi, “PIQA: [179]T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, reasoning about physical commonsense in natural language,” CoRR, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, vol. abs/1911.11641, 2019. [Online]. Available: http://arxiv.org/abs/ L. Jones, M. Kelcey, M.-W. Chang, A. M. Dai, J. Uszkoreit, 1911.11641 Q. Le, and S. Petrov, “Natural questions: A benchmark for [195]M. Sap, H. Rashkin, D. Chen, R. L. Bras, and Y. Choi, “Socialiqa: question answering research,” Transactions of the Association for Commonsense reasoning about social interactions,” CoRR, vo
Available: http://arxiv.org/abs/ L. Jones, M. Kelcey, M.-W. Chang, A. M. Dai, J. Uszkoreit, 1911.11641 Q. Le, and S. Petrov, “Natural questions: A benchmark for [195]M. Sap, H. Rashkin, D. Chen, R. L. Bras, and Y. Choi, “Socialiqa: question answering research,” Transactions of the Association for Commonsense reasoning about social interactions,” CoRR, vol. Computational Linguistics, vol. 7, pp. 452–466, 2019. [Online]. abs/1904.09728, 2019. [Online]. Available: http://arxiv.org/abs/1904. Available: https://aclanthology.org/Q19-1026 09728 [180]D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and [196]T. Mihaylov, P. Clark, T. Khot, and A. Sabharwal, “Can a suit of J. Steinhardt, “Measuring massive multitask language understanding,” armor conduct electricity? A new dataset for open book question 2021. answering,” CoRR, vol. abs/1809.02789, 2018. [Online]. Available: [181]J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, http://arxiv.org/abs/1809.02789 E. Jiang, C. Cai, M. Terry, Q. Le et al., “Program synthesis with large [197]S. Lin, J. Hilton, and O. Evans, “Truthfulqa: Measuring how models language models,” arXiv preprint arXiv:2108.07732, 2021. mimic human falsehoods,” arXiv preprint arXiv:2109.07958, 2021. [182]E. Choi, H. He, M. Iyyer, M. Yatskar, W.-t. Yih, Y. Choi, P. Liang, [198]Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and L. Zettlemoyer, “QuAC: Question answering in context,” in and C. D. Manning, “Hotpotqa: A dataset for diverse, explainable Proceedings of the 2018 Conference on Empirical Methods in Natural multi-hop question answering,” CoRR, vol. abs/1809.09600, 2018. Language Processing, E. Riloff, D. Chiang, J. Hockenmaier, and [Online]. Available: http://arxiv.org/abs/1809.09600 J. Tsujii, Eds. Brussels, Belgium: Association for Computational [199]Y. Zhuang, Y. Yu, K. Wang, H. Sun, and C. Zhang, “Toolqa: A Linguistics, Oct.-Nov. 2018, pp. 2174–2184. [Online]. Available: dataset for llm question answering with external tools,” arXiv preprint https://aclanthology.org/D18-1241 arXiv:2306.13304, 2023. [183]D. Hendrycks, S. Basart, S. Kadavath, M. Mazeika, A. Arora, E. Guo, [200]D. Chen, J. Bolton, and C. D. Manning, “A thorough examination C. Burns, S. Puranik, H. He, D. Song, and J. Steinhardt, “Measuring of the cnn/daily mail reading comprehension task,” in Association for coding challenge competence with apps,” NeurIPS, 2021. Computational Linguistics (ACL), 2016. [184]V. Zhong, C. Xiong, and R. Socher, “Seq2sql: Generating structured [201]R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang et al., “Abstractive text queries from natural language using reinforcement learning,” arXiv summarization using sequence-to-sequence rnns and beyond,” arXiv preprint arXiv:1709.00103, 2017. preprint arXiv:1602.06023, 2016. [185]M. Joshi, E. Choi, D. Weld, and L.
q2sql: Generating structured [201]R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang et al., “Abstractive text queries from natural language using reinforcement learning,” arXiv summarization using sequence-to-sequence rnns and beyond,” arXiv preprint arXiv:1709.00103, 2017. preprint arXiv:1602.06023, 2016. [185]M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer, “TriviaQA: [202]Y. Bai and D. Z. Wang, “More than reading comprehension: A survey A large scale distantly supervised challenge dataset for reading on datasets and metrics of textual question answering,” arXiv preprint comprehension,” in Proceedings of the 55th Annual Meeting of the arXiv:2109.12264, 2021. Association for Computational Linguistics (Volume 1: Long Papers), [203]H.-Y. Huang, E. Choi, and W.-t. Yih, “Flowqa: Grasping flow in R. Barzilay and M.-Y. Kan, Eds. Vancouver, Canada: Association history for conversational machine comprehension,” arXiv preprint for Computational Linguistics, Jul. 2017, pp. 1601–1611. [Online]. arXiv:1810.06683, 2018. Available: https://aclanthology.org/P17-1147 [204]S.Lee,J.Lee,H.Moon,C.Park,J.Seo,S.Eo,S.Koo,andH.Lim,“A [186]G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy, “RACE: Large-scale survey on evaluation metrics for machine translation,” Mathematics, ReAding comprehension dataset from examinations,” in Proceedings vol. 11, no. 4, p. 1006, 2023. of the 2017 Conference on Empirical Methods in Natural Language [205]J. Li, X. Cheng, W. X. Zhao, J.-Y. Nie, and J.-R. Wen, “Halueval: Processing, M. Palmer, R. Hwa, and S. Riedel, Eds. Copenhagen, A large-scale hallucination evaluation benchmark for large language Denmark: Association for Computational Linguistics, Sep. 2017, pp. models,”in Proceedingsofthe2023ConferenceonEmpiricalMethods 785–794. [Online]. Available: https://aclanthology.org/D17-1082 in Natural Language Processing, 2023, pp. 6449–6464. [187]P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “SQuAD: 100,000+ [206]Simon Mark Hughes, “Hughes hallucination evaluation model questions for machine comprehension of text,” in Proceedings of (hhem) leaderboard,” 2024, https://huggingface.co/spaces/vectara/ the 2016 Conference on Empirical Methods in Natural Language Hallucination-evaluation-leaderboard, Last accessed on 2024-01-21. Processing, J. Su, K. Duh, and X. Carreras, Eds. Austin, Texas: Association for Computational Linguistics, Nov. 2016, pp. 2383–2392. [207]J. Kaddour, J. Harris, M. Mozes, H. Bradley, R. Raileanu, and [Online]. Available: https://aclanthology.org/D16-1264 R. McHardy, “Challenges and applications of large language models,” [188]C. Clark, K. Lee, M. Chang, T. Kwiatkowski, M. Collins, and arXiv preprint arXiv:2307.10169, 2023. K. Toutanova, “Boolq: Exploring the surprising difficulty of natural [208]S. Gunasekar, Y. Zhang, J. Aneja, C. C. T. Mendes, A. Del Giorno, yes/no questions,” CoRR, vol. abs/1905.10044, 2019. [Online]. S. Gopi, M. Javaheripi, P. Kauffmann, G. de Rosa, O. Saarikivi e
Available: http://arxiv.org/abs/1905.10044
[208] S. Gunasekar, Y. Zhang, J. Aneja, C. C. T. Mendes, A. Del Giorno, S. Gopi, M. Javaheripi, P. Kauffmann, G. de Rosa, O. Saarikivi et al., "Textbooks are all you need," arXiv preprint arXiv:2306.11644, 2023.
[209] Y. Li, S. Bubeck, R. Eldan, A. Del Giorno, S. Gunasekar, and Y. T. Lee, "Textbooks are all you need ii: phi-1.5 technical report," arXiv preprint arXiv:2309.05463, 2023.
[210] M. Poli, S. Massaroli, E. Nguyen, D. Y. Fu, T. Dao, S. Baccus, Y. Bengio, S. Ermon, and C. Ré, "Hyena hierarchy: Towards larger convolutional language models," 2023.
[211] M. Poli, J. Wang, S. Massaroli, J. Quesnelle, E. Nguyen, and A. Thomas, "StripedHyena: Moving Beyond Transformers with Hybrid Signal Processing Models," 12 2023. [Online]. Available: https://github.com/togethercomputer/stripedhyena
[212] D. Y. Fu, S. Arora, J. Grogan, I. Johnson, S. Eyuboglu, A. W. Thomas, B. Spector, M. Poli, A. Rudra, and C. Ré, "Monarch mixer: A simple sub-quadratic gemm-based architecture," 2023.
[213] G. J. McLachlan, S. X. Lee, and S. I. Rathnayake, "Finite mixture models," Annual review of statistics and its application, vol. 6, pp. 355–378, 2019.
[214] H. Liu, C. Li, Q. Wu, and Y. J. Lee, "Visual instruction tuning," arXiv preprint arXiv:2304.08485, 2023.
[215] S. Liu, H. Cheng, H. Liu, H. Zhang, F. Li, T. Ren, X. Zou, J. Yang, H. Su, J. Zhu, L. Zhang, J. Gao, and C. Li, "Llava-plus: Learning to use tools for creating multimodal agents," arXiv preprint arXiv:2311.05437, 2023.
[216] S. Wu, H. Fei, L. Qu, W. Ji, and T.-S. Chua, "Next-gpt: Any-to-any multimodal llm," arXiv preprint arXiv:2309.05519, 2023.
[217] N. N. Khasmakhi, M. Asgari-Chenaghlu, N. Asghar, P. Schaer, and D. Zühlke, "Convgenvismo: Evaluation of conversational generative vision models," 2023.
[218] N. Alshahwan, J. Chheda, A. Finegenova, B. Gokkaya, M. Harman, I. Harper, A. Marginean, S. Sengupta, and E. Wang, "Automated unit test improvement using large language models at meta," arXiv preprint arXiv:2402.09171, 2024.
[219] L. Sun, Y. Huang, H. Wang, S. Wu, Q. Zhang, C. Gao, Y. Huang, W. Lyu, Y. Zhang, X. Li et al., "Trustllm: Trustworthiness in large language models," arXiv preprint arXiv:2401.05561, 2024.
[220] Microsoft. Deepspeed. [Online]. Available: https://github.com/microsoft/DeepSpeed
[221] HuggingFace. Transformers. [Online]. Available: https://github.com/huggingface/transformers
[222] Nvidia. Megatron. [Online]. Available: https://github.com/NVIDIA/Megatron-LM
[223] BMTrain. Bmtrain. [Online]. Available: https://github.com/OpenBMB/BMTrain
[224] EleutherAI. gpt-neox. [Online]. Available: https://github.com/EleutherAI/gpt-neox
[225] microsoft. Lora. [Online]. Available: https://github.com/microsoft/LoRA
[226] ColossalAI. Colossalai. [Online]. Available: https://github.com/hpcaitech/ColossalAI
[227] FastChat. Fastchat. [Online]. Available: https://github.com/lm-sys/FastChat
[228] skypilot. skypilot. [Online]. Available: https://github.com/skypilot-org/skypilot
[229] vllm. vllm. [Online]. Available: https://github.com/vllm-project/vllm
[230] huggingface. text-generation-inference. [Online]. Available: https://github.com/huggingface/text-generation-inference
[231] langchain. langchain. [Online]. Available: https://github.com/langchain-ai/langchain
[232] bentoml. Openllm. [Online]. Available: https://github.com/bentoml/OpenLLM
[233] embedchain. embedchain. [Online]. Available: https://github.com/embedchain/embedchain
[234] microsoft. autogen. [Online]. Available: https://github.com/microsoft/autogen
[235] babyagi. babyagi. [Online]. Available: https://github.com/yoheinakajima/babyagi
[236] guidance. guidance. [Online]. Available: https://github.com/guidance-ai/guidance
[237] prompttools. prompttools. [Online]. Available: https://github.com/hegelai/prompttools
[238] promptfoo. promptfoo. [Online]. Available: https://github.com/promptfoo/promptfoo
[239] facebook. faiss. [Online]. Available: https://github.com/facebookresearch/faiss
[240] milvus. milvus. [Online]. Available: https://github.com/milvus-io/milvus
[241] qdrant. qdrant. [Online]. Available: https://github.com/qdrant/qdrant
[242] weaviate. weaviate. [Online]. Available: https://github.com/weaviate/weaviate
[243] llama index. llama-index. [Online]. Available: https://github.com/run-llama/llama_index

APPENDIX

1. Open Source Toolkits For LLM Development and Deployment

There are various frameworks and libraries developed for LLM training, evaluation, and deployment, and covering every single framework is out of this paper's scope. But we try to provide a brief introduction of some of the most popular ones, grouped into different categories.

A. LLM Training/Inference Frameworks

Some of the popular frameworks which are useful for LLM training include (note that some of them can be used beyond LLM training too):

DeepSpeed [220] is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. DeepSpeed has enabled some of the world's most powerful language models, such as MT-530B and BLOOM. It is an easy-to-use deep learning optimization software suite that powers unprecedented scale and speed for both training and inference.

Transformers [221] is a library by HuggingFace that provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. Using pretrained models, one can reduce compute costs and carbon footprint, and save the time and resources required to train a model from scratch.
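As a minimal illustration of the Transformers workflow described above, the following sketch loads a pretrained causal language model and generates a continuation for a prompt; the checkpoint name is only an example, and any causal LM hosted on the Hugging Face Hub could be substituted.

```python
# Minimal sketch (illustrative): greedy text generation with HuggingFace Transformers.
# "gpt2" is used only as a small example checkpoint; any causal LM can be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)  # generate up to 30 new tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```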
Megatron-LM [222] is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. It contains efficient, model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision.

BMTrain [223] is an efficient large model training toolkit that can be used to train large models with tens of billions of parameters. It can train models in a distributed manner while keeping the code as simple as stand-alone training.

GPT-NeoX [224] leverages many of the same features and technologies as the popular Megatron-DeepSpeed library, but with substantially increased usability and novel optimizations.

LoRA [225] is a library that provides support for Low-Rank Adaptation of Large Language Models. It reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency. LoRA also outperforms several other adaptation methods, including adapter, prefix-tuning, and fine-tuning.
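To make the low-rank adaptation idea concrete, the sketch below wraps a pretrained model with LoRA adapters using the Hugging Face PEFT library, one widely used implementation of the LoRA method (the microsoft/LoRA repository itself exposes a similar loralib API). The rank, scaling factor, and target modules shown are illustrative choices, not recommendations.

```python
# Minimal sketch (illustrative): adding LoRA adapters to a pretrained model with PEFT.
# Hyperparameters and target modules are examples only and are model dependent.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA matrices are marked trainable
```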
ColossalAI [226] provides a collection of parallel components. It aims to support developers in writing distributed deep learning models just as they would write a model on their laptop, and provides user-friendly tools to kickstart distributed training and inference in a few lines. In terms of parallelism strategies, it supports Data Parallelism, Pipeline Parallelism, Sequence Parallelism, Zero Redundancy Optimizer (ZeRO) [140], and Auto-Parallelism.

B. Deployment Tools

We provide an overview of some of the most popular LLM deployment tools here.

FastChat [227] is an open platform for training, serving, and evaluating large language model based chatbots. FastChat's core features include the training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench), and a distributed multi-model serving system with a web UI and OpenAI-compatible RESTful APIs.

Skypilot [228] is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution.

vLLM [229] is a fast and easy-to-use library for LLM inference and serving. vLLM seamlessly supports many Hugging Face models, including the following architectures: Aquila, Baichuan, BLOOM, ChatGLM, DeciLM, Falcon, GPT BigCode, LLaMA, LLaMA 2, Mistral, Mixtral, MPT, OPT, Qwen, Yi, and many more.
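The following sketch shows the typical offline batched-inference pattern with vLLM; the model name is illustrative, and any supported Hugging Face checkpoint could be used instead.

```python
# Minimal sketch (illustrative): offline batched inference with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example checkpoint; any supported model works
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "Large language models are",
    "Retrieval-augmented generation is useful because",
]
outputs = llm.generate(prompts, sampling_params)
for request_output in outputs:
    print(request_output.outputs[0].text)  # best completion for each prompt
```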
text-generation-inference [230] is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and more.

LangChain [231] is a framework for developing applications powered by language models. It enables applications that:

• Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
• Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

OpenLLM [232] is an open-source platform designed to facilitate the deployment and operation of large language models (LLMs) in real-world applications. With OpenLLM, one can run inference on any open-source LLM, deploy it on the cloud or on-premises, and build powerful AI applications.

Embedchain [233] is an open-source RAG framework that makes it easy to create and deploy AI apps. Embedchain streamlines the creation of RAG applications, offering a seamless process for managing various types of unstructured data. It efficiently segments data into manageable chunks, generates relevant embeddings, and stores them in a vector database for optimized retrieval.

Autogen [234] is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

BabyAGI [235] is an autonomous Artificial Intelligence agent that is designed to generate and execute tasks based on given objectives. It harnesses cutting-edge technologies from OpenAI, Pinecone, LangChain, and Chroma to automate tasks and achieve specific goals.

C. Prompting Libraries

Guidance [236] is a programming paradigm that offers superior control and efficiency compared to conventional prompting and chaining. It allows users to constrain generation (e.g., with regex and CFGs) as well as to interleave control (conditionals, loops) and generation seamlessly.

PromptTools [237] offers a set of open-source, self-hostable tools for experimenting with, testing, and evaluating LLMs, vector databases, and prompts. The core idea is to enable developers to evaluate prompts using familiar interfaces like code, notebooks, and a local playground.

PromptBench is a PyTorch-based Python package for evaluation of Large Language Models (LLMs). It provides user-friendly APIs for researchers to conduct evaluations of LLMs.

Promptfoo [238] is a tool for testing and evaluating LLM output quality. It systematically tests prompts, models, and RAGs with predefined test cases.

D. VectorDB

Faiss [239] is a library developed by Facebook AI Research that provides efficient similarity search and clustering of dense vectors. It is designed for use with large-scale, high-dimensional data and supports several index types and algorithms for various use cases.

Milvus [240] is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible and provides a consistent user experience regardless of the deployment environment.

Qdrant [241] is a vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points (vectors with an additional payload), and is tailored to support extended filtering.

Weaviate [242] is an open-source, GraphQL-based vector search engine that enables similarity search on high-dimensional data. While it is open-source, the commercial version offers additional features, support, and managed services.

Some of the other popular options include LlamaIndex [243] and Pinecone.
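As a small illustration of the vector-search workflow these databases support, the sketch below builds an exact-search Faiss index over random vectors standing in for text embeddings; a production system would instead use embeddings produced by an encoder model and typically an approximate index (e.g., IVF or HNSW) at scale.

```python
# Minimal sketch (illustrative): exact nearest-neighbor search with Faiss.
# Random vectors stand in for document and query embeddings.
import numpy as np
import faiss

d = 128  # embedding dimensionality
corpus_embeddings = np.random.random((10000, d)).astype("float32")
query_embeddings = np.random.random((5, d)).astype("float32")

index = faiss.IndexFlatL2(d)   # exact L2 search; IVF/HNSW indexes trade accuracy for speed
index.add(corpus_embeddings)   # store the corpus vectors
distances, ids = index.search(query_embeddings, 4)  # top-4 neighbors per query
print(ids)
```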
THE RL/LLM TAXONOMY TREE: REVIEWING SYNERGIES BETWEEN REINFORCEMENT LEARNING AND LARGE LANGUAGE MODELS

Moschoula Pternea (Microsoft, mpternea@microsoft.com), Prerna Singh (Microsoft, prernasingh@microsoft.com), Abir Chakraborty (Microsoft, abir.chakraborty@microsoft.com), Yagna Oruganti (Microsoft, yaorugan@microsoft.com), Mirco Milletari (Microsoft, mimillet@microsoft.com), Sayli Bapat (Microsoft, saylibapat@microsoft.com), Kebei Jiang (Microsoft, kebei.jiang@microsoft.com)

ABSTRACT

In this work, we review research studies that combine Reinforcement Learning (RL) and Large Language Models (LLMs), two areas that owe their momentum to the development of deep neural networks. We propose a novel taxonomy of three main classes based on the way that the two model types interact with each other. The first class, RL4LLM, includes studies where RL is leveraged to improve the performance of LLMs on tasks related to Natural Language Processing. RL4LLM is divided into two sub-categories depending on whether RL is used to directly fine-tune an existing LLM or to improve the prompt of the LLM. In the second class, LLM4RL, an LLM assists the training of an RL model that performs a task that is not inherently related to natural language. We further break down LLM4RL based on the component of the RL training framework that the LLM assists or replaces, namely reward shaping, goal generation, and policy function. Finally, in the third class, RL+LLM, an LLM and an RL agent are embedded in a common planning framework without either of them contributing to training or fine-tuning of the other. We further branch this class to distinguish between studies with and without natural language feedback. We use this taxonomy to explore the motivations behind the synergy of LLMs and RL and explain the reasons for its success, while pinpointing potential shortcomings and areas where further research is needed, as well as alternative methodologies that serve the same goal.

1 Introduction

Reinforcement Learning (RL) and Large Language Models (LLMs) are experiencing tremendous progress over recent years, with the common factor behind the growth of both Artificial Intelligence domains being the development of Deep Neural Networks (DNNs). The foundations of Markov Decision Processes (MDPs), which are at the core of every RL model, can practically be traced back to the mid-20th century [12], when they originated in the field of stochastic control [132] with the goal to model sequential decision making in uncertain environments. Reinforcement Learning proposed a formal framework for approaching sequential decision making problems by adapting concepts from behavioral psychology, where an agent can learn by interacting with its environment and utilizing its past experience [118, 44]. However, it was the development of Deep Reinforcement Learning [44] that addressed the key challenges of traditional value and policy function approximations by tackling the curse of dimensionality through efficient state representation, better generalization, and sample efficiency.
As a result, Deep RL algorithms have become increasingly popular over recent years, with applications in control systems, robotics, autonomous vehicles, healthcare, and finance, to name only a few. Similarly, Natural Language Processing (NLP) problems, like speech recognition, natural language understanding, machine translation, text summarization, etc., have long been successfully solved by machine learning algorithms, ranging from Naïve Bayes and Maximum Entropy Models to Decision Trees and Random Forests. [18] attributed the impressive development of NLP methods to three overlapping curves – Syntactics, Semantics, and Pragmatics – and foresaw the eventual evolution of NLP research to natural language understanding. Deep learning revolutionized NLP tasks with various neural network architectures, such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) and, more recently, Transformers [123]. Eventually, Deep Neural Networks are opening new avenues in the field of Natural Language Processing with the development of LLMs, which are language models trained on massive amounts of text using specialized hardware, like GPUs and TPUs, to perform complicated NLP tasks.

Apart from owing their growth to the development of Deep Neural Networks, LLMs and RL are intertwined from a theoretical and practical perspective because they can both be formulated and approached as sequential modeling problems: LLMs generate text in a sequential decision-making framework, selecting the most likely next word or phrase. As noted by [105], "if we view text generation as a sequential decision-making problem, reinforcement learning (RL) appears to be a natural conceptual framework". On the other side, RL deals inherently with control problems, where an agent must select the most appropriate action, interact with its environment, observe the result of its action, and continue in a loop of state-observation-action-reward for a possibly infinite horizon.

Motivated by the impressive prominence of Reinforcement Learning and Large Language Models, along with the impressive range of practical applications they both present, we perform a comprehensive literature review of studies that embed Large Language Models and Reinforcement Learning agents in a common computational framework. More specifically, we are proposing a novel taxonomy to classify those studies based on the way that the LLM and the RL agent interact in the framework. With this taxonomy as the backbone of our review, we break down the state-of-the-art frameworks into their fundamental components and present details to describe the ways in which the two model types collaborate in each study. In parallel, we explain the key motivational factors and the reasons behind the success of this collaboration. We also review the potential limitations of this synergy and present alternative state-of-the-art methods that, while not part of this taxonomy, have been developed with the intent to address the same issues as the studies that we are focusing on. This thorough categorization will help researchers obtain a better understanding of the dynamics of this synergy, explain trends and opportunities in the intersection of RL and LLMs, and serve as a starting point for the development of novel AI frameworks that combine the best of both these worlds.
The rest of this paper is structured as follows: in section 2, we provide the fundamental terms and concepts around Reinforcement Learning, transformers, and LLMs, to facilitate the reader in understanding the material that follows, and outline the scope and contributions of this study. In section 3, we provide an overview of the proposed taxonomy. Sections 4, 5 and 6 are dedicated to the main classes of our proposed taxonomy, corresponding to RL4LLM, LLM4RL, and RL+LLM, respectively. In section 7, we discuss the emerging patterns of this taxonomy and the reasons behind the success of this synergy, as well as shortcomings and alternative methods to achieve the same goals. Finally, in section 8 we summarize our findings and conclusions and propose paths for future research.
2 Background, State-of-Art, Scope, and Contributions

2.1 Overview of RL and LLMs

Reinforcement learning encompasses a range of algorithms created to tackle problems that require a series of decisions, and it differs significantly from both supervised and unsupervised learning methods: it requires the learning system, also referred to as an agent, to independently determine the best sequence of actions to achieve its goal through interaction with its environment. Reinforcement methods are primarily divided into three categories: dynamic programming, Monte Carlo methods, and temporal difference methods. All these methods present the decision-making issue as a Markov decision process (MDP), a mathematical approach to solving sequential decision-making problems that involves a state set S, an action set A, a transition function T, and a reward function R. The goal of an MDP (S, A, T, R) is to determine an optimal policy function π, which outlines the agent's behavior at any given time. Essentially, a policy maps the set of states S perceived from the environment to a set of actions A that should be performed in those states. The objective of the agent is to maximize a cumulative reward r ∈ R by selecting the actions to perform in each state s. When in state s, the agent performs action a, receives reward r from its environment, and then moves to the next state s′. Each step can therefore be represented as a transition tuple (s, a, r, s′). The process to estimate the policy π depends on the algorithm in use and the specifics of the problem. In certain instances, especially when the state and action spaces are tractable, the policy can be stored as a lookup table, while in others, a function approximator (such as a neural network) is used.

Within the realm of Natural Language Processing (NLP), Large Language Models (LLMs) have become a ubiquitous component. The primary role of a language model is to establish a probability distribution across word sequences, a process achieved by the application of the chain rule of probability to dissect the joint probability of a word sequence into conditional probabilities. Language models may be unidirectional, forecasting future words based on past ones as seen in n-gram models, or bidirectional, making predictions for a word based on both antecedent and subsequent words, as exemplified by Transformer models. Owing to advancements in deep learning, neural language models have seen a surge in popularity. An LLM makes use of a specific kind of neural network known as a transformer [123], which uses a mechanism called attention to weigh the influence of different words when producing an output. The term "large" in this context signifies the substantial number of parameters these models hold. LLMs are capable of responding to queries, authoring essays, summarizing texts, translating languages, and even generating poetry. Some of the most popular LLMs include BERT [37], GPT [16], PaLM [32], and LaMDA [119].
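To make the MDP machinery above concrete, the following self-contained sketch runs tabular Q-learning (a temporal-difference method) on a toy five-state chain environment. The environment, the hyperparameters, and the uniform-random behavior policy are illustrative assumptions made for this overview, not part of any study reviewed here.

```python
import numpy as np

class ChainEnv:
    """Toy 5-state chain MDP used only to illustrate the (s, a, r, s') loop:
    action 1 moves right, action 0 moves left; reaching the last state pays reward 1."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.s = 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + 1, self.n_states - 1) if a == 1 else max(self.s - 1, 0)
        done = self.s == self.n_states - 1
        return self.s, (1.0 if done else 0.0), done

def q_learning(env, n_states, n_actions, episodes=300, alpha=0.2, gamma=0.99):
    """Tabular Q-learning: the policy is stored as a lookup table Q[s, a].
    Q-learning is off-policy, so the behavior policy here is uniform random,
    while the learned greedy policy is read off the table at the end."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = np.random.randint(n_actions)      # random exploration
            s_next, r, done = env.step(a)         # one transition tuple (s, a, r, s')
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return np.argmax(Q, axis=1)                   # greedy policy derived from Q

print(q_learning(ChainEnv(), n_states=5, n_actions=2))  # learns to move right
```

In larger or continuous state spaces, the lookup table is replaced by a function approximator such as a neural network, as noted above.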
2.2 State-of-Art Review Studies

As rapidly advancing fields with a wide range of applications, both Reinforcement Learning and Natural Language Processing have been the focus of numerous studies that aim to synthesize and evaluate state-of-the-art research in each area.

Since it first emerged, RL has been of particular interest to researchers in computer science, robotics, and control, and, as a result, numerous surveys on RL have been published, ranging from general overviews of RL [63, 5] to comprehensive reviews that focus on a particular technique (Offline RL, [98]; Meta-Reinforcement Learning, [11]; RL on graphs, [84]; Evolutionary RL, [7]; Hierarchical RL, [94]; Multi-Agent Deep RL, [51, 40]), application (healthcare, [140]; robotics, [48]; combinatorial optimization, [76]; generative AI, [20]), or learning assumptions (dynamically varying environments, [91]). Owing to the rapid emergence of LLMs, we are also beginning to witness review papers dedicated to this topic, like the comprehensive review on RLHF by [116]. A similar trend can be observed in Natural Language Processing, with numerous examples of survey studies providing an overall study of the concepts and methods in the field [120], particularly since the introduction of deep learning for NLP [89, 120, 31]. Similarly to the case of RL, literature review studies specialize by application (e.g., healthcare, [133]; fake news detection, [88]; bioinformatics, [143]) or focus on particular methods (pretrained models, [100]; graphs, [82]; etc.).

Not surprisingly, LLMs themselves, which lie at the intersection of Natural Language Processing and Reinforcement Learning, have attracted the attention of researchers worldwide, resulting already in an impressive wealth of comprehensive literature review publications, ranging from general reviews [148, 137, 79, 139, 74] to surveys focusing on different aspects of LLMs, like evaluation [53, 23], alignment with humans [111, 128], explainability [147], Responsible AI considerations [47], knowledge acquisition and updating [19, 126, 92], as well as using LLMs for specific applications like information retrieval [151], natural language understanding [39], instruction tuning [144], software engineering [124, 43], recommendation systems [134, 70, 72], opinion prediction [65], and other applications.

As will be explained in more detail in subsection 2.3, this survey examines Reinforcement Learning and Large Language Models from a completely different angle compared to the aforementioned review papers, since it focuses exclusively on studies where RL and LLMs are both indispensable components of the same computational framework.

2.3 Scope of This Study

As explained in section 1, we are presenting a survey on studies that combine Reinforcement Learning and Large Language Models in a common modeling framework and we are proposing a new taxonomy to classify them. The taxonomy is visualized as the RL/LLM Taxonomy Tree (Fig. 3), which maps each study to a tree node, according to the details of the synergy between the two models.

Although Reinforcement Learning – in its RLHF form – is an essential component of any Large Language Model, our review is only concerned with studies that involve already trained LLMs, which are then either improved and fine-tuned with RL - beyond the RLHF that was used to train the original model - or combined with some RL agent to perform a downstream task. Studies where Reinforcement Learning is limited to training the original LLM are beyond the scope of our taxonomy. In addition, the literature exhibits state-of-the-art survey papers that focus on the use of LLMs for tasks that are not related to natural language, including the augmentation of LLMs with reasoning and other skills [58, 78, 130], multimodal LLMs [127], and autonomous agents [125, 73].
While in this survey we are also reviewing studies where LLMs are used to perform general tasks (sections 4 and 6), we are exclusively focusing on those where the RL agent, rather than an LLM, is performing a downstream task, and the LLM assists the framework either at training (LLM4RL class, section 5) or at inference (RL+LLM class, section 6). Therefore, contrary to [73] and [125], we are not concerned with evaluating the performance of the LLMs as autonomous agents. For the sake of completeness, we still discuss the use of LLMs for tasks that are not related to natural language, along with multimodal LLMs, in section 7.
Finally, the use of pretrained language models to aid RL agents through reward design [21], policy priors [30], or policy transfer [62] preceded the development of LLMs. While we refer to those studies for the sake of completeness, our taxonomy and subsequent analysis only capture those studies where the language model used is an LLM.

2.4 Contributions of This Study

To the best of our knowledge, this is the first study that attempts a thorough review of the state-of-the-art research on the intersection of Large Language Models and Reinforcement Learning. To this direction, we have identified 24 publications that combine LLMs and RL and fall within the scope of this review as described in subsection 2.3. Our goal is to examine how the two distinct models are embedded in a common framework to achieve a specific task. A set of common patterns emerged from the review of those studies, which helped us categorize the studies according to the way that the two models collaborate, as well as the nature of the end task to be achieved. Therefore, we are proposing a novel, systematic taxonomy of those studies that helps researchers understand the scope and applications of the various synergies between RL and LLMs. First, we follow the proposed taxonomy to individually present the key features, goals, and highlights of each study that we have reviewed. Then, we shift our focus to obtaining a global perspective on the collective goals of each study category and explain their strengths and potential shortcomings. In summary, the contribution of our work is threefold:

1. We collect, review, and analyze state-of-the-art studies which combine Reinforcement Learning and Large Language Models in the same framework.
2. We propose a novel taxonomy to explain the synergy between RL and LLMs. In particular, we visualize the classification of the RL/LLM studies using the RL/LLM Taxonomy Tree, which includes three main classes that collectively capture any problem that utilizes both RL and LLMs. The criterion for generating the three classes is whether RL is utilized to improve the performance of an LLM (class 1 – RL4LLM), or an LLM is used to train an RL agent to perform a non-NLP task (class 2 – LLM4RL), or whether the two models are trained independently and then embedded in a common framework to achieve a planning task (class 3 – RL+LLM). The order in which we present the individual studies (contribution 1) is based on this taxonomy.
3. We utilize our findings from the taxonomy to discuss the applications of this synergy, explain the reasons for its success, identify strengths and potential weaknesses, and investigate alternative ways towards achieving the same tasks.

3 The RL/LLM Taxonomy Tree

Even though LLMs are an emerging field, there exists a substantial body of literature dedicated to their intersection with Reinforcement Learning. We can readily discern a top-level classification based on the interplay between the two models – RL agent and LLM – as the key classification criterion. We have therefore identified the following classes of studies, which constitute the core classes of our taxonomy:

1. RL4LLM. These studies use RL to improve the performance of the LLM in an NLP task.
2. LLM4RL. These studies use an LLM to supplement the training of an RL model that performs a general task that is not inherently related to natural language.
3. RL+LLM. These studies combine RL models with LLM models to plan over a set of skills, without using either model to train or fine-tune the other.

The frameworks belonging to the RL4LLM class start from a trained LLM and subsequently utilize RL to modify it, with the goal to improve its performance on specific tasks or align it to user intent and ethical AI standards. On the contrary, studies in the LLM4RL category utilize the LLM as a component of an RL training framework with the goal of helping an RL agent perform a specific task. Finally, RL+LLM involves the two models as independent components of a common framework, without either of them directly participating in the training or tuning of the other in any way.

Figure 1: The RL/LLM Taxonomy Tree.

Interestingly, the way that the RL agent and the LLM interact in each case is directly tied to the goal of the synergy, which helps us identify a mapping between the structure of each framework and its end goal. More specifically, RL4LLM studies aim to improve the performance of the LLM in a downstream task that is related to natural language processing, such as text summarization, question-answering, or conversation. In turn, the goal of LLM4RL frameworks is to improve the training efficiency or performance of a control task that would still rely on RL in the absence of the LLM and is therefore generally not related to NLP. Finally, studies of RL+LLM generally use the LLM to plan over individual skills that have been learned through RL.

Studies within each top-level class of the RL/LLM Taxonomy Tree exhibit significant variety in the way that the RL agent and the LLM interact in each framework, thus requiring further refinement of the taxonomy. Specifically, studies within RL4LLM can be broken down into the following subcategories:

• RL4LLM-Fine-tuning: Encompasses studies where RL is used to perform model fine-tuning, which involves tweaking the model parameters until the model achieves the desired performance. This subclass can be further refined according to the presence or absence of human feedback.
• RL4LLM-Prompt Engineering: Includes studies where RL is used to iteratively update the prompt of the LLM until the model achieves the desired performance.

Similarly, LLM4RL can be further divided according to the component of the RL framework that is assisted, replaced, or represented by the LLM, namely:

• LLM4RL-Reward: Includes studies where the LLM is used to design the reward function of the RL agent.
• LLM4RL-Goal: Includes studies where the LLM is utilized for goal setting, which applies to goal-conditioned RL settings.
• LLM4RL-Policy: Includes studies where the LLM represents the policy function to be learned, or directly assists its training or pretraining.

Finally, the RL+LLM class is branched into two subclasses:

• RL+LLM-No Language Feedback: studies where the prompt of the LLM is updated during the planning process.
• RL+LLM-With Language Feedback: studies where the prompt of the LLM stays fixed throughout the planning process.

The RL/LLM Taxonomy Tree is visualized in Fig. 3, while Table 1 maps the research studies to the particular subclasses they belong to, corresponding to the leaf nodes of the RL/LLM Taxonomy Tree. To the best of our knowledge, the classification proposed by the RL/LLM Taxonomy Tree is exhaustive and captures all state-of-the-art studies that fall within the scope of our taxonomy (2.3) as of today. We have so far identified 24 studies that fall under the scope of this review and can directly be mapped to RL/LLM Taxonomy Tree leaves.
Therefore, it has the potential to serve as a reference and mapping tool for researchers and practitioners of Artificial Intelligence. In addition, as researchers continue developing novel ways to combine Reinforcement Learning with Large Language Models, the tree can be potentially expanded with new nodes.

Table 1: Mapping Studies to RL/LLM Taxonomy Tree Leaves
• RL4LLM – Fine-tuning – With Human Feedback: Ouyang et al. [90]; Bai et al. [8]; Hu et al. [57]
• RL4LLM – Fine-tuning – Without Human Feedback: Bai et al. [9]; Ramamurthy et al. [105]; Ghalandari et al. [49]
• RL4LLM – Prompt: Zhang et al. [145]; Deng et al. [36]; Sun [115]; Perez et al. [96]
• LLM4RL – Reward: Kwon et al. [67]; Xie et al. [138]; Ma et al. [75]; Song et al. [113]
• LLM4RL – Goal: Quartey et al. [101]; Du et al. [41]
• LLM4RL – Policy: Reid et al. [106]; Hu and Sadigh [56]; Zhang and Lu [146]; Carta et al. [22]
• RL+LLM – Without Language Feedback: Yuan et al. [142]; Ahn et al. [3]
• RL+LLM – With Language Feedback: Huang et al. [60]; Dasgupta et al. [34]

By bridging the gap between RL and LLMs, this taxonomy can be a valuable resource for researchers who are experienced in one of the two domains and are looking to venture into the other, and vice versa, as well as for anyone who wishes to explore whether a framework combining RL and LLMs is a promising solution for the problem they are seeking to address. More importantly, the taxonomy guides researchers as they are shaping the requirements of their application,
whether those are related to model performance, training efficiency, or responsible AI considerations.

4 RL4LLM: Using Reinforcement Learning to Enhance Large Language Models

As explained in section 2, Reinforcement Learning with Human Feedback is an integral part of the training of Large Language Models. Nevertheless, there is a wealth of studies where the synergy between RL and LLMs extends beyond training of the LLM. Therefore, the RL4LLM class includes a significant body of work where Reinforcement Learning is used to further refine an already trained LLM and improve its performance on NLP-related tasks. This improvement in performance is measured according to the goal of each study, with the most common goals being the following:

• Improved performance in downstream NLP tasks [36, 105, 49, 145, 115].
• Alignment with intent, values, and goals of the user [105, 90].
• Alignment with Responsible AI considerations [96, 8, 9].
• Reduction of data and resource requirements [145, 115].

RL4LLM studies can be further divided into two major sub-categories: a) studies that use knowledge from LLMs to build an RL model to fine-tune an LLM, or part of it, to perform a downstream NLP task, and b) studies using RL to design prompts to query LLMs. A summary of the natural language application of each framework is shown in Table 3.

4.1 RL4LLM-Fine-Tuning

This class includes studies where RL is used for directly fine-tuning an existing LLM to make it more aligned with specific goals by updating an enormous set of LLM parameters. The presence of human feedback for fine-tuning serves as the criterion for further branching the RL4LLM-Fine-tuning node of our taxonomy tree, resulting in two new subclasses: RL4LLM-Fine-tuning with human feedback (4.1.1) and RL4LLM-Fine-tuning without human feedback (4.1.2).

4.1.1 With human feedback

Human input can be critical when assessing the quality of the LLM output in terms of harmlessness. Preventing the generation of toxic and harmful content has been the focus of multiple studies even before Large Language Models. For example, [114] trained an RL agent to predict which summary of a given Reddit post is most likely to be preferred by a human. The authors used a supervised learning model as a reward function that selects a summary among multiple candidate summaries. The result of the summary selection is then used to fine-tune the RL agent using PPO. The authors found that optimizing the reward model resulted in better human-preferred summaries in comparison to using typical NLP evaluation metrics like ROUGE. In a similar manner, the idea of fine-tuning language models through RL agents extended naturally to the realm of LLMs. Human feedback can generally be embedded in the fine-tuning framework through the construction of the training dataset for the policy and reward models: for training the policy model, humans demonstrate the target behavior of the LLM, while for the reward model, they rank the alternative outputs of the LLM based on how well they align with the intent of the framework.
For a study to be classified as RL4LLM-Fine-Tuning with human feedback, it should include human feedback in the training dataset of at least the initial policy model or the reward model; otherwise, it belongs to RL4LLM-Fine-tuning without human feedback.

Ouyang et al. [90] developed InstructGPT, an LLM capable of capturing and following the intent of the user without producing untruthful, toxic, or generally unhelpful content. InstructGPT consists of three steps. The first includes the training of the policy model as a supervised learning model.
The training dataset is generated by collecting demonstrations of the desired model behavior. To generate each new data point, a prompt is sampled from a baseline prompt dataset and a human "labeler" demonstrates the desired behavior of the model. The dataset is then used to fine-tune GPT-3 [16] with supervised learning. The second step is the training of the reward model. Like the policy model, the reward model is also a supervised learning model, but it is trained on comparison data. To generate the training dataset of the reward model, a prompt and a set of model outputs are sampled for each data point, and a human "labeler" assigns rankings to the outputs. Finally, the third step is GPT-3 fine-tuning, with the reward model embedded in an RL training framework. Experimental evaluation on public NLP datasets showed that InstructGPT demonstrates improved performance regarding truthfulness and harmlessness compared to its baseline model, with only minimal performance degradation, while also showing generalization capabilities to instructions outside the distribution present in the fine-tuning dataset.
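The three-step pipeline described above can be summarized in the following schematic sketch. It is a hedged illustration of the generic RLHF recipe rather than the actual InstructGPT implementation: the helper callables (supervised_finetune, train_reward_model, ppo_step, sample) are hypothetical placeholders, and the real system involves many additional details such as KL penalties toward the supervised policy and large-scale infrastructure.

```python
# Hedged sketch of the generic three-step RLHF recipe (not the InstructGPT code).
# All helper callables are hypothetical placeholders passed in by the caller.

def rlhf_pipeline(pretrained_lm, demo_data, comparison_data, prompts, n_rl_steps,
                  supervised_finetune, train_reward_model, ppo_step, sample):
    # Step 1: supervised fine-tuning (SFT) on human demonstrations (prompt, ideal response).
    sft_policy = supervised_finetune(pretrained_lm, demo_data)

    # Step 2: reward model trained on human rankings of alternative model outputs.
    # comparison_data: (prompt, preferred_response, rejected_response) tuples.
    reward_model = train_reward_model(pretrained_lm, comparison_data)

    # Step 3: RL fine-tuning of the SFT policy against the learned reward model (e.g., PPO).
    policy = sft_policy
    for _ in range(n_rl_steps):
        batch = sample(prompts)
        responses = [policy.generate(p) for p in batch]
        rewards = [reward_model.score(p, r) for p, r in zip(batch, responses)]
        policy = ppo_step(policy, batch, responses, rewards)  # policy-gradient update
    return policy
```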
Bai et al. [8] also utilized preference modeling and RLHF to train helpful, honest, and harmless AI assistants. Like [90], they trained an initial policy by fine-tuning a pretrained LLM. First, an HHH (Helpful, Honest, and Harmless) context-distilled language model was used to build a base dataset. This dataset was used to train a preference model to generate, in turn, a new dataset, using rejection sampling. Finally, the initial policy and the preference model were combined in an RLHF framework for fine-tuning the AI agent. Extensive data collection was performed by crowd workers, who were interacting with the models in open-ended conversations. The human feedback data, along with the preference models and the resulting RL policies, were updated on a weekly basis in an online manner to improve the quality of both the datasets and the models themselves. As a result, the authors achieved the desired alignment between the language model and human preferences in almost all NLP evaluations, with friendly and deployable AI assistants that also presented improved AI capabilities in NLP tasks, such as text summarization, even extending to specialized tasks like Python code generation.

Hu et al. [57] propose an offline RLHF framework to align LLMs with human intent. Rather than the typical PPO architecture applied in RLHF settings, they fine-tune the LLM on pre-generated samples in an offline manner, within a framework consisting of four steps: First, the pre-trained language model is fine-tuned on human-labeled instruction data using a supervised learning method, resulting in a new model called SFT. Second, they train a human preference model (RM) to predict rewards, using binary loss or ranking loss functions. Third, they build a combined dataset consisting of both human-labeled data (used in training the SFT model in the first step) as well as model-generated data (generated by the SFT model using prompts from the user, the SFT dataset, and the RM dataset). Finally, the SFT model is fine-tuned on this combined dataset using offline RL. The authors implemented three offline RLHF algorithms, namely Maximum Likelihood Estimation (MLE) with Filtering [112], Reward-Weighted Regression [97], and Decision Transformer [24] (5.3.3), with specific data pre-processing methods and training loss functions for every algorithm choice. The performance of the models was evaluated both by humans and by GPT-4 [87], with the Decision Transformer architecture outperforming both MLE with Filtering and Reward-Weighted Regression in terms of evaluation score. The Decision Transformer-based offline method was also shown to obtain comparable results to PPO, while also achieving faster convergence.
4.1.2 Without human feedback

This class of methods is primarily focused on the development of responsible AI systems. Interestingly, the presence of a human in the loop is not required for ensuring helpfulness and harmlessness of robotic assistants. As a result, this subclass includes studies where human feedback is either completely omitted or provided by a capable AI system.

In a variation of [8], Bai et al. [9] proposed "Constitutional AI", a framework to train AI assistants capable of handling objectionable queries without being evasive, by using AI feedback. The AI model is trained through self-improvement, while human feedback is restricted to providing a list of rules and principles. Constitutional AI consists of two phases, namely a supervised learning phase and a reinforcement learning phase. In the supervised learning phase, an initial helpful-only LLM assistant generates responses to red-teaming prompts that are designed to typically elicit harmful and toxic responses. This phase is the AI analogue of a human demonstrating the desired behavior in [90] and [57], or the use of the distilled model in [8]. The model is asked to evaluate the response it provided based on a constitutional principle and then revise its response based on this critique. Responses are repeatedly revised at each step of the process by randomly drawing principles from the constitution. For the RL stage, a preference model is trained to act as the reward function, using a training dataset generated by the trained model from the supervised learning stage: to generate a data point, the assistant is prompted again with a harmful prompt and is asked to select the best response from a pair of responses, based on the constitutional principles. This process produces an AI-generated preference dataset for harmlessness, which is combined with a human feedback-generated dataset for helpfulness. The preference model is then trained on this combined dataset and is used to fine-tune the supervised model from the first stage in an RL framework that follows the general principles of RLHF - with the difference that the feedback is provided by the AI, hence the term "RLAIF". The resulting LLM responds to harmful queries by explaining its objections to them. This study is an example where alignment to human goals can be achieved with minimal human supervision.
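A minimal sketch of the supervised critique-and-revise phase described above is given below. It is purely illustrative: the `generate` callable standing in for the assistant LLM, the prompt wording, and the number of revision rounds are assumptions made for this survey, not details taken from Bai et al. [9].

```python
import random

def constitutional_revision(generate, red_team_prompt, constitution, n_rounds=2):
    """Critique-and-revise loop: the model repeatedly critiques its own answer
    against a randomly drawn constitutional principle and rewrites it.
    `generate(text) -> str` is a placeholder for the assistant LLM call."""
    response = generate(red_team_prompt)
    for _ in range(n_rounds):
        principle = random.choice(constitution)
        critique = generate(
            f"Prompt: {red_team_prompt}\nResponse: {response}\n"
            f"Critique the response according to this principle: {principle}"
        )
        response = generate(
            f"Prompt: {red_team_prompt}\nResponse: {response}\n"
            f"Critique: {critique}\nRewrite the response to address the critique."
        )
    # The resulting (prompt, revised response) pairs form the dataset used to
    # fine-tune the supervised model before the RLAIF stage.
    return response
```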
More recently, Ramamurthy et al. [105] examined whether RL is the best choice for aligning pre-trained LLMs to human preferences, compared to supervised learning techniques, given the challenges of training instability that RL algorithms might suffer from, as well as the lack of open-source libraries and benchmarks that are suitable for LLM fine-tuning. To address those issues, the authors released RL4LM, an open-source library built on HuggingFace, which enables generative models to be trained with a variety of on-policy RL methods, such as PPO, TRPO, and A2C. The library provides a variety of reward functions and evaluation metrics. In addition, the authors composed the GRUE (General Reinforced-language Understanding Evaluation) benchmark, a set of seven language generation tasks which are supervised by reward functions that quantify metrics of human preference. Finally, they introduce NLPO (Natural Language Policy Optimization), an on-policy RL algorithm (also available in RL4LM) that dynamically learns task-specific constraints over the distribution of language to effectively reduce the combinatorial action space in language generation. The authors provided detailed results on a case-by-case basis to determine when RL is preferred over supervised learning, as well as when NLPO is preferred to PPO.
However, NLPO demonstrated overall greater stability and performance than other policy gradient methods, such as PPO, while RL techniques were shown to generally outperform their supervised learning counterparts in terms of aligning LMs to human preferences.

Ghalandari et al. [49] used Reinforcement Learning to fine-tune an LLM for sentence compression while addressing the issue of inefficiency at inference time. In the specific task of this study, the goal is to summarize a sentence by extracting a subset of its tokens in their original order. Given a tokenized input sentence x, the output is a binary vector indicating whether the corresponding input token is included in the compressed sentence or not. To evaluate the output of the policy, a reward is calculated as the average of three metrics: fluency (for grammatically correct and well-written sentences), similarity to source (to preserve the meaning of the original sentence, measured using bi-encoder similarity, cross-encoder similarity, and cross-encoder NLI), and output length or compression ratio (imposing soft length control using Gaussian reward functions). The policy was initialized using DistilRoBERTa [109], a six-layer transformer encoder model with a linear classification head, and the RL agent was trained through a policy gradient method. The model was shown to outperform unsupervised models (with no labelled examples) while also enabling fast inference with one-step sequence labeling at test time and allowing for configurable rewards to adapt to specific use cases.
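The composite reward described above can be written down compactly. The snippet below is an illustrative sketch under stated assumptions: the fluency and similarity scores are taken as precomputed values in [0, 1] (in the actual study they come from trained scoring models), and the Gaussian soft length control with a target compression ratio and width sigma is one plausible instantiation rather than the authors' exact formula.

```python
import math

def compression_reward(fluency, similarity, kept_tokens, total_tokens,
                       target_ratio=0.5, sigma=0.15):
    """Average of three components: fluency, similarity to source, and a
    Gaussian soft score on the compression ratio (all assumed to lie in [0, 1])."""
    ratio = kept_tokens / total_tokens
    length_score = math.exp(-((ratio - target_ratio) ** 2) / (2 * sigma ** 2))
    return (fluency + similarity + length_score) / 3.0

# Example: a candidate compression keeping 6 of 14 tokens,
# with placeholder fluency/similarity scores.
print(round(compression_reward(fluency=0.85, similarity=0.9,
                               kept_tokens=6, total_tokens=14), 3))
```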
4.2 RL4LLM-Prompt

Constructing a suitable prompt is the first step towards ensuring that an LLM will generate the desired output in terms of relevance, format, and ethical considerations. Indeed, careful prompt engineering is often sufficient for aligning the output of an LLM to human preferences without fine-tuning the weights of the LLM itself, a computationally intensive process which usually requires extensive data collection. Prompting concatenates the inputs with an additional piece of text that directs the LLM to produce the desired outputs. Most studies focus on tuning soft prompts (e.g., embeddings), which are difficult to interpret and non-transferable across different LLMs [36]. On the other hand, discrete prompts, which consist of concrete tokens from the vocabulary, are hard to optimize efficiently, due to their discrete nature and the difficulty of efficient space exploration. To address this limitation, this class of studies utilizes RL for discrete prompt optimization, with the goal to enhance the performance of the LLM on diverse tasks, often requiring just a few training instances.

Two recent studies in this class are TEMPERA and RLPROMPT, both of which use RoBERTa-large as the background LLM. Proposed by Zhang et al. [145], TEMPERA (TEst-tiMe Prompt Editing using Reinforcement leArning) is a framework for automating the design of optimal prompts at test time. Prompt optimization is formulated as an RL problem with the goal to incorporate human knowledge and thus design prompts that are interpretable and can be adapted to different queries. The RL agent performs different editing techniques at test time to construct query-dependent prompts efficiently. The action space allows the RL agent to edit instructions, in-context examples, and verbalizers as needed, while the score differences between successive prompts before and after editing are used as the reward. Unlike previous methods, TEMPERA makes use of prior human knowledge and provides interpretability; also, compared to approaches like prompt tweaking, AutoPrompt, and RLPrompt, it significantly improves performance on tasks like sentiment analysis, subject classification, natural language inference, etc.

On the other hand, RLPROMPT by Deng et al. [36] is an optimization approach where a policy network is trained to generate desired prompts. The experimental results show that the policy is transferable across different LMs, which allows learning cheaply from smaller models and inferring for larger, more powerful models. The authors also noted that optimized prompts were often grammatical "gibberish", which indicates that high-quality LM prompting does not necessarily follow human language patterns. However, RLPROMPT treats the LLM as a black-box model with only access to the generated output, whereas TEMPERA assumes it to have access to the embedding vectors. The policy models are similar, with a GPT encoder, but the action space is very different, since TEMPERA uses only discrete actions, whereas RLPROMPT treats the entire vocabulary as possible actions. Finally, the performance of TEMPERA was benchmarked on text classification tasks, whereas RLPROMPT was applied to text generation.

More recently, Sun [115] proposed Prompt-OIRL, a framework that uses offline reinforcement learning to achieve cost-efficient and context-aware prompt design. The framework utilizes readily available offline datasets generated through expert evaluation of previously crafted prompts. First, the authors apply inverse RL to learn a proxy reward model that can perform query-dependent offline prompt evaluations. Then, they use this model as an offline evaluator to perform query-dependent prompt optimization. Contrary to [36], who perform task-agnostic prompt optimization, Prompt-OIRL performs query-dependent prompt evaluation, similarly to [145]. The dependence of prompt evaluation on the query allows for context awareness, which helps the prompt evaluator predict what prompting techniques (e.g., Chain of Thought) are most likely to obtain the correct answer given a specific prompt (e.g., an arithmetic question). The design of the proxy reward function allows for offline query-dependent evaluation, thus achieving both context awareness and lower costs, and the framework was evaluated across four LLMs and three arithmetic datasets. A side-by-side comparison of the three methods is shown in Table 2.

Another study where Reinforcement Learning with AI feedback is used to ensure harmlessness of AI assistants – this time by using RL for prompt design – is the study of Perez et al. [96], who used a language model to generate test questions ("red teaming") that aim to elicit harmful responses from the target LM. Then, a classifier that is trained to detect offensive content is used to evaluate the target LM's replies to those questions. Contrary to studies in previous sections, this study uses RL to train the red-teaming LLM, instead of fine-tuning or prompting the target LLM. More precisely, starting from a pretrained LM for red teaming, the authors perform a first pass of fine-tuning using supervised learning. Then, they use RL to train the red-teaming LLM in a synchronous advantage actor-critic (A2C) framework with the objective of maximizing the expected harmfulness.
The reward function is a linear combination of the A2C loss and the KL divergence penalty between the target policy and the distribution of the initialization over the next tokens. The authors performed red teaming for a variety of harmful behaviors, including offensive language, data leakage, personal contact information generation, and distributional bias of the output text. LM-based red teaming was shown to be a promising tool for the timely identification of potentially harmful behavior, with RL-based red teaming being the most effective at eliciting offensive replies compared to the other methods, which included zero-shot and stochastic few-shot generation, and supervised learning.
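For readers unfamiliar with this construction, a common way to write such a KL-regularized red-teaming objective is sketched below; this is a generic formulation offered for intuition, not the exact loss reported by Perez et al. [96], and the harmfulness score $r(x, y)$ and coefficient $\beta$ are placeholders:

$$\max_{\theta}\;\mathbb{E}_{x \sim \pi_\theta,\; y \sim p_{\text{target}}(\cdot \mid x)}\big[r(x, y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\text{init}}\big),$$

where $\pi_\theta$ is the red-teaming policy being trained, $\pi_{\text{init}}$ is its initialization, $r(x, y)$ is the offensiveness classifier's score of the target LM's reply $y$ to the generated question $x$, and $\beta$ controls how far the policy may drift from the initial language model.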
Table 2: RL4LLM-Prompt Studies
• LM Model — TEMPERA [145]: RoBERTa-large; RLPROMPT [36]: RoBERTa-large; Prompt-OIRL [115]: GPT-3.5, TigerBot-13B-chat, Llama2-7B-chat
• Assumptions on LLM — TEMPERA: not a black-box model, access to the hidden states; RLPROMPT: black-box model with no access to gradients; Prompt-OIRL: black-box model with no access to gradients
• Policy Model — TEMPERA: GPT encoder with attention over all possible actions; RLPROMPT: frozen distilGPT-2 (82M) with one MLP layer on top (tunable); Prompt-OIRL: choose one of K prompts (CoT, ToT, etc.)
• Action Space — TEMPERA: discrete actions, e.g., swap, add or delete tokens; RLPROMPT: tokens from the policy model's vocabulary; Prompt-OIRL: apply a prompt to an input query
• RL Algorithm — TEMPERA: PPO; RLPROMPT: Soft Q-learning; Prompt-OIRL: offline inverse RL
• Applications — TEMPERA: only text classification; RLPROMPT: text classification and text generation; Prompt-OIRL: arithmetic reasoning

Table 3: RL4LLM Natural Language Processing Applications
• Ouyang et al. [90]: Generation, Open and Closed Question-Answering, Brainstorming, Chat, Rewrite, Summarization, Classification, Extraction
• Bai et al. [8]: Dialogue (AI Assistant)
• Hu et al. [57]: Question-Answering
• Bai et al. [9]: AI assistant
• Ramamurthy et al. [105]: GRUE Benchmark tasks: Text Continuation, Generative Commonsense, Summarization, Data to Text, Machine Translation, Question-Answering, Chitchat Dialogue
• Ghalandari et al. [49]: Sentence Compression (sentence summarization, text simplification, headline generation)
• Zhang et al. [145]: Sentiment Analysis, Topic Classification, Natural Language Inference, Reading Comprehension
• Deng et al. [36]: Few-Shot Text Classification, Unsupervised Text Style Transfer
• Sun [115]: Arithmetic reasoning (MultiArith [108], GSM8K [33], SVAMP [93])
• Perez et al. [96]: Question-Answering

5 LLM4RL: Enhancing Reinforcement Learning Agents through Large Language Models

The LLM4RL class covers studies where an LLM acts as a component of an RL training pipeline. Contrary to RL4LLM studies (section 4), where the end goal is an NLP task, the RL agent in this category is trained for tasks which are generally not related to natural language. Overall, the motivation behind studies in the LLM4RL category is twofold, as shown on Table 4.
1. Improved performance of the RL agent: In LLM4RL frameworks, improving the performance of the agent requires alignment with human intent or feedback [56, 113, 75, 67, 138], grounding of the agent to its environment [138, 22], or learning complex tasks [34, 75].
2. Efficient training of the RL agent: Training an RL agent can be computationally intensive, requiring not only significant computational resources, but also large amounts of data. Even with those prerequisites available, RL training might still suffer due to inefficient sampling, especially for complex, long-term tasks. Therefore, LLM4RL frameworks also focus on improving training efficiency – and therefore ensuring satisfying execution of the target tasks at test time – by facilitating exploration [101, 41], policy transfer of trained models [106], and effective planning for reduced data requirements [34].

The LLM replaces or assists, in different ways, one of the fundamental components of the reinforcement learning agent – namely, the reward function, the training goal, or the policy function. Using the corresponding component in each case as a criterion, we further break down the LLM4RL class into three sub-categories, where the LLM is used for a) determining the reward function (LLM4RL-Reward), b) expressing internal goals (LLM4RL-Goal), and c) pretraining, representing, or updating the policy function (LLM4RL-Policy).

5.1 LLM4RL-Reward

As noted by [118], "the use of a reward signal to formalize the idea of a goal is one of the most distinctive features of reinforcement learning". The reward signal received through the interaction with the environment is critical for training an agent to achieve the desired behavior. Until recently, the bulk of RL research treated the reward function as given and focused on the training algorithms themselves [42]. Designing the reward function of an RL training framework is straightforward when direct knowledge of the problem is available, as when solving a well-defined problem or earning a high score in a well-defined game. Common examples in this category are Atari games, which are frequently utilized as sandbox environments for testing various aspects of RL training, or games where the agent receives a positive reward if they win, and a negative reward otherwise. However, there exists a significant number of applications where it is difficult to directly translate the desired behavior into reward signals, especially for long and complex tasks, or when the agent can discover unexpected, and potentially dangerous, ways to generate reward from the environment. When the agent must learn to perform a long and possibly complex task, where designing a proper reward function is not straightforward, and where human feedback is critical, it is common to rely on expert demonstrations or interactive modification of the reward signal.
Expert demonstrations generally utilize a technique known as Inverse Reinforcement Learning to infer the reward function by observing the desired behavior [2], with a reward designer observing the agent's performance and tweaking the reward signal during a trial-and-error process to adjust the agent's behavior accordingly. Motivated by the direct involvement of humans in this interactive loop, combined with the ability of LLMs to learn in-context from few or even zero examples [16], researchers are exploring ways to bridge the gap between human preference and agent behavior in cases where the explicit quantification of rewards is difficult or time-consuming. In this context, the LLM is used either for reward shaping, i.e., guiding the learning agent with additional rewards to preserve policy optimality, or as a proxy reward function. Prior to the LLM era, [50] used language for reward shaping: they trained a model to predict if the actions in a trajectory match some specific language description and used the output to generate intermediate RL rewards. Similarly, [21] extended the underlying Markov Decision Process by including a natural language instruction; the authors first generated instructions and obtained the corresponding word embeddings using BERT, and then trained an alignment model that maps action trajectories to their corresponding instructions. They found that augmenting the default reward of an Atari environment with the language-based reward significantly improves the performance of the agent.

The first study utilizing LLMs for RL agent reward design is the one by Kwon et al. [67], who evaluated whether LLMs can produce reward signals that are consistent with the user's intended behavior, using GPT-3 [17] as a proxy reward function in an RL framework. At the beginning of training, the user specifies the desired behavior using a prompt with an explanation and an example of the desired behavior, while during training, the LLM evaluates the agent's behavior against the behavior described in the prompt and generates a reward signal accordingly, which the RL agent uses to update its behavior. In more detail, the proposed framework is as follows: the LLM is provided with a task description, a user's description of the objective, a string-formatted episode outcome, and a question asking if the outcome satisfies the objective. The LLM's response to the question is used as the reward signal for the agent. Based on this reward signal, the agent updates its weights and generates a new episode, the outcome of which is parsed back into a string, and the episode continues. To evaluate their framework, the authors compared it to three baseline cases: a) a few-shot baseline, where a supervised learning model is trained to predict reward signals using the same examples given to the LLM, b) a zero-shot baseline, where the LLM is prompted without the user's description of the objective, and c) a baseline where the agents are trained with ground-truth reward functions. Experimental results showed that the proposed RL training framework - which is agnostic to the RL algorithm used - can achieve user objective-aligned behavior, as measured both with automated metrics and with human users. In addition, the agents are shown to outperform agents trained with reward functions learned via supervised learning, even when no examples were provided - as long as the objective is well-defined - or when the tasks were complicated.
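The proxy-reward idea above can be illustrated with a short sketch. It is a hedged approximation of the described loop, not the authors' code: `query_llm` is a placeholder for a call to a language model, and the exact prompt template and the binary parsing of the answer are assumptions made for illustration.

```python
def proxy_reward(query_llm, task_description, objective, episode_outcome):
    """Use an LLM as a proxy reward function: ask whether the string-formatted
    episode outcome satisfies the user's objective and map the answer to {0, 1}.
    `query_llm(prompt) -> str` is a placeholder for the actual LLM call."""
    prompt = (
        f"Task: {task_description}\n"
        f"Objective: {objective}\n"
        f"Episode outcome: {episode_outcome}\n"
        "Does this outcome satisfy the objective? Answer yes or no."
    )
    answer = query_llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0

# The returned value is used as the episode reward, regardless of which
# RL algorithm consumes it, which is why the framework is algorithm-agnostic.
```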
Grounding a robotic agent to the environment and achieving the desirable behavior as directed through human feedback is the focus of the TEXT2REWARD framework by Xie et al. [138]. TEXT2REWARD allows for the generation and continuous improvement of Python code that expresses dense reward functions for robotic agents performing manipulation tasks. The framework is composed of three stages: Abstraction, Instruction, and Feedback. In the Abstraction stage, an expert provides an abstract representation of the robot's environment using Python classes. In the Instruction stage, a user provides
a natural language description of the goal to be achieved by the robot (e.g., "push the chair to the marked position"). Finally, in the Feedback phase, the user summarizes their preferences or the failure mode of the robot's action. This summary is then used to update the reward function and retrain the RL agent. The authors evaluated their framework on two robotic manipulation benchmarks - MANISKILL2 [52] and METAWORLD [141] - and two MUJOCO locomotion environments [15]. For manipulation tasks, the experiments demonstrated comparable results to human oracle-designed rewards in terms of performance and convergence speed, with the performance improvement verified through few-shot examples. For locomotion tasks, the agent was able to successfully learn six new behaviors (move forward, front flip, back flip, etc.) with a rate of success ranging from 94% to 100% for each task. Finally, error analysis revealed that the generated code was correct 90% of the time, with the most common errors originating from wrong use of class attributes (wrong use or hallucination of non-existent attributes), syntax errors or shape mismatches, or wrong imports. The success of TEXT2REWARD was largely owed to the use of human feedback to resolve ambiguity through providing clear, language-based instructions to correct the behavior of the robot. Aside from grounding and alignment to human preferences, TEXT2REWARD has the advantage of generating highly interpretable functions, without requiring any data for reward training.
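To make the idea of LLM-generated dense reward code concrete, the function below shows what such code might look like for the "push the chair to the marked position" instruction. It is a hypothetical illustration written for this survey, under the assumption of a simple environment abstraction exposing object positions; it is not output produced by TEXT2REWARD itself.

```python
import numpy as np

# Hypothetical environment abstraction: `env.chair.position` and
# `env.target_position` are assumed to be 3D numpy arrays.
def dense_reward(env, success_threshold=0.05):
    """Dense reward for "push the chair to the marked position":
    negative chair-to-target distance, plus a bonus once the goal is reached."""
    distance = float(np.linalg.norm(env.chair.position - env.target_position))
    reward = -distance                      # shaped term: closer is better
    if distance < success_threshold:
        reward += 10.0                      # sparse bonus when the goal is reached
    return reward
```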
The EUREKA framework by Ma et al. [75] is another example of reward design through direct Python code generation. EUREKA consists of three fundamental components: the use of the environment as context, evolutionary search, and reward reflection. The environment code, excluding the part of it that defines the reward, is directly provided to the LLM, which in turn extracts its semantics to compose a suitable reward function for the target task. The LLM outputs the reward code, following some general formatting instructions. Evolutionary computation is used to overcome sub-optimal or non-executable code by sequentially generating improved reward functions using the concepts of reward mutation, which modifies previous solutions based on textual feedback, and random restarts, to escape local optima. Finally, reward reflection acts as a supplement to the numeric value of the reward expressed through the fitness function by explaining why a candidate reward function works or does not work and assigning credit accordingly. The framework follows a PPO architecture and is tested on a variety of benchmarking environments (e.g., Cartpole and BallBalance), where it achieves a higher success rate compared to human-specified rewards. The use of evolutionary computation is shown to be necessary for a continuous increase in the success rate of the framework over time, while also allowing for the generation of more diverse and often counter-intuitive rewards that outperform human-designed rewards, particularly for difficult tasks. The authors also implemented a curriculum learning [13] approach to teach a Shadow Hand to rotate a pen according to a set of pre-defined spinning patterns, thus demonstrating the capability of the framework to execute complex, low-level tasks. By successfully handling task complexity and allowing for the discovery of unexpected high-performing policies, EUREKA successfully deals with two key reasons that inhibit the translation of the desired agent behavior into rewards that were identified in subsection 5.1. Finally, similarly to the LLM4RL-Reward studies discussed in 5.1, a key benefit of EUREKA is the alignment of rewards to human preferences by incorporating human knowledge about the state through appropriate initialization of the re