arxiv_id | abstract |
---|---|
2212.10544 | Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures.
The model learns static layers that do not consider pair-wise interactions.
Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar average accuracy, the approach has different inductive biases than BERT in terms of interactions and syntactic representations. |
2305.06500 | Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence.
However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input.
Although vision-language pretraining has been widely studied,
vision-language instruction tuning remains under-explored.
In this paper,
we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models.
We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format.
Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction.
Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models.
Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts).
Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
All InstructBLIP models are open-sourced. |
2305.05176 | There is a rapidly growing number of large language models (LLMs) that users can query for a fee. We review the cost associated with querying popular LLM APIs—e.g. GPT-4, ChatGPT, J1-Jumbo—and find that these models have heterogeneous pricing structures, with fees that can differ by two orders of magnitude. In particular, using LLMs on large collections of queries and text can be expensive. Motivated by this, we outline and discuss three types of strategies that users can exploit to reduce the inference cost associated with using LLMs: 1) prompt adaptation, 2) LLM approximation, and 3) LLM cascade. As an example, we propose FrugalGPT, a simple yet flexible instantiation of LLM cascade which learns which combinations of LLMs to use for different queries in order to reduce cost and improve accuracy. Our experiments show that FrugalGPT can match the performance of the best individual LLM (e.g. GPT-4) with up to 98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost. The ideas and findings presented here lay a foundation for using LLMs sustainably and efficiently. |
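A minimal sketch of the LLM-cascade idea described in the FrugalGPT abstract above: query models from cheapest to most expensive and stop once a learned scorer judges the answer reliable. The `Model` class, `score_answer` scorer, prices, and threshold are illustrative placeholders, not the paper's actual components.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_call: float                 # hypothetical price per query, in dollars
    generate: Callable[[str], str]       # stand-in for an API call

def llm_cascade(query: str, models: list[Model],
                score_answer: Callable[[str, str], float],
                threshold: float = 0.8) -> tuple[str, float]:
    """Try models from cheapest to most expensive; accept the first answer
    whose reliability score clears the threshold."""
    spent, answer = 0.0, ""
    for model in sorted(models, key=lambda m: m.cost_per_call):
        answer = model.generate(query)
        spent += model.cost_per_call
        if score_answer(query, answer) >= threshold:
            break                        # good enough: skip the pricier models
    return answer, spent

# Toy usage with stub models and a stub scorer.
cheap = Model("small-lm", 0.0002, lambda q: "Paris")
big = Model("large-lm", 0.03, lambda q: "Paris, the capital of France.")
naive_scorer = lambda q, a: 0.9 if len(a) > 3 else 0.1
print(llm_cascade("What is the capital of France?", [cheap, big], naive_scorer))
```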
2304.12244 | Training large language models (LLMs) with open-domain instruction following data brings colossal success. However, manually creating such instruction data is very time-consuming and labor-intensive. Moreover, humans may struggle to produce high-complexity instructions. In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLMs instead of humans. Starting with an initial set of instructions, we use our proposed Evol-Instruct to rewrite them step by step into more complex instructions. Then, we mix all generated instruction data to fine-tune LLaMA. We call the resulting model WizardLM. Human evaluations on a complexity-balanced test bed and Vicuna's test set show that instructions from Evol-Instruct are superior to human-created ones. By analyzing the human evaluation results of the high complexity part, we demonstrate that outputs from our WizardLM model are preferred to outputs from OpenAI ChatGPT. In GPT-4 automatic evaluation, WizardLM achieves more than 90% capacity of ChatGPT on 17 out of 29 skills. Even though WizardLM still lags behind ChatGPT in some aspects, our findings suggest that fine-tuning with AI-evolved instructions is a promising direction for enhancing LLMs. Our code and data are public at https://github.com/nlpxucan/WizardLM. |
2305.06300 | The ever-increasing size of language models curtails their widespread availability to the community, thereby galvanizing many companies into offering access to large language models through APIs.
One particular type, suitable for dense retrieval, is a semantic embedding service that builds vector representations of input text.
With a growing number of publicly available APIs, our goal in this paper is to analyze existing offerings in realistic retrieval scenarios, to assist practitioners and researchers in finding suitable services according to their needs.
Specifically, we investigate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval.
For this purpose, we evaluate these services on two standard benchmarks, BEIR and MIRACL.
We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English, in contrast to the standard practice of employing them as first-stage retrievers.
For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost.
We hope our work lays the groundwork for evaluating semantic embedding APIs that are critical in search and more broadly, for information access. |
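A small sketch of the budget-friendly re-ranking setup described above: take first-stage BM25 hits and re-order them by embedding similarity to the query. The `embed` function stands in for a call to a commercial embedding API; here it returns deterministic toy vectors so the snippet runs offline.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in for a semantic embedding API; replace with a real client call."""
    rngs = [np.random.default_rng(abs(hash(t)) % (2**32)) for t in texts]
    return np.stack([r.standard_normal(64) for r in rngs])

def rerank(query: str, bm25_hits: list[str], top_k: int = 3) -> list[str]:
    """Re-rank first-stage BM25 hits by cosine similarity to the query embedding."""
    vecs = embed([query] + bm25_hits)
    q, docs = vecs[0], vecs[1:]
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [bm25_hits[i] for i in np.argsort(-sims)[:top_k]]

print(rerank("effects of caffeine on sleep",
             ["Caffeine delays sleep onset.", "Recipe for iced coffee.", "Sleep stages explained."]))
```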
2305.03047 | Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of large language models (LLMs) with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision and the related issues on quality, reliability, diversity, self-consistency, and undesirable biases. To address these challenges, we propose a novel approach called Self-Align, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of the AI agents with minimal human supervision. Applying Self-Align to the LLaMA-65b base language model, we develop an AI assistant named Dromedary. With fewer than 300 lines of human annotations (including seed prompts, 16 generic principles, and 5 exemplars for in-context learning), Dromedary significantly surpasses the performance of several state-of-the-art AI systems, including Text-Davinci-003 and Alpaca, on benchmark datasets with various settings.
We have open-sourced the code, LoRA weights of Dromedary, and our synthetic training data to encourage further research into aligning LLM-based AI agents with enhanced supervision efficiency, reduced biases, and improved controllability. |
2305.03514 | Large Language Models (LLMs) are capable of successfully performing many language processing tasks zero-shot (without training data). If zero-shot LLMs can also reliably classify and explain social phenomena like persuasiveness and political ideology, then LLMs could augment the Computational Social Science (CSS) pipeline in important ways. This work provides a road map for using LLMs as CSS tools. Towards this end, we contribute a set of prompting best practices and an extensive evaluation pipeline to measure the zero-shot performance of 13 language models on 25 representative English CSS benchmarks. On taxonomic labeling tasks (classification), LLMs fail to outperform the best fine-tuned models but still achieve fair levels of agreement with humans. On free-form coding tasks (generation), LLMs produce explanations that often exceed the quality of crowdworkers' gold references. We conclude that the performance of today's LLMs can augment the CSS research pipeline in two ways: (1) serving as zero-shot data annotators on human annotation teams, and (2) bootstrapping challenging creative generation tasks (e.g., explaining the underlying attributes of a text). In summary, LLMs are poised to meaningfully participate in social science analysis in partnership with humans. |
1801.06146 | Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code at http://nlp.fast.ai/ulmfit. |
2107.14795 | A central goal of machine learning is the development of systems that can solve many problems in as many data domains as possible. Current architectures, however, cannot be applied beyond a small set of stereotyped settings, as they bake in domain & task assumptions or scale poorly to large inputs or outputs. In this work, we propose Perceiver IO, a general-purpose architecture that handles data from arbitrary settings while scaling linearly with the size of inputs and outputs. Our model augments the Perceiver with a flexible querying mechanism that enables outputs of various sizes and semantics, doing away with the need for task-specific architecture engineering.
The same architecture achieves strong results on tasks spanning natural language and visual understanding, multi-task and multi-modal reasoning, and StarCraft II. As highlights, Perceiver IO outperforms a Transformer-based BERT baseline on the GLUE language benchmark despite removing input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation with no explicit mechanisms for multiscale correspondence. |
2211.04236 | Can continuous diffusion models bring the same performance breakthrough on natural language they did for image generation?
To circumvent the discrete nature of text data, we can simply project tokens in a continuous space of embeddings, as is standard in language modeling.
We propose Self-conditioned Embedding Diffusion (SED), a continuous diffusion mechanism that operates on token embeddings and makes it possible to learn flexible and scalable diffusion models for both conditional and unconditional text generation.
Through qualitative and quantitative evaluation, we show that our text diffusion models generate samples comparable with those produced by standard autoregressive language models — while being in theory more efficient on accelerator hardware at inference time.
Our work paves the way for scaling up diffusion models for text, similarly to autoregressive models, and for improving performance with recent refinements to continuous diffusion. |
2302.14017 | Recent advances in state-of-the-art neural network architecture design have been moving toward Transformer models. These models achieve superior accuracy across a wide range of applications in computer vision, natural language processing, and speech recognition. This trend has been consistent over the past several years since Transformer models were originally introduced. However, the amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate, and this has made their deployment in latency-sensitive applications challenging. As such, there has been an increased focus on making Transformer models more efficient, with methods that range from changing the architecture design, all the way to developing dedicated domain-specific accelerators. In this work, we survey different approaches for efficient Transformer inference, including: (i) analysis and profiling of the bottlenecks in existing Transformer architectures and their similarities and differences with previous convolutional models; (ii) implications of Transformer architecture on hardware, including the impact of non-linear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, on hardware design; (iii) approaches for optimizing a fixed Transformer architecture; (iv) challenges in finding the right mapping and scheduling of operations for Transformer models; and (v) approaches for optimizing Transformer models by adapting the architecture using neural architecture search. Finally, we perform a case study by applying the surveyed optimizations on Gemmini, the open-source, full-stack deep neural network accelerator generator, and we show how each of these approaches can yield improvements, compared to previous benchmark results on Gemmini. Among other things, we find that a full-stack co-design approach with the aforementioned methods can result in up to 88.7× speedup with a minimal performance degradation for Transformer inference. |
2302.01318 | We present speculative sampling, an algorithm for accelerating transformer decoding by enabling the generation of multiple tokens from each transformer call. Our algorithm relies on the observation that the latency of parallel scoring of short continuations, generated by a faster but less powerful draft model, is comparable to that of sampling a single token from the larger target model. This is combined with a novel modified rejection sampling scheme which preserves the distribution of the target model within hardware numerics. We benchmark speculative sampling with Chinchilla, a 70 billion parameter language model, achieving a 2-2.5x decoding speedup in a distributed setup, without compromising the sample quality or making modifications to the model itself. |
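A toy numpy sketch of the modified rejection step that speculative sampling relies on: the draft model proposes tokens, each is accepted with probability min(1, p_target/p_draft), and the first rejection is resampled from the residual distribution so the output still follows the target model. The four-token vocabulary and hand-specified distributions are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_draft, p_target, k=4):
    """One speculative decoding step: propose up to k draft tokens, verify each
    against the target distribution, and resample from the residual on rejection."""
    prefix, out = [], []
    for _ in range(k):
        q = p_draft(prefix)
        x = rng.choice(len(q), p=q)               # draft proposal
        t = p_target(prefix)
        if rng.random() < min(1.0, t[x] / q[x]):  # accept with prob min(1, p_t / p_q)
            out.append(x)
            prefix = prefix + [x]
        else:
            residual = np.maximum(t - q, 0.0)     # resample from max(0, p_t - p_q)
            residual /= residual.sum()
            out.append(rng.choice(len(residual), p=residual))
            return out
    out.append(rng.choice(len(p_target(prefix)), p=p_target(prefix)))  # bonus token
    return out

# Toy draft/target models over a 4-token vocabulary (the prefix is ignored here).
draft = lambda prefix: np.array([0.4, 0.3, 0.2, 0.1])
target = lambda prefix: np.array([0.25, 0.25, 0.25, 0.25])
print(speculative_step(draft, target))
```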
1705.08045 | There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to perform well uniformly. |
2305.04091 | Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks.
To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting
includes a few manually crafted step-by-step reasoning demonstrations which
enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy.
To eliminate the manual effort, Zero-shot-CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs.
Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors.
To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting.
It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan.
To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three reasoning problems.
The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting. |
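A minimal sketch of how the zero-shot prompts described above might be constructed; the trigger sentences paraphrase the Zero-shot-CoT and PS/PS+ instructions rather than quoting the paper's exact wording.

```python
def zero_shot_cot_prompt(problem: str) -> str:
    return f"Q: {problem}\nA: Let's think step by step."

def plan_and_solve_prompt(problem: str, plus: bool = True) -> str:
    """Plan-and-Solve style prompt: first devise a plan, then execute it step by step.
    The PS+ variant adds more detailed instructions about variables and calculations."""
    trigger = ("Let's first understand the problem and devise a plan to solve it. "
               "Then, let's carry out the plan and solve the problem step by step.")
    if plus:
        trigger += (" Pay attention to extracting relevant variables and their "
                    "corresponding numerals, and calculate intermediate results carefully.")
    return f"Q: {problem}\nA: {trigger}"

print(plan_and_solve_prompt("A baker made 24 muffins and sold 3/4 of them. How many are left?"))
```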
2210.13966v3 | We survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to understand language—and the physical and social situations language encodes—in any humanlike sense. We describe arguments that have been made for and against such understanding, and key questions for the broader sciences of intelligence that have arisen in light of these arguments. We contend that an extended science of intelligence can be developed that will provide insight into distinct modes of understanding, their strengths and limitations, and the challenge of integrating diverse forms of cognition. |
2211.09800 | We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image.
To obtain training data for this problem, we combine the knowledge of two large pretrained models—a language model (GPT-3) and a text-to-image model (Stable Diffusion)—to generate a large dataset of image editing examples.
Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds.
We show compelling editing results for a diverse collection of input images and written instructions. |
2209.01714 | Over a five-year period, computing methods for generating high-fidelity, fictional depictions of people and events moved from exotic demonstrations by computer science research teams into ongoing use as a tool of disinformation. The methods, referred to with the portmanteau of “deepfakes,” have been used to create compelling audiovisual content. Here, I share challenges ahead with malevolent uses of two classes of deepfakes that we can expect to come into practice with costly implications for society: interactive and compositional deepfakes. Interactive deepfakes have the capability to impersonate people with realistic interactive behaviors, taking advantage of advances in multimodal interaction. Compositional deepfakes leverage synthetic content in larger disinformation plans that integrate sets of deepfakes over time with observed, expected, and engineered world events to create persuasive synthetic histories. Synthetic histories can be constructed manually but may one day be guided by adversarial generative explanation (AGE) techniques. In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction. I shall describe interactive and compositional deepfakes and reflect about cautions and potential mitigations to defend against them. |
2207.13825 | We aim to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. We then adjust the inputs to those mathematical models to match some possible advances in their underlying technology. We find that AI’s impact on phishing may be overestimated but could lead to more attacks going undetected. Advances in vulnerability discovery have the potential to help attackers more than defenders. And automation that writes exploits is more useful to attackers than automation that writes patches, although advances that help deploy patches faster have the potential to be more impactful than either. |
2108.12409 | Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question has yet to be answered: how does a model achieve extrapolation at inference time for sequences that are longer than it saw during training?
We first show that extrapolation can be enabled by simply changing the position representation method, though we find that current methods do not allow for efficient extrapolation.
We therefore introduce a simpler and more efficient position method, Attention with Linear Biases (ALiBi) . ALiBi does not add positional embeddings to word embeddings; instead, it biases query-key attention scores with a penalty that is proportional to their distance. We show that this method trains a 1.3 billion parameter model on input sequences of length 1024 that extrapolates to input sequences of length 2048, achieving the same perplexity as a sinusoidal position embedding model trained on inputs of length 2048 but training 11% faster and using 11% less memory.
ALiBi’s inductive bias towards recency also leads it to outperform multiple strong position methods on the WikiText-103 benchmark. Code & models: https://github.com/ofirpress/attention_with_linear_biases |
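A small numpy sketch of the ALiBi idea summarized above: no positional embeddings, just a head-specific linear penalty added to the query-key scores. The geometric slope schedule below matches the one the paper describes for 8 heads; treat the general-head formula as an assumption of this sketch.

```python
import numpy as np

def alibi_slopes(n_heads: int) -> np.ndarray:
    """Geometric slope schedule; for 8 heads this yields 1/2, 1/4, ..., 1/256."""
    start = 2 ** (-8.0 / n_heads)
    return np.array([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(seq_len: int, n_heads: int) -> np.ndarray:
    """Per-head additive bias: -slope * (query position - key position), causal part only."""
    distance = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]
    distance = np.maximum(distance, 0)
    return -alibi_slopes(n_heads)[:, None, None] * distance      # (heads, queries, keys)

# The bias is simply added to the attention scores before the softmax.
scores = np.random.default_rng(0).standard_normal((8, 6, 6))     # (heads, queries, keys)
scores = scores + alibi_bias(seq_len=6, n_heads=8)
print(scores.shape)
```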
1909.08593 | Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions.
Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics. |
2211.01562 | Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters.
To make this reasoning process more explicit, recent works retrieve a rationalizing LM’s internal knowledge by training or prompting it to generate free-text rationales, which can be used to guide task predictions made by either the same LM or a separate reasoning LM.
However, rationalizing LMs require expensive rationale annotation and/or computation,
without any assurance that their generated rationales improve LM task performance or faithfully reflect LM decision-making.
In this paper, we propose PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns to faithfully reason over rationales via counterfactual regularization.
First, PINTO maps out a suitable reasoning process for the task input by prompting a frozen rationalizing LM to generate a free-text rationale.
Second, PINTO's reasoning LM is fine-tuned to solve the task using the generated rationale as context, while regularized to output less confident predictions when the rationale is perturbed.
Across four datasets, we show that PINTO significantly improves the generalization ability of the reasoning LM, yielding higher performance on both in-distribution and out-of-distribution test sets.
Also, we find that PINTO's rationales are more faithful to its task predictions than those generated by competitive baselines. Code and data used in our experiments can be found at https://github.com/wangpf3/pinto-faithful-language-reasoning. |
2304.15004 | Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models.
What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales.
Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance.
We present our alternative explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities, (2) make, test and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen seemingly emergent abilities in multiple vision tasks across diverse deep networks.
Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models. |
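A toy numerical illustration of the metric-choice argument above: if per-token accuracy improves smoothly with scale, a nonlinear metric such as exact match over an L-token answer (roughly accuracy**L) stays near zero and then jumps, looking "emergent". The scaling curve below is invented purely for illustration.

```python
import numpy as np

scales = np.logspace(7, 11, 9)                                  # hypothetical parameter counts
per_token_acc = 1 - 0.9 * (scales / scales.min()) ** -0.3       # smooth, made-up scaling curve
L = 10                                                          # answer length in tokens
exact_match = per_token_acc ** L                                # nonlinear metric of the same ability

for n, p, em in zip(scales, per_token_acc, exact_match):
    print(f"params={n:12.0f}   per-token acc={p:.3f}   exact match={em:.4f}")
```

Per-token accuracy rises gradually across the whole range, while exact match stays near zero until the largest scales: the same underlying improvement, read through two different metrics.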
2304.14293 | Large language models can be prompted to produce fluent output for a wide range of tasks without being specifically trained to do so.
Nevertheless, it is notoriously difficult to control their generation in such a way that it satisfies user-specified constraints.
In this paper, we present InstructCTG, a simple controlled text generation framework that incorporates different constraints by verbalizing them as natural language instructions.
We annotate natural texts with the linguistic and extra-linguistic constraints they satisfy, using a combination of off-the-shelf NLP tools and simple heuristics.
Then, we verbalize the constraints into natural language instructions to form weakly supervised training data, i.e., we prepend the natural language verbalizations of the constraints in front of their corresponding natural language sentences.
Next, we fine-tune a pre-trained language model on the augmented corpus.
Compared to existing methods, InstructCTG is more flexible in terms of the types of constraints it allows the practitioner to use.
It also does not require any modification of the decoding procedure.
Finally, InstructCTG allows the model to adapt to new constraints without re-training through the use of in-context learning.
Our code is available at https://github.com/MichaelZhouwang/InstructCTG. |
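A toy sketch of the data-construction step described above: detect constraints a sentence already satisfies, verbalize them as an instruction, and prepend the instruction to form a weakly supervised training pair. The constraint detectors and instruction templates here are illustrative, not the ones used in InstructCTG.

```python
def detect_constraints(sentence: str) -> list[str]:
    """Toy 'off-the-shelf tools + heuristics': find constraints the sentence satisfies."""
    constraints = [f"The output must contain exactly {len(sentence.split())} words."]
    mentioned = [w for w in ("cat", "dog", "garden") if w in sentence.lower()]
    if mentioned:
        constraints.append("The output must mention: " + ", ".join(mentioned) + ".")
    return constraints

def make_training_example(sentence: str) -> dict:
    """Prepend the verbalized constraints to the target sentence."""
    instruction = " ".join(detect_constraints(sentence))
    return {"input": instruction, "target": sentence}

print(make_training_example("The cat slept in the garden all afternoon."))
```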
2210.07229 | Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by orders of magnitude. Our code and data are at memit.baulab.info. |
2304.14767 | Transformer-based language models (LMs) are known to capture factual knowledge in their parameters.
While previous work looked into where factual associations are stored, little is known about how they are retrieved internally during inference.
We investigate this question through the lens of information flow.
Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute.
With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions.
Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction.
First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation “queries” the enriched subject to extract the attribute.
Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters.
Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing. Our code is publicly available at https://github.com/google-research/google-research/tree/master/dissecting_factual_predictions |
2304.13712 | This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. Firstly, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data.
Most importantly, we provide a detailed discussion about the use and non-use cases of large language models for various natural language processing tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks.
We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios.
We also try to understand the importance of data and the specific challenges associated with each NLP task.
Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency,
to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated list of practical guide resources of LLMs, regularly updated, can be found at https://github.com/Mooler0410/LLMsPracticalGuide. |
2305.01625 | Since the proposal of transformers, these models have been limited to bounded input lengths, because of their need to attend to every token in the input.
In this work, we propose Unlimiformer: a general approach that wraps any existing pretrained encoder-decoder transformer, and offloads the cross-attention computation
to a single k-nearest-neighbor (kNN) index, while the returned kNN distances are the attention dot-product scores.
This kNN index can be kept on either the GPU or CPU memory and queried in sub-linear time;
this way, we can index practically unlimited input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key.
We
evaluate Unlimiformer on several long-document and book-summarization benchmarks, showing that it can
process even 500k token-long inputs from the BookSum dataset, without any input truncation at test time. We demonstrate that Unlimiformer improves pretrained models such as BART and Longformer by extending them to unlimited inputs without additional learned weights and without modifying their code.
Our code and models are publicly available, and support LLaMA-2 as well: https://github.com/abertsch72/unlimiformer |
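A toy numpy sketch of the retrieval idea above: keep all encoder key/value vectors in an index and, for each decoder query, attend only over the top-k keys by dot product rather than the full input. A real implementation would use an approximate nearest-neighbor index (e.g. faiss) and Unlimiformer's single shared index across heads and layers; those details are simplified away here.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_cross_attention(query: np.ndarray, keys: np.ndarray,
                         values: np.ndarray, k: int = 8) -> np.ndarray:
    """Attend over only the k highest-scoring keys instead of the whole sequence."""
    scores = keys @ query                        # (seq_len,)
    top = np.argpartition(-scores, k)[:k]        # indices of the k best keys
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()
    return weights @ values[top]                 # (d_model,)

seq_len, d = 100_000, 64                         # long input, toy dimensions
keys = rng.standard_normal((seq_len, d)).astype(np.float32)
values = rng.standard_normal((seq_len, d)).astype(np.float32)
query = rng.standard_normal(d).astype(np.float32)
print(topk_cross_attention(query, keys, values).shape)
```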
2305.02309 | Large language models (LLMs) have demonstrated remarkable abilities in representation learning for program synthesis and understanding tasks. The quality of the learned representations appears to be dictated by the neural scaling laws as a function of the number of model parameters and observations, while imposing upper bounds on the model performance by the amount of available data and compute, which is costly. In this study, we attempt to render the training of LLMs for program synthesis more efficient by unifying four key components: (1) model architectures, (2) learning methods, (3) infill sampling, and (4) data distributions. Specifically, for the model architecture, we attempt to unify encoder- and decoder-based models into a single prefix-LM. For learning methods, (i) causal language modeling, (ii) span corruption, and (iii) infilling are unified into a simple learning algorithm. For infill sampling, we explore the claim of a "free lunch" hypothesis. For data distributions, the effect of a mixture distribution and multi-epoch training of programming and natural languages on model performance is explored. We conduct a comprehensive series of empirical experiments on 1B LLMs, for which failures and successes of this exploration are distilled into five lessons. We will provide a final recipe for training and release CodeGen2 models in sizes 1B, 3.7B, 7B, and 16B parameters, along with the training framework as open-source: https://github.com/salesforce/CodeGen. |
2304.09823 | As a phenomenal large language model, ChatGPT has achieved unparalleled success in various real-world tasks and increasingly plays an important role in our daily lives and work. However, extensive concerns are also raised about the potential ethical issues, especially about whether ChatGPT-like artificial general intelligence (AGI) will replace human jobs. To this end, in this paper, we introduce a preliminary data-driven study on the future of the ChatGPT-enabled labor market from the view of Human-AI Symbiosis instead of Human-AI Confrontation. To be specific, we first conduct an in-depth analysis of large-scale job posting data in BOSS Zhipin, the largest online recruitment platform in China. The results indicate that about 28% of occupations in the current labor market require ChatGPT-related skills. Furthermore, based on a large-scale occupation-centered knowledge graph, we develop a semantic information enhanced collaborative filtering algorithm to predict the future occupation-skill relations in the labor market. As a result, we find that an additional 45% of occupations will require ChatGPT-related skills in the future. In particular, industries related to technology, products, and operations are expected to have higher proficiency requirements for ChatGPT-related skills, while the manufacturing, services, education, and health science related industries will have lower requirements for ChatGPT-related skills. |
2305.02301 | Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data than needed by finetuning or distillation. Our method extracts LLM rationales as additional supervision for training small models within a multi-task framework. We present three findings across NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M T5 model outperforms the few-shot prompted 540B PaLM model using only a fraction of the available data on a benchmark, whereas standard finetuning the same T5 model struggles to match it even when using the full dataset. Source code is available at: https://github.com/google-research/distilling-step-by-step. |
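A minimal sketch of the multi-task framing described above: each LLM-annotated example yields two training pairs for the small model, one predicting the label and one predicting the rationale, distinguished by task prefixes. The prefix strings and the loss-weighting comment are assumptions of this sketch, not the paper's exact templates.

```python
def build_multitask_examples(question: str, label: str, rationale: str) -> list[dict]:
    """One LLM-annotated example becomes two (input, target) training pairs."""
    return [
        {"input": f"[label] {question}", "target": label},          # main task
        {"input": f"[rationale] {question}", "target": rationale},  # auxiliary supervision
    ]

# During training the two losses would be combined, e.g. loss = loss_label + lam * loss_rationale;
# at inference time the small model is queried with the [label] prefix only.
examples = build_multitask_examples(
    "Jesse has 21 bananas and shares them equally among 3 friends. How many does each get?",
    "7",
    "21 bananas split among 3 friends is 21 / 3 = 7 bananas each.",
)
print(examples)
```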
2205.12674v3 | Recent work has improved language models (LMs) remarkably by equipping them with a non-parametric memory component.
However, most existing approaches only introduce memories at testing time or represent them using a separately trained encoder, resulting in suboptimal training of the language model.
In this work, we present TRIME, a novel yet simple training approach designed for training LMs with memory augmentation. Our approach uses a training objective that directly takes in-batch examples as accessible memory. We also present new methods for memory construction and data batching, which are used for adapting to different sets of memories—local, long-term, and external memory—at testing time.
We evaluate TRIME on multiple language modeling and machine translation benchmarks and show that it is able to achieve significant improvements across all the settings. Concretely, TRIME reduces the perplexity from 18.70 to 15.37 on WikiText-103, by effectively leveraging a large memory set from the training corpus. Compared to standard LM training, TRIME adds negligible computational overhead and is compatible with different neural architectures, making it a versatile solution for training memory-augmented LMs. Our code and pre-trained models are publicly available at https://github.com/princeton-nlp/TRIME. |
2206.05802 | We fine-tune large language models to write natural language critiques (natural language critical comments) using behavioral cloning. On a topic-based summarization task, critiques written by our models help humans find flaws in summaries that they would have otherwise missed. Our models help find naturally occurring flaws in both model and human written summaries, and intentional flaws in summaries written by humans to be deliberately misleading. We study scaling properties of critiquing with both topic-based summarization and synthetic tasks. Larger models write more helpful critiques, and on most tasks, are better at self-critiquing, despite having harder-to-critique outputs. Larger models can also integrate their own self-critiques as feedback, refining their own summaries into better ones. Finally, we motivate and introduce a framework for comparing critiquing ability to generation and discrimination ability. Our measurements suggest that even large models may still have relevant knowledge they cannot or do not articulate as critiques. These results are a proof of concept for using AI-assisted human feedback to scale the supervision of machine learning systems to tasks that are difficult for humans to evaluate directly. We release our training datasets, as well as samples from our critique assistance experiments. |
2101.03961 | In deep learning, models typically reuse the same parameters for all inputs.
Mixture of Experts (MoE) models defy this and instead select different parameters for each incoming example.
The result is a sparsely-activated model—with an outrageous number of parameters—but a constant computational cost.
However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability.
We address these with the introduction of the Switch Transformer.
We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs.
Our proposed training techniques mitigate the instabilities, and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats.
We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages.
Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model. JAX code for Switch Transformer and all model checkpoints are available at https://github.com/google-research/t5x; Tensorflow code for Switch Transformer is available at https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/moe.py |
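A toy numpy sketch of the simplified top-1 ("switch") routing described above: a router picks exactly one expert per token, and that expert's output is scaled by the router probability, so compute per token stays constant while total parameters grow with the number of experts. Capacity factors and the auxiliary load-balancing loss from the paper are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def switch_layer(x: np.ndarray, w_router: np.ndarray, experts: list) -> np.ndarray:
    """Top-1 routing for a batch of token representations x of shape (tokens, d)."""
    logits = x @ w_router                                   # (tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    chosen = probs.argmax(axis=-1)                          # one expert per token
    out = np.zeros_like(x)
    for e, expert in enumerate(experts):
        idx = np.where(chosen == e)[0]
        if idx.size:
            # Scale each routed token's expert output by its router probability.
            out[idx] = expert(x[idx]) * probs[idx, e][:, None]
    return out

d, n_experts, tokens = 16, 4, 8
experts = [(lambda W: (lambda h: np.tanh(h @ W)))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]
w_router = rng.standard_normal((d, n_experts))
x = rng.standard_normal((tokens, d))
print(switch_layer(x, w_router, experts).shape)
```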
2201.08239 | We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text.
While model scaling alone can improve quality, it shows smaller improvements on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding.
The first challenge, safety, involves ensuring that the model’s responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency. |
1801.10198 | We show that generating English Wikipedia articles can be approached as a multi-document
summarization of source documents.
We use extractive summarization to coarsely identify salient information
and a neural abstractive model to
generate the article.
For the abstractive model, we introduce a decoder-only architecture that can
scalably attend to very long sequences,
much longer than typical encoder-decoder architectures
used in sequence transduction.
We show that this model can generate fluent, coherent multi-sentence
paragraphs and even whole Wikipedia articles.
When given reference documents, we show it can extract relevant
factual information
as reflected in perplexity, ROUGE scores and human evaluations. |
2209.14500 | Large language models such as GPT-3 (Brown et al., 2020) can perform arbitrary tasks without undergoing fine-tuning after being prompted with only a few labeled examples. An arbitrary task can be reformulated as a natural language prompt, and a language model can be asked to generate the completion, indirectly performing the task in a paradigm known as prompt-based learning. To date, emergent prompt-based learning capabilities have mainly been demonstrated for unidirectional language models. However, bidirectional language models pre-trained on denoising objectives such as masked language modeling produce stronger learned representations for transfer learning. This motivates the possibility of prompting bidirectional models, but their pre-training objectives have made them largely incompatible with the existing prompting paradigm. We present SAP (Sequential Autoregressive Prompting), a technique that enables the prompting of bidirectional models. Utilizing the machine translation task as a case study, we prompt the bidirectional mT5 model (Xue et al., 2021) with SAP and demonstrate its few-shot and zero-shot translations outperform the few-shot translations of unidirectional models like GPT-3 and XGLM (Lin et al., 2021), despite mT5 having approximately 50% fewer parameters. We further show SAP is effective on question answering and summarization. For the first time, our results demonstrate prompt-based learning is an emergent property of a broader class of language models, rather than only unidirectional models. |
2211.09110 | Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood.
We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models.
First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata) that are of interest for LMs.
Then we select a broad subset based on coverage and feasibility, noting what’s missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness) .
Second, we adopt a multi-metric approach:
We measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency)
for each of 16 core scenarios
to the extent possible (87.5% of the time), ensuring that metrics beyond accuracy don't fall by the wayside, and that trade-offs across models and metrics are clearly exposed.
We also perform 7 targeted evaluations, based on 26 targeted scenarios, to more deeply analyze specific aspects (e.g. knowledge, reasoning, memorization/copyright, disinformation) .
Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, including 21 scenarios that were not previously used in mainstream LM evaluation.
Prior to HELM,
models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common.
We improve this to 96.0%: now all 30 models have been densely benchmarked on a set of core scenarios and metrics under standardized conditions.
Our evaluation surfaces 25 top-level findings concerning the interplay between different scenarios, metrics, and models.
For full transparency, we release all raw model prompts and completions publicly at https://crfm.stanford.edu/helm/v0.1.0 for further analysis, as well as a general modular toolkit for easily adding new scenarios, models, metrics, and prompting strategies: https://github.com/stanford-crfm/helm. We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models. |
2302.07388 | Pretrained large language models have become indispensable for solving various natural language processing (NLP) tasks.
However, safely deploying them in real world applications is challenging because they generate toxic content.
To address this challenge, we propose two novel pretraining data augmentation strategies that significantly reduce model toxicity without compromising its utility.
Our two strategies are: (1) MEDA: adds raw toxicity score as meta-data to the pretraining samples, and (2) INST: adds instructions to those samples indicating their toxicity.
Our results indicate that our best performing strategy (INST) substantially reduces the toxicity probability while preserving the accuracy on five benchmark NLP tasks as well as improving AUC scores on four bias detection tasks.
We also demonstrate the generalizability of our techniques by scaling the number of training samples and the number of model parameters. |
2302.14691 | In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference. TAPP is different from canonical prompts for LLMs in that it is a fixed prompt prepended to the beginning of every input regardless of the target task for zero-shot generalization. We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, resulting in 34.58% and 12.26% improvement on average, respectively. This implies that the instruction-following ability of LLMs can be improved during inference time with a fixed prompt constructed with simple heuristics.
We hypothesize that TAPP assists language models to better estimate the output distribution by focusing more on the instruction of the target task during inference. In other words, such ability does not seem to be sufficiently activated in not only base LLMs but also many instruction-fine-tuned LLMs. All experiments are reproducible from github.com/seonghyeonye/TAPP. |
2302.03202 | Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks. Previous work has shown that scaling the number of training tasks is the key component in making stronger MT LMs. In this work, we report an unexpected finding that an expert LM fine-tuned on just a single task can outperform an MT LM trained with 300+ different tasks on 11 different unseen datasets and on 13 datasets of the BIG-bench benchmark by a mean accuracy of 3.20% and 1.29%, respectively. This finding casts doubt on the previously held belief that simply scaling the number of tasks makes stronger MT LMs. Leveraging this finding, we further show that this distributed approach of training a separate expert LM per training task instead of a single MT LM for zero-shot inference possesses many benefits including (1) avoiding negative task transfer that often occurs during instruction tuning, (2) being able to continually learn new tasks without having to re-train on previous tasks to avoid catastrophic forgetting, and (3) showing compositional capabilities when merging individual experts together. The code is available at https://github.com/joeljang/ELM. |
2301.13688 | We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 models.
Through careful ablation studies on the Flan Collection of instruction tuning tasks and methods, we tease apart the effect of design decisions that enable Flan-T5 to outperform prior work by 3-17%+ across evaluation settings.
We find task balancing and enrichment techniques are overlooked but critical to effective instruction tuning, and in particular, training with mixed prompt settings (zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+) performance in all settings.
In further experiments, we show Flan-T5 requires less finetuning to converge higher and faster than T5 on single downstream tasks—motivating instruction-tuned models as more computationally-efficient starting checkpoints for new tasks.
Finally, to accelerate research on instruction tuning, we make the Flan 2022 collection of datasets, templates, and methods publicly available. Data generation code is available at https://github.com/google-research/FLAN/tree/main/flan/v2 and allows users to vary mixture rates, templates, prompt types, and data augmentation techniques for faster public research. |
2212.12017 | Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats – PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework. |
2204.07937 | Humans can perform unseen tasks by recalling relevant skills acquired previously and then generalizing them to the target tasks, even if there is no supervision at all.
In this paper, we aim to improve this kind of cross-task generalization ability of massive multi-task language models, such as T0 and FLAN, in an unsupervised setting.
We propose a retrieval-augmentation method named ReCross that takes a few unlabelled examples as queries to retrieve a small subset of upstream data and uses them to update the multi-task model for better generalization.
ReCross is a straightforward yet effective retrieval method that combines both efficient dense retrieval and effective pair-wise reranking.
Our results and analysis show that it significantly outperforms both non-retrieval methods and other baseline methods. Our data, code, and supplementary materials are at https://inklab.usc.edu/ReCross/. |
2210.02969 | Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance. However, meta-trained LMs still struggle to generalize to challenging tasks containing novel labels unseen during meta-training. In this paper, we propose Flipped Learning, an alternative method of meta-training which trains the LM to generate the task instruction given the input instance and label. During inference, the LM trained with Flipped Learning, referred to as Flipped, selects the label option that is most likely to generate the task instruction. On 14 tasks of the BIG-bench benchmark, the 11B-sized Flipped outperforms zero-shot T0-11B and even a 16 times larger 3-shot GPT-3 (175B) on average by 8.4% and 9.7% points, respectively. Flipped gives particularly large improvements on tasks with unseen labels, outperforming T0-11B by up to +20% average F1 score. This indicates that the strong task generalization of Flipped comes from improved generalization to novel labels. We release our code at github.com/seonghyeonye/Flipped-Learning. |
2212.09689 | Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions.
These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions.
In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor.
We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth.
This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs.
Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks.
These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification. We make our data publicly available at https://github.com/orhonovich/unnatural-instructions |
2210.11416 | Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks.
In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data.
We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM) , prompting setups (zero-shot, few-shot, CoT) , and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation, RealToxicityPrompts) .
For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average) . Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU.
We also publicly release Flan-T5 checkpoints (https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints), which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B.
Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models. |
How well can NLP models generalize to a variety of unseen tasks when provided with task instructions?
To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions (a super-sized expansion of NaturalInstructions, which had 61 tasks).
Our collection covers 76 distinct task types,
including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition.
This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions—training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model
trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples).
Our experiments show that Tk-Instruct outperforms existing
instruction-following models such as InstructGPT by over 9% on our benchmark despite being an order of magnitude smaller.
We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes.
We hope our dataset and model facilitate future progress towards more general-purpose
NLP models. The dataset, models, and a leaderboard can be found at https://instructions.apps.allenai.org. |
2109.01652 | This paper explores a simple method for improving the zero-shot learning abilities of language models.
We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates.
We evaluate this instruction-tuned model, which we call FLAN, on unseen task types.
FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate.
FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze.
Ablation studies reveal that the number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning. |
2203.02155 | Making language models bigger does not inherently make them better at following a user’s intent.
For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.
In other words, these models are not aligned with their users.
In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback.
Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning.
We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback.
We call the resulting models InstructGPT.
In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters.
Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets.
Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent. |
2110.08207 | Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. It has been hypothesized that this is a consequence of implicit multitask learning in language models’ pretraining. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language tasks into a human-readable prompted form.
We convert a large set of supervised datasets, each with multiple prompts with diverse wording.
These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks.
We fine-tune a pretrained encoder-decoder model on this multitask mixture covering a wide variety of tasks.
The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size.
Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size.
All trained models are available at https://github.com/bigscience-workshop/t-zero, and all prompts are available at https://github.com/bigscience-workshop/promptsource. |
2304.14402 | Large language models (LLMs) with instruction fine-tuning demonstrate superior generative capabilities. However, these models are resource-intensive. To alleviate this issue, we explore distilling knowledge from instruction-tuned LLMs into much smaller ones.
To this end, we carefully develop a large set of 2.58M instructions based on both existing and newly-generated instructions. In addition to being sizable, we design our instructions to cover a broad set of topics to ensure diversity. Extensive analysis of our instruction dataset confirms its diversity, and we generate responses for these instructions using gpt-3.5-turbo.
Leveraging these instructions, we fine-tune a diverse herd of models, collectively referred to as LaMini-LM, which includes models from both the encoder-decoder and decoder-only families, with varying sizes.
We evaluate the performance of our models using automatic metrics on 15 different natural language processing (NLP) benchmarks, as well as through human assessment. We also assess the model for hallucination and toxicity, and for the former, we introduce a new benchmark dataset for hallucination-inducing QA.
The results demonstrate that our proposed LaMini-LM models are comparable to strong baselines while being much smaller in size. Our code, model checkpoints, and dataset are available at https://github.com/mbzuai-nlp/LaMini-LM |
2304.11477v1 | Large language models (LLMs) have demonstrated remarkable zero-shot generalization abilities: state-of-the-art chatbots can provide plausible answers to many common questions that arise in daily life.
However, so far, LLMs cannot reliably solve long-horizon robot planning problems.
By contrast, classical planners, once a problem is given in a formatted way, can use efficient search algorithms to quickly identify correct, or even optimal, plans.
In an effort to get the best of both worlds, this paper introduces LLM+P, the first framework that incorporates the strengths of classical planners into LLMs. LLM+P takes in a natural language description of a planning problem, then returns a correct (or optimal) plan for solving that problem in natural language. LLM+P does so by first converting the language description into a file written in the planning domain definition language (PDDL), then leveraging classical planners to quickly find a solution, and then translating the found solution back into natural language. Along with LLM+P, we define a diverse set of different benchmark problems taken from robot planning scenarios.
Via a comprehensive set of experiments on these benchmark problems, we find that LLM+P is able to provide optimal solutions for most problems, while LLMs fail to provide even feasible plans for most problems.
We also show that LLM+P enables a home robot to solve a complex manipulation task that is specified by the user in natural language. The code and results are publicly available at https://github.com/Cranial-XIX/llm-pddl.git. |
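The three-stage pipeline described above (language to PDDL, classical planner, plan back to language) is simple enough to sketch. The snippet below is an illustration only, not the released code: `call_llm` is a stub for whatever LLM client is available, and the planner invocation assumes a Fast Downward-style binary on the PATH.

```python
# Rough sketch of an LLM+P-style pipeline (not the authors' released code).
# `call_llm` and the planner binary are placeholders for your own LLM API and
# an installed PDDL planner such as Fast Downward.
import pathlib
import subprocess
import tempfile

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def llm_plus_p(nl_problem: str, domain_pddl_path: str) -> str:
    # 1) The LLM converts the natural-language problem into a PDDL problem file.
    problem_pddl = call_llm(
        "Translate this planning problem into a PDDL problem file "
        f"for the given domain:\n{nl_problem}")
    with tempfile.NamedTemporaryFile("w", suffix=".pddl", delete=False) as f:
        f.write(problem_pddl)
        problem_path = f.name
    # 2) A classical planner searches for a (possibly optimal) plan.
    subprocess.run(["fast-downward.py", domain_pddl_path, problem_path,
                    "--search", "astar(lmcut())"], check=True)
    plan = pathlib.Path("sas_plan").read_text()
    # 3) The LLM translates the symbolic plan back into natural language.
    return call_llm(f"Explain this plan in plain English:\n{plan}")
```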
2304.11490 | Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain of thought reasoning and step-by-step thinking instructions. We found that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) (all models excluding Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate prompting enhances LLM ToM reasoning, and they underscore the context-dependent nature of LLM cognitive capacities. |
2203.08913 | Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.
We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately.
In this work, we extend language models with the ability to memorize the internal representations of past inputs.
We demonstrate that an approximate kNN lookup into a non-differentiable memory of recent (key, value) pairs improves language modeling across various benchmarks and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19), code (GitHub), as well as formal theorems (Isabelle).
We show that the performance steadily improves when we increase the size of memory up to 262K tokens.
On benchmarks including code and mathematics, we find that the model is capable of making use of newly defined functions and theorems during test time. |
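A toy sketch of the external-memory idea above, assumed rather than taken from the paper's implementation: cache (key, value) pairs from past segments and attend over the nearest keys for each new query. An exact top-k search stands in for the approximate kNN index a real system would use.

```python
# Toy sketch (not the paper's code) of an approximate-kNN external memory:
# store (key, value) pairs from earlier context and retrieve the nearest ones.
import numpy as np

class KNNMemory:
    def __init__(self, dim: int, max_size: int = 262_144):
        self.keys = np.empty((0, dim), dtype=np.float32)
        self.values = np.empty((0, dim), dtype=np.float32)
        self.max_size = max_size

    def add(self, keys: np.ndarray, values: np.ndarray) -> None:
        # Append the current segment's (key, value) pairs, evicting the oldest.
        self.keys = np.concatenate([self.keys, keys])[-self.max_size:]
        self.values = np.concatenate([self.values, values])[-self.max_size:]

    def lookup(self, queries: np.ndarray, k: int = 32) -> np.ndarray:
        # Exact top-k by dot product; a real system would use an ANN index.
        sims = queries @ self.keys.T                      # (queries, memories)
        topk = np.argsort(-sims, axis=-1)[:, :k]          # nearest memories
        weights = np.take_along_axis(sims, topk, axis=-1)
        weights = np.exp(weights - weights.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Softmax-weighted average of retrieved values, as attention over memory.
        return np.einsum("qk,qkd->qd", weights, self.values[topk])

mem = KNNMemory(dim=64)
mem.add(np.random.randn(128, 64).astype(np.float32),
        np.random.randn(128, 64).astype(np.float32))
print(mem.lookup(np.random.randn(4, 64).astype(np.float32)).shape)  # (4, 64)
```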
We explore how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning.
In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks.
The empirical gains can be striking.
For instance, prompting a PaLM 540B with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier. |
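The method is easiest to see as a prompt. The exemplar below is in the style of the paper's arithmetic demonstrations; it is illustrative rather than copied from the released prompt set.

```python
# Illustrative chain-of-thought prompt: one worked exemplar, then a new question.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more.
How many apples do they have?
A:"""
# Sending `cot_prompt` to a sufficiently large LM typically elicits the
# intermediate steps ("23 - 20 = 3, 3 + 6 = 9") before the final answer.
print(cot_prompt)
```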
2304.09151 | Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However, previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language’s corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling. https://github.com/google-research/t5x/blob/main/docs/models.md |
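The capped-repeat idea can be sketched as a budget-allocation routine: spread the training budget as uniformly as possible across languages while never repeating a language's corpus more than a fixed number of epochs. The code below is a paraphrase of that idea, not the released UniMax implementation, and the corpus sizes are made up.

```python
# Hedged sketch of a UniMax-style allocation: uniform character budget per
# language, subject to a cap on how many epochs any corpus may be repeated.
def unimax_allocation(corpus_chars: dict, total_budget: float, max_epochs: float = 4.0):
    remaining = dict(corpus_chars)
    budget = total_budget
    alloc = {}
    while remaining:
        per_lang = budget / len(remaining)          # tentative uniform share
        capped = {l: c for l, c in remaining.items() if c * max_epochs <= per_lang}
        if not capped:
            # No cap binds: every remaining language gets the uniform share.
            for lang in remaining:
                alloc[lang] = per_lang
            break
        # Tail languages hit their repeat cap; give them their maximum and
        # redistribute the leftover budget over the head languages.
        for lang, chars in capped.items():
            alloc[lang] = chars * max_epochs
            budget -= chars * max_epochs
            del remaining[lang]
    return alloc

# Made-up corpus sizes (characters) and training budget.
print(unimax_allocation({"en": 1e12, "sw": 1e9, "yo": 1e8}, total_budget=5e11))
```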
2212.03533 | This paper presents E5 (EmbEddings from bidirEctional Encoder rEpresentations),
a family of state-of-the-art text embeddings that transfer well to a wide range of tasks.
The model is trained in a contrastive manner
with weak supervision signals from our curated large-scale text pair dataset (called CCPairs) .
E5 can be readily used as a general-purpose embedding model for any tasks requiring a single-vector representation of texts such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings.
We conduct extensive evaluations on datasets from the BEIR and MTEB benchmarks.
For zero-shot settings, E5 is the first model that outperforms the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data.
When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with more parameters. |
2304.09433 | A long standing goal of the data management community is to develop general, automated systems that ingest semi-structured documents and output queryable tables without human effort or domain-specific customization. Given the sheer variety of potential documents, state-of-the-art systems make simplifying assumptions
and use domain-specific training.
In this work, we ask whether we can maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad data, can perform diverse downstream tasks simply conditioned on natural language task descriptions. We propose and evaluate Evaporate, a simple, prototype system powered by LLMs. We identify two fundamentally different strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM.
To improve quality while maintaining low cost, we propose an extended code synthesis implementation, Evaporate-Code+, which achieves better quality than direct extraction. Our key insight is to generate many candidate functions and ensemble their extractions using weak supervision. Evaporate-Code+ not only outperforms the state-of-the-art systems, but does so using a sublinear pass over the documents with the LLM. This equates to a 110x reduction in the number of tokens the LLM needs to process, averaged across 16 real-world evaluation settings of 10k documents each. |
2201.06796 | Large language models (LMs) offer unprecedented language generation capabilities and exciting opportunities for interaction design.
However, their highly context-dependent capabilities are difficult to grasp and are often subjectively interpreted.
In this paper, we argue that by curating and analyzing large interaction datasets, the HCI community can foster more incisive examinations of LMs’ generative capabilities.
Exemplifying this approach, we present CoAuthor, a dataset designed for revealing GPT-3’s capabilities in assisting creative and argumentative writing. CoAuthor captures rich interactions between 63 writers and four instances of GPT-3 across 1445 writing sessions.
We demonstrate that CoAuthor can address questions about GPT-3’s language, ideation, and collaboration capabilities,
and reveal its contribution as a writing “collaborator” under various definitions of good collaboration.
Finally, we discuss how this work may facilitate a more principled discussion around LMs’ promises and pitfalls in relation to interaction design.
The dataset and an interface for replaying the writing sessions are publicly available at https://coauthor.stanford.edu. |
2202.10054 | When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer—the “head”) .
It is well known that fine-tuning leads to better accuracy in-distribution (ID) .
However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large.
On 10 distribution shift datasets (Breeds-Living17, Breeds-Entity30, DomainNet, CIFARSTL, CIFAR10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch) , fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing.
We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks.
We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head—this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features.
Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT) , sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing.
Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets (1% better ID, 10% better OOD than full fine-tuning) . |
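The two-stage LP-FT recipe is straightforward to sketch in PyTorch. The snippet below is a minimal illustration under assumed placeholder names (`model`, `head`, `train_loader`), not the authors' code; learning rates and epoch counts are arbitrary.

```python
# Minimal PyTorch sketch of linear probing then full fine-tuning (LP-FT).
import torch

def lp_then_ft(model, head, train_loader, lp_epochs=5, ft_epochs=5):
    loss_fn = torch.nn.CrossEntropyLoss()
    # Stage 1: linear probing - backbone frozen, only the head is updated.
    for p in model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(lp_epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(head(model(x)), y).backward()
            opt.step()
    # Stage 2: full fine-tuning starting from the probed head, so early updates
    # distort the pretrained features less than a random head would.
    for p in model.parameters():
        p.requires_grad_(True)
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-5)
    for _ in range(ft_epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(head(model(x)), y).backward()
            opt.step()
    return model, head
```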
2302.10866 | Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to be combined with dense attention layers to match Transformers, indicating a gap in capability. In this work, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating. In recall and reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state spaces and other implicit and explicit methods, matching attention-based models. We set a new state-of-the-art for dense-attention-free architectures on language modeling in standard datasets (WikiText103 and The Pile), reaching Transformer quality with a 20% reduction in training compute required at sequence length 2K. Hyena operators are twice as fast as highly optimized attention at sequence length 8K, and 100× faster at sequence length 64K. |
2304.08460 | Instruction tuning enables language models to more effectively generalize and better follow user intent.
However, obtaining instruction data is costly and challenging. Prior work employs methods such as expensive human annotation, crowd-sourced datasets with alignment issues, and generating noisy examples via LLMs.
We introduce the LongForm-C dataset, which is created by reverse instructions.
We generate instructions via LLMs for human-written corpus examples using reverse instructions.
First we select a diverse set of human-written documents from corpora such as C4 and Wikipedia; then we generate instructions for these documents via LLMs.
This approach provides a cheaper and cleaner instruction-tuning dataset with natural output, and one suitable for long text generation.
Our models outperform 10x larger language models without instruction tuning on tasks such as story/recipe generation and long-form question answering.
Moreover, LongForm models outperform prior instruction-tuned models such as FLAN-T5 and Alpaca by a large margin, and improve language understanding capabilities further.
Finally, our models can effectively follow and answer multilingual instructions; we demonstrate this for news generation.
We publicly release our data and models: https://github.com/akoksal/LongForm. |
1811.03600 | Recent hardware developments have dramatically increased the scale of data parallelism available for neural network training. Among the simplest ways to harness next-generation hardware is to increase the batch size in standard mini-batch neural network training algorithms. In this work, we aim to experimentally characterize the effects of increasing the batch size on training time, as measured by the number of steps necessary to reach a goal out-of-sample error.
We study how this relationship varies with the training algorithm, model, and data set, and find extremely large variation between workloads. Along the way, we show that disagreements in the literature on how batch size affects model quality can largely be explained by differences in metaparameter tuning and compute budgets at different batch sizes. We find no evidence that larger batch sizes degrade out-of-sample performance. Finally, we discuss the implications of our results on efforts to train neural networks much faster in the future. Our experimental data is publicly available as a database of 71,638,836 loss measurements taken over the course of training for 168,160 individual models across 35 workloads. |
2301.03266 | Doc2Query — the process of expanding the content of a document before indexing using a sequence-to-sequence model — has emerged as a prominent technique for improving the first-stage retrieval effectiveness of search engines. However, sequence-to-sequence models are known to be prone to “hallucinating” content that is not present in the source text. We argue that Doc2Query is indeed prone to hallucination, which ultimately harms retrieval effectiveness and inflates the index size. In this work, we explore techniques for filtering out these harmful queries prior to indexing. We find that using a relevance model to remove poor-quality queries can improve the retrieval effectiveness of Doc2Query by up to 16%, while simultaneously reducing mean query execution time by 23% and cutting the index size by 33%. We release the code, data, and a live demonstration to facilitate reproduction and further exploration. https://github.com/terrierteam/pyterrier_doc2query |
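The filtering step described above can be sketched as generate-then-score: produce expansion queries for a document, score each against the document with a relevance model, and index only the confident ones. Checkpoint names below are illustrative public models, not necessarily the paper's choices, and the kept fraction is a free parameter.

```python
# Illustrative sketch (not the released pyterrier_doc2query code) of filtering
# Doc2Query expansions with a relevance model before indexing.
from transformers import T5Tokenizer, T5ForConditionalGeneration
from sentence_transformers import CrossEncoder

doc = "The Eiffel Tower was completed in 1889 and is 330 metres tall."

# Generate candidate expansion queries with a doc2query model.
tok = T5Tokenizer.from_pretrained("castorini/doc2query-t5-base-msmarco")
gen = T5ForConditionalGeneration.from_pretrained("castorini/doc2query-t5-base-msmarco")
ids = tok(doc, return_tensors="pt").input_ids
queries = [tok.decode(o, skip_special_tokens=True)
           for o in gen.generate(ids, do_sample=True, num_return_sequences=10,
                                 max_new_tokens=32)]

# Score (query, document) pairs with a relevance model and keep the best ones.
scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = scorer.predict([(q, doc) for q in queries])
ranked = sorted(zip(queries, scores), key=lambda qs: qs[1], reverse=True)
kept = [q for q, _ in ranked[: int(0.7 * len(ranked))]]  # keep the top ~70%
print(kept)  # only these queries get appended to the document before indexing
```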
2110.08193 | It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA) .
We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts.
Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model’s biases override a correct answer choice.
We find that models often rely on stereotypes when the context is under-informative, meaning the model’s outputs consistently reproduce harmful biases in this setting.
Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested. |
2302.07459 | We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to “morally self-correct”—to avoid producing harmful outputs—if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles. |
2211.09066 | Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, prior work showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines. |
2208.07339 | Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cuts the memory needed for inference by half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication, to quantize most of the features. However, for the emergent outliers, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while still more than 99.9% of values are multiplied in 8-bit. Using LLM.int8(), we show empirically it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs.
We open source our software. |
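A NumPy toy version of the two-part scheme (per-vector int8 quantization plus a 16-bit path for outlier feature dimensions) is sketched below; it is a simplification for illustration, not the bitsandbytes kernels, and the outlier threshold is arbitrary.

```python
# NumPy sketch of mixed-precision int8 matmul: most features go through int8
# with per-vector scales, outlier feature dimensions stay in 16-bit.
import numpy as np

def mixed_int8_matmul(X, W, threshold=6.0):
    # Outlier feature dimensions: columns of X with any large-magnitude value.
    outlier = np.abs(X).max(axis=0) > threshold
    # 16-bit path for the (rare) outlier dimensions.
    out_fp16 = X[:, outlier].astype(np.float16) @ W[outlier, :].astype(np.float16)
    # Int8 path with per-row scales for X and per-column scales for W.
    Xr, Wr = X[:, ~outlier], W[~outlier, :]
    sx = np.abs(Xr).max(axis=1, keepdims=True) / 127.0 + 1e-12
    sw = np.abs(Wr).max(axis=0, keepdims=True) / 127.0 + 1e-12
    Xq = np.round(Xr / sx).astype(np.int8)
    Wq = np.round(Wr / sw).astype(np.int8)
    out_int8 = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)
    return out_fp16.astype(np.float32) + out_int8

X = np.random.randn(4, 512).astype(np.float32)
X[:, 7] *= 20.0                      # fake an emergent outlier dimension
W = np.random.randn(512, 256).astype(np.float32)
print(np.abs(mixed_int8_matmul(X, W) - X @ W).max())  # small quantization error
```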
2304.11062 | A major limitation for the broader scope of problems solvable by transformers is the quadratic scaling of computational complexity with input size. In this study, we investigate the recurrent memory augmentation of pre-trained transformer models to extend input context length while linearly scaling compute. Our approach demonstrates the capability to store information in memory for sequences of up to an unprecedented two million tokens while maintaining high retrieval accuracy. Experiments with language modeling tasks show perplexity improvement as the number of processed input segments increases. These results underscore the effectiveness of our method, which has significant potential to enhance long-term dependency handling in natural language understanding and generation tasks, as well as enable large-scale context processing for memory-intensive applications. |
2206.07682 | Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks.
This paper instead discusses an unpredictable phenomenon that we refer to asemergent abilitiesof large language models.
We consider an ability to be emergent if it is not present in smaller models but is present in larger models.
Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models.
The existence of such emergence raises the question of whether additional scaling could potentially further expand the range of capabilities of language models. |
2205.14334 | We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language – without use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. “90% confidence” or “high confidence”). These levels map to probabilities that are well calibrated.
The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words (“verbalized probability”) to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift.
We also provide evidence that GPT-3’s ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers. |
2302.08582 | Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if generated by an LM: falsehoods, offensive comments, personally identifiable information, low-quality or buggy code, and more.
Here, we explore alternative objectives for pretraining LMs in a way that also guides them to generate text aligned with human preferences. We benchmark five objectives for pretraining with human feedback across three tasks and study how they affect the trade-off between alignment and capabilities of pretrained LMs. We find a Pareto-optimal and simple approach among those we explored: conditional training, or learning a distribution over tokens conditional on their human preference scores given by a reward model. Conditional training reduces the rate of undesirable content by up to an order of magnitude, both when generating without a prompt and with an adversarially-chosen prompt.
Moreover, conditional training maintains the downstream task performance of standard LM pretraining, both before and after task-specific finetuning.
Pretraining with human feedback results in much better preference satisfaction than standard LM pretraining followed by finetuning with feedback, i.e., learning and then unlearning undesirable behavior. Our results suggest that we should move beyond imitation learning when pretraining LMs and incorporate human preferences from the start of training. |
2304.08467 | Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of “gist” tokens which can be cached and reused for compute efficiency. Gist models can be trained with no additional cost over standard instruction finetuning by simply modifying Transformer attention masks to encourage prompt compression.
On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs,
gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, and storage savings, all with minimal loss in output quality. |
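One way to read the masking trick is as a sketch of the attention mask itself: a causal mask in which tokens after the gist span cannot see the original prompt, which forces the prompt's content to be compressed into the gist activations. The snippet below is an interpretation for illustration, not the released implementation.

```python
# Sketch of a "gist mask": causal attention, except that positions after the
# gist tokens are blocked from attending to the prompt tokens before them.
import torch

def gist_attention_mask(seq_len: int, gist_start: int, gist_end: int) -> torch.Tensor:
    # Start from a standard causal mask: True where attention is allowed.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Queries positioned after the gist span must not see the prompt tokens
    # that precede the gist span.
    mask[gist_end:, :gist_start] = False
    return mask

# 3 prompt tokens, 2 gist tokens, 4 task-input/completion tokens.
print(gist_attention_mask(seq_len=9, gist_start=3, gist_end=5).int())
```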
1606.06565 | Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem ofaccidentsin machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”) , an objective function that is too expensive to evaluate frequently (“scalable supervision”) , or undesirable behavior during the learning process (“safe exploration” and “distributional shift”) . We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI. |
2212.14024v2 | Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM) .
Existing work has combined these in simple “retrieve-then-read” pipelines in which the RM retrieves passages that are inserted into the LM prompt.
To begin to fully realize the potential of frozen LMs and RMs, we propose Demonstrate–Search–Predict (DSP), a framework that relies on passing natural language texts in sophisticated pipelines between an LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware demonstrations, search for relevant passages, and generate grounded predictions, systematically breaking down problems into small transformations that the LM and RM can handle more reliably.
We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings, establishing in early evaluations new state-of-the-art in-context learning results and delivering 37–120%, 8–39%, and 80–290% relative gains against the vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a contemporaneous self-ask pipeline, respectively. We release DSP at https://github.com/stanfordnlp/dsp. |
2304.05332 | Transformer-based large language models are rapidly advancing in the field of machine learning research, with applications spanning natural language, biology, chemistry, and computer programming. Extreme scaling and reinforcement learning from human feedback have significantly improved the quality of generated text, enabling these models to perform various tasks and reason about their choices. In this paper, we present an Intelligent Agent system that combines multiple large language models for autonomous design, planning, and execution of scientific experiments. We showcase the Agent's scientific research capabilities with three distinct examples, with the most complex being the successful performance of catalyzed cross-coupling reactions. Finally, we discuss the safety implications of such systems and propose measures to prevent their misuse. |
2303.16434 | Artificial Intelligence (AI) has made incredible progress recently. On the one hand, advanced foundation models like ChatGPT can offer powerful conversation, in-context learning and code generation abilities on a broad range of open-domain tasks. They can also generate high-level solution outlines for domain-specific tasks based on the common sense knowledge they have acquired. However, they still face difficulties with some specialized tasks because they lack enough domain-specific data during pre-training or they often have errors in their neural network computations on those tasks that need accurate executions. On the other hand, there are also many existing models and systems (symbolic-based or neural-based) that can do some domain-specific tasks very well. However, due to the different implementation or working mechanisms, they are not easily accessible or compatible with foundation models. Therefore, there is a clear and pressing need for a mechanism that can leverage foundation models to propose task solution outlines and then automatically match some of the sub-tasks in the outlines to the off-the-shelf models and systems with special functionalities to complete them. Inspired by this, we introduce TaskMatrix.AI as a new AI ecosystem that connects foundation models with millions of APIs for task completion. Unlike most previous work that aimed to improve a single AI model, TaskMatrix.AI focuses more on using existing foundation models (as a brain-like central system) and APIs of other AI models and systems (as sub-task solvers) to achieve diversified tasks in both digital and physical domains. As a position paper, we will present our vision of how to build such an ecosystem, explain each key component, and use study cases to illustrate both the feasibility of this vision and the main challenges we need to address next. |
2112.04426 | We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens.
With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25x fewer parameters.
After fine-tuning, Retro performance translates to downstream knowledge-intensive tasks such as question answering. Retro combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training.
We typically train Retro from scratch, yet can also rapidly Retrofit pre-trained transformers with retrieval and still achieve good performance.
Our work opens up new avenues for improving language models through explicit memory at unprecedented scale. |
2206.13353 | This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire – especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe. I assign rough subjective credences to the premises in this argument, and I end up with an overall estimate of ~5% that an existential catastrophe of this kind will occur by 2070.(May 2022 update: since making this report public in April 2021, my estimate here has gone up, and is now at >10%.) |
2205.14135 | Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length.
Approximate attention methods have attempted to address this problem by
trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup.
We argue that a missing principle is making attention algorithms IO-aware—accounting for reads and writes between levels of GPU memory.
We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM.
We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes.
We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3x speedup on GPT-2 (seq. length 1K), and 2.4x speedup on long-range arena (seq. length 1K-4K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy). |
2304.01481 | The remarkable performance of large language models (LLMs) on complex linguistic tasks has sparked a lively debate on the nature of their capabilities. Unlike humans, these models learn language exclusively from textual data, without direct interaction with the real world. Nevertheless, they can generate seemingly meaningful text about a wide range of topics. This impressive accomplishment has rekindled interest in the classical ‘Symbol Grounding Problem,’ which questioned whether the internal representations and outputs of classical symbolic AI systems could possess intrinsic meaning. Unlike these systems, modern LLMs are artificial neural networks that compute over vectors rather than symbols. However, an analogous problem arises for such systems, which we dub the Vector Grounding Problem. This paper has two primary objectives. First, we differentiate various ways in which internal representations can be grounded in biological or artificial systems, identifying five distinct notions discussed in the literature: referential, sensorimotor, relational, communicative, and epistemic grounding. Unfortunately, these notions of grounding are often conflated. We clarify the differences between them, and argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem. Second, drawing on theories of representational content in philosophy and cognitive science, we propose that certain LLMs, particularly those fine-tuned with Reinforcement Learning from Human Feedback (RLHF) , possess the necessary features to overcome the Vector Grounding Problem, as they stand in the requisite causal-historical relations to the world that underpin intrinsic meaning. We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems. |
2304.01373 | How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce Pythia, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each one of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend Pythia to facilitate research in many areas, and we present several case studies including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights toward LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at https://github.com/EleutherAI/pythia. |
2212.03551 | Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs) . The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as “knows”, “believes”, and “thinks”, when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere. |
2212.09251 | As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user’s preferred answer (“sycophancy”) and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors. |
2203.16634 | Causal transformer language models (LMs) , such as GPT-3, typically require some form of positional encoding, such as positional embeddings.
However, we show that LMs without any explicit positional encoding are still competitive with standard models, and that this phenomenon is robust across different datasets, model sizes, and sequence lengths.
Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information.
We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position.
Our findings indicate that causal LMs might derive positional awareness not only from the explicit positioning mechanism, but also from the effects of the causal mask. |
2002.05202 | Gated Linear Units consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed-forward sublayers of the Transformer sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations. |
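As a concrete instance, a GLU-variant feed-forward sublayer can be sketched as below (here with the SiLU gate, i.e. the SwiGLU case); the dimensions and the omission of biases follow common practice rather than anything specified in the abstract.

```python
# Sketch of a GLU-variant Transformer feed-forward sublayer (SwiGLU case).
import torch
import torch.nn as nn

class SwiGLUFeedForward(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)   # passed through SiLU
        self.up = nn.Linear(d_model, d_ff, bias=False)     # linear branch
        self.down = nn.Linear(d_ff, d_model, bias=False)   # back to model dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Component-wise product of the nonlinear gate and the linear projection.
        return self.down(torch.nn.functional.silu(self.gate(x)) * self.up(x))

ffn = SwiGLUFeedForward(d_model=512, d_ff=1365)
print(ffn(torch.randn(2, 10, 512)).shape)  # torch.Size([2, 10, 512])
```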
2303.10130 | We investigate the potential implications of large language models (LLMs) , such as Generative Pre-trained Transformers (GPTs) , on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. |
2301.12017 | Improving the deployment efficiency of transformer-based language models has been challenging given their high computation and memory cost.
While INT8 quantization has recently been shown to be effective in reducing both the memory cost and latency while preserving model accuracy, it remains unclear whether we can leverage INT4 (which doubles peak hardware throughput) to achieve further latency improvement. In this study, we explore the feasibility of employing INT4 weight and activation (W4A4) quantization for language models. Our findings indicate that W4A4 quantization introduces no to negligible accuracy degradation for encoder-only and encoder-decoder models, but causes a significant accuracy drop for decoder-only models.
To materialize the performance gain using W4A4, we develop a highly-optimized end-to-end W4A4 encoder inference pipeline supporting different quantization strategies.
Our INT4 pipeline is faster than FP16 inference for both latency-oriented and throughput-oriented scenarios, and improves on the SOTA BERT INT8 performance from FasterTransformer.
We provide insights into the failure cases when applying W4A4 to decoder-only models, and further explore the compatibility of INT4 quantization with other compression methods, like pruning and layer reduction. |
2211.03540 | Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat—a trivial baseline strategy for scalable oversight—substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks. |
2109.13916 | Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address.
We present four problems ready for research, namely withstanding hazards (“Robustness”) , identifying hazards (“Monitoring”) , steering ML systems (“Alignment”) , and reducing deployment hazards (“Systemic Safety”) . Throughout, we clarify each problem’s motivation and provide concrete research directions. |
2205.01663 | In the future, powerful AI systems may be deployed in high-stakes settings, where a single failure could be catastrophic. One technique for improving AI safety in high-stakes settings is adversarial training, which uses an adversary to generate examples to train on in order to achieve better worst-case performance. In this work, we used a safe language generation task (“avoid injuries”) as a testbed for achieving high reliability through adversarial training. We created a series of adversarial training techniques—including a tool that assists human adversaries—to find and eliminate failures in a classifier that filters text completions suggested by a generator. In our task, we determined that we can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs. We found that adversarial training increased robustness to the adversarial attacks that we trained on—doubling the time for our contractors to find adversarial examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44 minutes) —without affecting in-distribution performance. We hope to see further work in the high-stakes reliability setting, including more powerful tools for enhancing human adversaries and better ways to measure high levels of reliability, until we can confidently rule out the possibility of catastrophic deployment-time failures of powerful models. |
2210.04243 | Autoregressive Transformers are strong language models but incur per-token generation cost that grows with context length due to the self-attention mechanism.
Recent work proposes kernel-based methods to approximate causal self-attention by replacing it with recurrent formulations with various update rules and feature maps to achieve constant per-token time and memory complexity.
We explore these approaches and find that they are unnecessarily complex, and propose a simple alternative - decaying fast weights - that runs fast on GPU, outperforms prior methods, and retains 99% of attention’s performance for GPT-2.
We also show competitive performance on WikiText-103 against more complex attention substitutes. |
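The decaying-fast-weights idea can be sketched as a simple recurrence over an outer-product state, which is why per-token cost stays constant in sequence length. The snippet below is a simplified paraphrase, not the paper's exact parameterization; the decay values are arbitrary.

```python
# Hedged sketch of a decaying-fast-weights recurrence: the fast-weight matrix is
# decayed each step and updated with the current key/value outer product.
import numpy as np

def decaying_fast_weights(Q, K, V, decay):
    # Q, K: (T, d_k); V: (T, d_v); decay: (d_k,) with entries in (0, 1).
    T, d_k = K.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))          # the "fast weights" state
    out = np.zeros((T, d_v))
    for t in range(T):
        S = decay[:, None] * S + np.outer(K[t], V[t])   # decay, then write
        out[t] = Q[t] @ S                               # read with the query
    return out

T, d = 16, 8
rng = np.random.default_rng(0)
y = decaying_fast_weights(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                          rng.normal(size=(T, d)), decay=np.full(d, 0.9))
print(y.shape)  # (16, 8)
```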
2103.13076 | Transformers have outperformed recurrent neural networks (RNNs) in natural language generation.
But this comes with a significant computational cost, as the attention mechanism’s complexity scales quadratically with sequence length.
Efficient transformer variants have received increasing interest in recent works.
Among them, a linear-complexity recurrent variant has proven well suited for autoregressive generation.
It approximates the softmax attention with randomized or heuristic feature maps,
but can be difficult to train and may yield suboptimal accuracy.
This work aims to convert a pretrained transformer into its efficient recurrent counterpart,
improving efficiency while maintaining accuracy.
Specifically, we propose aswap-then-finetuneprocedure:
in an off-the-shelf pretrained transformer,
we replace the softmax attention with its linear-complexity recurrent alternative
and then finetune.
With a learned feature map, our approach
provides an improved tradeoff between efficiency and accuracy over the standard transformer and other recurrent variants.
We also show that the finetuning process has lower training cost relative to training these recurrent variants from scratch.
As many models for natural language tasks are increasingly dependent on large-scale pretrained transformers, this work presents a viable approach to improving inference efficiency without repeating the expensive pretraining process.111https://github.com/jungokasai/T2R/. |
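To make the swap concrete, here is a small Python sketch of the kind of linear-complexity recurrent attention that replaces softmax attention before finetuning. The feature map shown (an ELU-plus-one stand-in) is an assumption; in the paper the map is learned during finetuning.

import numpy as np

def elu_plus_one(x):
    # Positive feature map used as a stand-in; the paper learns its map instead.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_recurrent_attention(q, k, v, phi=elu_plus_one):
    # Causal attention in O(T) total time: keep running sums of phi(k) v^T and
    # phi(k), then read them out with phi(q) at each step.
    q_f, k_f = phi(q), phi(k)                      # (T, r)
    S = np.zeros((k_f.shape[1], v.shape[1]))
    z = np.zeros(k_f.shape[1])
    out = np.empty_like(v)
    for t in range(q.shape[0]):
        S += np.outer(k_f[t], v[t])
        z += k_f[t]
        out[t] = (q_f[t] @ S) / (q_f[t] @ z + 1e-6)
    return out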
2303.17491 | Agents capable of carrying out general tasks on a computer can improve efficiency and productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally, such agents should be able to solve new computer tasks presented to them through natural language commands. However, previous approaches to this problem require large amounts of expert demonstrations and task-specific reward functions, both of which are impractical for new tasks. In this work, we show that a pre-trained large language model (LLM) agent can execute computer tasks guided by natural language using a simple prompting scheme where the agent Recursively Criticizes and Improves its output (RCI) . The RCI approach significantly outperforms existing LLM methods for automating computer tasks and surpasses supervised learning (SL) and reinforcement learning (RL) approaches on the MiniWoB++ benchmark.
We compare multiple LLMs and find that RCI with the InstructGPT-3+RLHF LLM is state-of-the-art on MiniWoB++, using only a handful of demonstrations per task rather than tens of thousands, and without a task-specific reward function.
Furthermore, we demonstrate RCI prompting’s effectiveness in enhancing LLMs’ reasoning abilities on a suite of natural language reasoning tasks, outperforming chain of thought (CoT) prompting with external feedback. We find that RCI combined with CoT performs better than either separately. Our code can be found here: https://github.com/posgnu/rci-agent. |
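A minimal sketch of the prompting loop, assuming only a generic text-completion callable; the prompt wording and round count are illustrative, not the paper's exact templates.

def rci(llm, task, n_rounds=2):
    # Recursively Criticize and Improve: draft, self-critique, revise.
    output = llm(f"Task: {task}\nPropose the actions to take on the computer.")
    for _ in range(n_rounds):
        critique = llm(
            f"Task: {task}\nProposed actions:\n{output}\n"
            "Review the proposed actions and point out any problems."
        )
        output = llm(
            f"Task: {task}\nProposed actions:\n{output}\n"
            f"Critique:\n{critique}\n"
            "Revise the actions to address the critique."
        )
    return output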
2303.17564 | The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature.
In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data.
We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets.
We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage.
Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks.
Additionally, we explain our modeling choices, training process, and evaluation methodology.
We release Training Chronicles (Appendix C) detailing our experience in training BloombergGPT. |
2212.10560 | Large “instruction-tuned” language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks.
Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model.
We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations.
Our pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model.
Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations. (Unless otherwise specified, our comparisons are with the text-davinci-001 engine. We focus on this engine since it is the closest to our experimental setup: supervised finetuning with human demonstrations. The newer engines are more powerful, though they use more data (e.g., code completion or latest user queries) or algorithms (e.g., PPO) that are difficult to compare with.)
For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT-001. Self-Instruct provides an almost annotation-free method for aligning pretrained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning. Code and data are available at https://github.com/yizhongw/self-instruct |
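The bootstrapping pipeline can be sketched roughly as follows, with a generic llm callable. The prompts, the similarity filter (the paper uses ROUGE-L; a difflib ratio stands in here), and the sample counts are illustrative assumptions.

import random
from difflib import SequenceMatcher

def too_similar(candidate, pool, threshold=0.7):
    # Cheap stand-in for the ROUGE-L overlap filter that drops near-duplicates.
    return any(SequenceMatcher(None, candidate, old).ratio() > threshold for old in pool)

def self_instruct(llm, seed_instructions, target_size=100):
    pool = list(seed_instructions)
    data = []
    while len(data) < target_size:
        demos = "\n".join(random.sample(pool, k=min(4, len(pool))))
        instruction = llm(f"Here are some task instructions:\n{demos}\nWrite one new, different instruction.")
        if too_similar(instruction, pool):
            continue  # filter out instructions too close to existing ones
        example = llm(f"Instruction: {instruction}\nProvide an input (or 'None') and the correct output.")
        pool.append(instruction)
        data.append({"instruction": instruction, "example": example})
    return data  # the original model is then finetuned on this synthetic data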
2207.05221 | We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability "P(True) " that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict "P(IK) ", the probability that "I know" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems.
We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing. |
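A sketch of the P(True) self-evaluation prompt, assuming a hypothetical helper that returns the model's probability for a given next token; the exact template and answer options in the paper differ.

def p_true(next_token_prob, question, proposed_answer, brainstormed=None):
    # next_token_prob(prompt, token) -> probability of `token` as the next token
    # (a hypothetical helper; any logprob-exposing API could back it).
    context = ""
    if brainstormed:
        # Letting the model see several of its own samples before judging one
        # improves self-evaluation, as the abstract notes.
        context = "Possible answers:\n" + "\n".join(brainstormed) + "\n"
    prompt = (
        f"Question: {question}\n{context}"
        f"Proposed answer: {proposed_answer}\n"
        "Is the proposed answer correct? Answer True or False.\nAnswer:"
    )
    return next_token_prob(prompt, " True")   # interpreted as P(True)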
2209.07858 | We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM) ; an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF) . We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models. Warning: this paper contains examples that may be offensive or upsetting. |
2212.08073 | As AI systems become more capable, we would like to enlist their help to supervise other AIs.
We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as ‘Constitutional AI’. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use ‘RL from AI Feedback’ (RLAIF) . As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels. |
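A compressed Python sketch of the two phases described above, with a generic llm callable; the prompt templates and the choice of principles are illustrative, not taken from the paper.

import random

def critique_revision_example(llm, prompt, principles):
    # Supervised phase: sample, self-critique against a principle, revise, and
    # keep the revision as a finetuning target for the original model.
    response = llm(prompt)
    principle = random.choice(principles)
    critique = llm(f"{prompt}\n{response}\nCritique this response according to: {principle}")
    revision = llm(f"{prompt}\n{response}\nCritique: {critique}\nRewrite the response to address the critique.")
    return {"prompt": prompt, "target": revision}

def ai_preference_label(llm, prompt, response_a, response_b, principle):
    # RL phase input: the model itself judges which sample better follows the
    # principle; these AI labels train the preference model used for RLAIF.
    verdict = llm(
        f"{prompt}\nResponse A: {response_a}\nResponse B: {response_b}\n"
        f"Which response better follows this principle: {principle}? Answer A or B."
    )
    return "A" if verdict.strip().upper().startswith("A") else "B"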
2204.05862 | We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work. |
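The reported relation between reward and policy divergence can be checked on training logs with a one-line fit; the arrays below are synthetic placeholders standing in for logged values, not the paper's data.

import numpy as np

# reward ~ a + b * sqrt(KL(policy || init)); fit a and b from (placeholder) logs.
kl = np.array([1.0, 4.0, 9.0, 16.0, 25.0])       # placeholder KL values (nats)
reward = np.array([0.9, 2.1, 2.9, 4.2, 5.0])     # placeholder preference-model rewards
slope, intercept = np.polyfit(np.sqrt(kl), reward, deg=1)
print(f"reward ~ {intercept:.2f} + {slope:.2f} * sqrt(KL)")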