## LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS
**Edward Hu\*** **Yelong Shen\*** **Phillip Wallis** **Zeyuan Allen-Zhu** **Lu Wang** **Weizhu Chen** **Yuanzhi Li** **Shean Wang**
Microsoft Corporation
{edwardhu, yeshe, phwallis, zeyuana, yuanzhil, swang, luw, wzchen}@microsoft.com
yuanzhil@andrew.cmu.edu
(Version 2)
**ABSTRACT**
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example – deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
**1 INTRODUCTION**
Many applications in natural language processing rely on adapting one large-scale, pre-trained language model to multiple downstream applications. Such adaptation is usually done via fine-tuning, which updates all the parameters of the pre-trained model. The major downside of fine-tuning is that the new model contains as many parameters as the original model. As larger models are trained every few months, this changes from a mere “inconvenience” for GPT-2 (Radford et al., b) or RoBERTa large (Liu et al., 2019) to a critical deployment challenge for GPT-3 (Brown et al., 2020) with 175 billion trainable parameters.¹
Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks. This way, we only need to store and load a small number of task-specific parameters in addition to the pre-trained model for each task, greatly boosting the operational efficiency when deployed. However, existing techniques often introduce inference latency (Houlsby et al., 2019; Rebuffi et al., 2017) by extending model depth or reduce the model's usable sequence length (Li & Liang, 2021; Lester et al., 2021; Hambardzumyan et al., 2020; Liu et al., 2021) (Section 3). More importantly, these methods often fail to match the fine-tuning baselines, posing a trade-off between efficiency and model quality.
We take inspiration from Li et al. (2018a); Aghajanyan et al. (2020) which show that the learned over-parametrized models in fact reside on a low intrinsic dimension. We hypothesize that the change in weights during model adaptation also has a low “intrinsic rank”, leading to our proposed Low-Rank Adaptation (LoRA) approach. LoRA allows us to train some dense layers in a neural network indirectly by optimizing rank decomposition matrices of the dense layers' change during adaptation instead, while keeping the pre-trained weights frozen, as shown in Figure 1. Using GPT-3 175B as an example, we show that a very low rank (i.e., *r* in Figure 1 can be one or two) suffices even when the full rank (i.e., *d*) is as high as 12,288, making LoRA both storage- and compute-efficient.
<div align="center">
<img src="lora_figure1.png" width="300"/>
<p>Figure 1: Our reparametrization. We only train A and B.</p>
</div>
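To make the storage saving concrete, here is a back-of-the-envelope count for a single *d<sub>model</sub>* × *d<sub>model</sub>* projection at GPT-3 scale (a rough sketch; the exact savings depend on which matrices are adapted):

```python
# Trainable parameters of a rank-r update BA versus a dense update ΔW
# for one 12,288 x 12,288 projection (GPT-3 175B's d_model).
d = 12288
dense_update = d * d                  # 150,994,944 parameters
for r in (1, 2):
    lora_update = r * d + d * r       # |A| + |B| = r*k + d*r, with d = k here
    print(f"r={r}: {lora_update:,} params, {dense_update // lora_update:,}x fewer")
# r=1 -> 24,576 params (6,144x fewer); r=2 -> 49,152 params (3,072x fewer)
```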
LoRA possesses several key advantages.
* A pre-trained model can be shared and used to build many small LoRA modules for different tasks. We can freeze the shared model and efficiently switch tasks by replacing the matrices A and B in Figure 1, reducing the storage requirement and task-switching overhead significantly.
* LoRA makes training more efficient and lowers the hardware barrier to entry by up to 3 times when using adaptive optimizers since we do not need to calculate the gradients or maintain the optimizer states for most parameters. Instead, we only optimize the injected, much smaller low-rank matrices.
* Our simple linear design allows us to merge the trainable matrices with the frozen weights when deployed, introducing no inference latency compared to a fully fine-tuned model, by construction.
* LoRA is orthogonal to many prior methods and can be combined with many of them, such as prefix-tuning. We provide an example in Appendix E.
**Terminologies and Conventions** We make frequent references to the Transformer architecture and use the conventional terminologies for its dimensions. We call the input and output dimension size of a Transformer layer *d<sub>model</sub>*. We use W<sub>q</sub>, W<sub>k</sub>, W<sub>v</sub>, and W<sub>o</sub> to refer to the query/key/value/output projection matrices in the self-attention module. W or W<sub>0</sub> refers to a pre-trained weight matrix and ΔW its accumulated gradient update during adaptation. We use *r* to denote the rank of a LoRA module. We follow the conventions set out by (Vaswani et al., 2017; Brown et al., 2020) and use Adam (Loshchilov & Hutter, 2019; Kingma & Ba, 2017) for model optimization and use a Transformer MLP feedforward dimension *d<sub>ffn</sub>* = 4 × *d<sub>model</sub>*.
**2 PROBLEM STATEMENT**
While our proposal is agnostic to training objective, we focus on language modeling as our motivating use case. Below is a brief description of the language modeling problem and, in particular, the maximization of conditional probabilities given a task-specific prompt.
Suppose we are given a pre-trained autoregressive language model *P<sub>Φ</sub>(y|x)* parametrized by Φ. For instance, *P<sub>Φ</sub>(y|x)* can be a generic multi-task learner such as GPT (Radford et al., b; Brown et al., 2020) based on the Transformer architecture (Vaswani et al., 2017). Consider adapting this pre-trained model to downstream conditional text generation tasks, such as summarization, machine reading comprehension (MRC), and natural language to SQL (NL2SQL). Each downstream task is represented by a training dataset of context-target pairs: *Z* = {(*x<sub>i</sub>*, *y<sub>i</sub>*)}<sub>*i*=1,…,*N*</sub>, where both *x<sub>i</sub>* and *y<sub>i</sub>* are sequences of tokens. For example, in NL2SQL, *x<sub>i</sub>* is a natural language query and *y<sub>i</sub>* its corresponding SQL command; for summarization, *x<sub>i</sub>* is the content of an article and *y<sub>i</sub>* its summary.
During full fine-tuning, the model is initialized to pre-trained weights Φ<sub>0</sub> and updated to Φ<sub>0</sub> + ΔΦ by repeatedly following the gradient to maximize the conditional language modeling objective:
$$\max_{\Phi} \; \sum_{(x,y)\in\mathcal{Z}} \; \sum_{t=1}^{|y|} \log\big(P_{\Phi}(y_t \mid x, y_{<t})\big) \tag{1}$$
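As a concrete reading of this objective, the sketch below scores the target tokens conditioned on the prompt and sums their log-probabilities. Here `model` is a hypothetical callable returning per-token logits; it is not part of any released code:

```python
import torch
import torch.nn.functional as F

def conditional_lm_loss(model, prompt_ids, target_ids):
    """Negative of the objective in Eq. (1) for one (x, y) pair.

    `model` is assumed to map a (1, seq_len) tensor of token ids to
    logits of shape (1, seq_len, vocab_size); the names are illustrative.
    """
    # Concatenate prompt x and target y into one sequence.
    input_ids = torch.cat([prompt_ids, target_ids], dim=-1)   # (1, |x| + |y|)
    logits = model(input_ids)                                  # (1, |x| + |y|, V)

    # Token at position t is predicted from the logits at position t - 1,
    # so y_1 .. y_|y| are predicted by the last |y| shifted positions.
    n_tgt = target_ids.size(-1)
    pred_logits = logits[:, -n_tgt - 1:-1, :]                  # (1, |y|, V)

    log_probs = F.log_softmax(pred_logits, dim=-1)
    tgt_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

    # Sum over t = 1..|y|; training maximizes this, so return the negative.
    return -tgt_log_probs.sum()
```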
One of the main drawbacks for full fine-tuning is that for each downstream task, we learn a different set of parameters ΔΦ whose dimension |ΔΦ| equals |Φ<sub>0</sub>|. Thus, if the pre-trained model is large (such as GPT-3 with |Φ<sub>0</sub>| ≈ 175 Billion), storing and deploying many independent instances of fine-tuned models can be challenging, if at all feasible.
In this paper, we adopt a more parameter-efficient approach, where the task-specific parameter increment ΔΦ = ΔΦ(Θ) is further encoded by a much smaller-sized set of parameters Θ with |Θ| ≪ |Φ<sub>0</sub>|. The task of finding ΔΦ thus becomes optimizing over Θ:
$$\max_{\Theta} \; \sum_{(x,y)\in\mathcal{Z}} \; \sum_{t=1}^{|y|} \log\big(P_{\Phi_0+\Delta\Phi(\Theta)}(y_t \mid x, y_{<t})\big) \tag{2}$$
In the subsequent sections, we propose to use a low-rank representation to encode ΔΦ that is both compute- and memory-efficient. When the pre-trained model is GPT-3 175B, the number of trainable parameters |Θ| can be as small as 0.01% of |Φ<sub>0</sub>|.
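In code, "optimizing over Θ" amounts to freezing Φ<sub>0</sub> and handing only the injected parameters to the optimizer. The sketch below assumes LoRA parameters can be identified by a `lora_` prefix in their names, which is an illustrative convention rather than the released package's API:

```python
import torch

def mark_only_lora_as_trainable(model: torch.nn.Module) -> None:
    # Freeze every pre-trained weight (Phi_0); keep only the injected
    # low-rank matrices (Theta) trainable.
    for name, param in model.named_parameters():
        param.requires_grad = "lora_" in name

def trainable_fraction(model: torch.nn.Module) -> float:
    theta = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())  # ~|Phi_0|, since |Theta| << |Phi_0|
    return theta / total

# Only Theta receives gradients and Adam optimizer state, which is the
# source of the memory saving, e.g.:
# optimizer = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=2e-4)
```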
**3 AREN'T EXISTING SOLUTIONS GOOD ENOUGH?**
The problem we set out to tackle is by no means new. Since the inception of transfer learning, dozens of works have sought to make model adaptation more parameter- and compute-efficient. See Section 6 for a survey of some of the well-known works. Using language modeling as an example, there are two prominent strategies when it comes to efficient adaptations: adding adapter layers (Houlsby et al., 2019; Rebuffi et al., 2017; Pfeiffer et al., 2021; Rücklé et al., 2020) or optimizing some forms of the input layer activations (Li & Liang, 2021; Lester et al., 2021; Hambardzumyan et al., 2020; Liu et al., 2021). However, both strategies have their limitations, especially in a large-scale and latency-sensitive production scenario.
**Adapter Layers Introduce Inference Latency** There are many variants of adapters. We focus on the original design by Houlsby et al. (2019) which has two adapter layers per Transformer block and a more recent one by Lin et al. (2020) which has only one per block but with an additional LayerNorm (Ba et al., 2016). While one can reduce the overall latency by pruning layers or exploiting multi-task settings (Rücklé et al., 2020; Pfeiffer et al., 2021), there is no direct way to bypass the extra compute in adapter layers. This seems like a non-issue since adapter layers are designed to have few parameters (sometimes <1% of the original model) by having a small bottleneck dimension, which limits the FLOPs they can add. However, large neural networks rely on hardware parallelism to keep the latency low, and adapter layers have to be processed sequentially. This makes a difference in the online inference setting where the batch size is typically as small as one. In a generic scenario without model parallelism, such as running inference on GPT-2 (Radford et al., b) medium on a single GPU, we see a noticeable increase in latency when using adapters, even with a very small bottleneck dimension (Table 1).
<div align="center">
<img src="lora_table1.png" width="500"/>
<p>Table 1: Inference latency of a single forward pass in GPT-2 medium measured in milliseconds, averaged over 100 trials. We use an NVIDIA Quadro RTX8000. “|Θ|” denotes the number of trainable parameters in adapter layers. Adapter<sup>H</sup> and Adapter<sup>LN</sup> are two variants of adapter tuning, which we describe in Section 5.1. The inference latency introduced by adapter layers can be significant in an online, short-sequence-length scenario. See the full study in Appendix B.</p>
</div>
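For reference, here is a minimal sketch of a Houlsby-style bottleneck adapter (illustrative names and sizes, not their released code). The two extra projections sit in series after the sub-layer they adapt and cannot be folded into existing weights, which is where the latency in Table 1 comes from:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Sketch of a bottleneck adapter: down-project, nonlinearity,
    up-project, residual connection."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # The adapter runs *after* the frozen sub-layer output h, so it adds
        # sequential compute at inference time even though it has few parameters.
        return h + self.up(self.act(self.down(h)))
```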
This problem gets worse when we need to shard the model as done in Shoeybi et al. (2020); Lepikhin et al. (2020), because the additional depth requires more synchronous GPU operations such as AllReduce and Broadcast, unless we store the adapter parameters redundantly many times.
**Directly Optimizing the Prompt is Hard** The other direction, as exemplified by prefix tuning (Li & Liang, 2021), faces a different challenge. We observe that prefix tuning is difficult to optimize and that its performance changes non-monotonically in trainable parameters, confirming similar observations in the original paper. More fundamentally, reserving a part of the sequence length for adaptation necessarily reduces the sequence length available to process a downstream task, which we suspect makes tuning the prompt less performant compared to other methods. We defer the study on task performance to Section 5.
**4 OUR METHOD**
We describe the simple design of LoRA and its practical benefits. The principles outlined here apply to any dense layers in deep learning models, though we only focus on certain weights in Transformer language models in our experiments as the motivating use case.
**4.1 LOW-RANK-PARAMETRIZED UPDATE MATRICES**
A neural network contains many dense layers which perform matrix multiplication. The weight matrices in these layers typically have full rank. When adapting to a specific task, Aghajanyan et al. (2020) shows that the pre-trained language models have a low “intrinsic dimension” and can still learn efficiently despite a random projection to a smaller subspace. Inspired by this, we hypothesize the updates to the weights also have a low “intrinsic rank” during adaptation. For a pre-trained weight matrix W<sub>0</sub> ∈ R<sup>*d*×*k*</sup>, we constrain its update by representing the latter with a low-rank decomposition W<sub>0</sub> + ΔW = W<sub>0</sub> + BA, where B ∈ R<sup>*d*×*r*</sup>, A ∈ R<sup>*r*×*k*</sup>, and the rank *r* ≪ min(*d*, *k*). During training, W<sub>0</sub> is frozen and does not receive gradient updates, while A and B contain trainable parameters. Note both W<sub>0</sub> and ΔW = BA are multiplied with the same input, and their respective output vectors are summed coordinate-wise. For h = W<sub>0</sub>x, our modified forward pass yields:
$$h = W_0 x + \Delta W x = W_0 x + BAx \tag{3}$$
We illustrate our reparametrization in Figure 1. We use a random Gaussian initialization for A and zero for B, so ΔW = BA is zero at the beginning of training. We then scale ΔWx by *α*/*r*, where *α* is a constant in *r*. When optimizing with Adam, tuning *α* is roughly the same as tuning the learning rate if we scale the initialization appropriately. As a result, we simply set *α* to the first *r* we try and do not tune it. This scaling helps to reduce the need to retune hyperparameters when we vary *r* (Yang & Hu, 2021).
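Putting Equation 3 and this initialization together, a LoRA-augmented linear layer might look like the following. This is a minimal sketch rather than the released `loralib` implementation (bias handling and pre-trained weight loading are omitted):

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """h = W_0 x + (alpha / r) * B A x, with W_0 frozen (sketch)."""

    def __init__(self, d_in: int, d_out: int, r: int = 4, alpha: float = 4.0):
        super().__init__()
        # Frozen pre-trained weight W_0 (randomly filled here as a stand-in).
        self.weight = nn.Parameter(torch.empty(d_out, d_in), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        # A is random Gaussian, B is zero, so ΔW = BA = 0 at the start of training.
        self.lora_A = nn.Parameter(torch.randn(r, d_in) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T                       # W_0 x (frozen path)
        update = (x @ self.lora_A.T) @ self.lora_B.T   # B A x (trainable path)
        return base + self.scaling * update
```

During training only `lora_A` and `lora_B` receive gradients, so at rank *r* the layer adds *r*(*d<sub>in</sub>* + *d<sub>out</sub>*) trainable parameters.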
**A Generalization of Full Fine-tuning.** A more general form of fine-tuning allows the training of a subset of the pre-trained parameters. LoRA takes a step further and does not require the accumulated gradient update to weight matrices to have full-rank during adaptation. This means that when applying LoRA to all weight matrices and training all biases², we roughly recover the expressiveness of full fine-tuning by setting the LoRA rank *r* to the rank of the pre-trained weight matrices. In other words, as we increase the number of trainable parameters³, training LoRA roughly converges to training the original model, while adapter-based methods converge to an MLP and prefix-based methods to a model that cannot take long input sequences.
**No Additional Inference Latency.** When deployed in production, we can explicitly compute and store W = W<sub>0</sub> + BA and perform inference as usual. Note that both W<sub>0</sub> and BA are in R<sup>*d*×*k*</sup>. When we need to switch to another downstream task, we can recover W<sub>0</sub> by subtracting BA and then adding a different B'A', a quick operation with very little memory overhead. Critically, this guarantees that we do not introduce any additional latency during inference compared to a fine-tuned model by construction.
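The merge/unmerge bookkeeping can be sketched as two in-place updates on the `LoRALinear` layer above (hypothetical helper functions, not the released package's API):

```python
import torch

@torch.no_grad()
def merge_lora(layer: "LoRALinear") -> None:
    # W <- W_0 + (alpha / r) * B A. After merging, inference is a single
    # matmul, so there is no extra latency versus a fully fine-tuned model.
    layer.weight.data += layer.scaling * (layer.lora_B @ layer.lora_A)

@torch.no_grad()
def unmerge_lora(layer: "LoRALinear") -> None:
    # Recover W_0 by subtracting B A, e.g. before swapping in another
    # task's (B', A') pair on the same base model.
    layer.weight.data -= layer.scaling * (layer.lora_B @ layer.lora_A)
```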
**4.2 APPLYING LORA TO TRANSFORMER**
In principle, we can apply LoRA to any subset of weight matrices in a neural network to reduce the number of trainable parameters. In the Transformer architecture, there are four weight matrices in the self-attention module (W<sub>q</sub>, W<sub>k</sub>, W<sub>v</sub>, W<sub>o</sub>) and two in the MLP module. We treat W<sub>q</sub> (or W<sub>k</sub>, W<sub>v</sub>) as a single matrix of dimension *d<sub>model</sub>* × *d<sub>model</sub>*, even though the output dimension is usually sliced into attention heads. We limit our study to only adapting the attention weights for downstream tasks and freeze the MLP modules (so they are not trained in downstream tasks) both for simplicity and parameter-efficiency. We further study the effect of adapting different types of attention weight matrices in a Transformer in Section 7.1. We leave the empirical investigation of adapting the MLP layers, LayerNorm layers, and biases to future work.
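As a rough sketch of this setup, the helper below (continuing the `LoRALinear` sketch from Section 4.1) swaps the query and value projections of one Transformer block for LoRA-augmented layers while leaving W<sub>k</sub>, W<sub>o</sub>, and the MLP untouched. The attribute names `block.attn.q_proj` and `v_proj` are assumptions for illustration, not tied to any particular codebase:

```python
def inject_lora_into_attention(block, d_model: int, r: int = 8, alpha: float = 8.0) -> None:
    # Replace W_q and W_v with LoRA-augmented layers (sketch); bias terms of
    # the original projections are ignored here for brevity.
    for name in ("q_proj", "v_proj"):
        frozen = getattr(block.attn, name)                  # pre-trained nn.Linear
        lora_layer = LoRALinear(d_model, d_model, r=r, alpha=alpha)
        lora_layer.weight.data.copy_(frozen.weight.data)    # carry over W_0
        setattr(block.attn, name, lora_layer)
```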