CS11-711 Advanced NLP: Transformers. Graham Neubig. Site: https://phontron.com/class/anlp2024/
Reminder: Attention
Cross Attention (Bahdanau et al. 2015): Each element in a sequence attends to elements of another sequence. [Figure: the English sentence "this is an example" attending to the Japanese sentence "kore wa rei desu"]
Self Attention (Cheng et al. 2016, Vaswani et al. 2017): Each element in the sequence attends to elements of that same sequence, giving context-sensitive encodings! [Figure: the sentence "this is an example" attending to itself]
Calculating Attention (1): Use a "query" vector (decoder state) and "key" vectors (all encoder states). For each query-key pair, calculate a weight. Normalize the weights to add to one using the softmax. [Figure: query vector for "I hate" scored against key vectors for "kono eiga ga kirai": a1=2.1, a2=-0.1, a3=0.3, a4=-1.0; after softmax: α1=0.76, α2=0.08, α3=0.13, α4=0.03]
Calculating Attention (2): Combine together the value vectors (usually the encoder states, like the key vectors) by taking the weighted sum. Use this in any part of the model you like. [Figure: value vectors for "kono eiga ga kirai" weighted by α1=0.76, α2=0.08, α3=0.13, α4=0.03]
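To make the two steps concrete, here is a minimal sketch in PyTorch (an illustration written for these notes, not the lecture's reference code); the function name and tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def single_query_attention(query, keys, values):
    """query: (d,); keys, values: (seq_len, d)."""
    # 1) Score each key against the query, then normalize with softmax
    scores = keys @ query                  # one weight per source position
    alphas = F.softmax(scores, dim=-1)     # weights sum to one
    # 2) Combine the value vectors by taking the weighted sum
    return alphas @ values

# Example: 4 encoder states ("kono eiga ga kirai"), one decoder query ("I hate")
keys = values = torch.randn(4, 8)
query = torch.randn(8)
context = single_query_attention(query, keys, values)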
Transformers
"Attention is All You Need" (Vaswani et al. 2017): A sequence-to-sequence model based entirely on attention. Strong results on machine translation. Fast: only matrix multiplications. [Figure: the Transformer encoder-decoder architecture: input/output embeddings plus positional encodings, N stacked blocks of (masked) multi-head attention, add & norm, and feed-forward layers, followed by a linear layer and softmax over output probabilities]
Two Types of Transformers: the encoder-decoder model (e.g. T5, mBART) and the decoder-only model (e.g. GPT, LLaMA). [Figure: the full encoder-decoder architecture next to a decoder-only stack of masked multi-head attention and feed-forward layers]
Core Transformer Concepts: positional encodings, multi-headed attention, masked attention, residual + layer normalization, feed-forward layers.
(Review) Inputs and Embeddings: Inputs are generally split using subwords, e.g. "the books were improved" becomes "the book _s were improv _ed". The input embeddings are looked up, as in previously discussed models.
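As a toy illustration of "split into subwords, then look up embeddings", here is a short sketch; the vocabulary and the pre-split tokens are made up, since a real model uses a trained subword tokenizer.

import torch
import torch.nn as nn

# Hypothetical subword vocabulary; a real one comes from BPE/unigram training
vocab = {"the": 0, "book": 1, "_s": 2, "were": 3, "improv": 4, "_ed": 5}
tokens = ["the", "book", "_s", "were", "improv", "_ed"]   # "the books were improved"
ids = torch.tensor([vocab[t] for t in tokens])

embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=512)
x = embed(ids)    # (seq_len, d_model): the input to the first transformer layer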
Multi-head Attention
Intuition for Multi-heads: Information from different parts of the sentence can be useful for disambiguating in different ways, e.g. via syntax (nearby context) or semantics (farther context). Consider the different senses of "run": "I run a small business", "I run a mile in 10 minutes", "The robber made a run for it", "The stocking had a run".
Multi-head Attention Concept: Multiply Q, K, V by weights $W^Q$, $W^K$, $W^V$, split/rearrange into $h$ attention inputs, run attention over each head, then concatenate and multiply by $W^O$:
$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O$, where $\mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$
Code Example (multi-head attention forward pass):

def forward(self, query, key, value, mask=None):
    if mask is not None:
        mask = mask.unsqueeze(1)   # same mask applied to all heads
    nbatches = query.size(0)

    # 1) Do all the linear projections
    query = self.W_q(query)
    key = self.W_k(key)
    value = self.W_v(value)

    # 2) Reshape to get h heads: (batch, heads, seq_len, d_k)
    query = query.view(nbatches, -1, self.heads, self.d_k).transpose(1, 2)
    key = key.view(nbatches, -1, self.heads, self.d_k).transpose(1, 2)
    value = value.view(nbatches, -1, self.heads, self.d_k).transpose(1, 2)

    # 3) Apply attention on all the projected vectors in batch
    x, self.attn = attention(query, key, value, mask=mask)

    # 4) "Concat" using a view and apply a final linear
    x = x.transpose(1, 2).contiguous().view(nbatches, -1, self.heads * self.d_k)
    return self.W_o(x)
What Happens w/ Multi-heads? Example from Vaswani et al. See also BertViz: https://github.com/jessevig/bertviz
Positional Encoding
Positional Encoding: The transformer model is purely attentional, so if only word embeddings were used, there would be no way to distinguish between identical words: in "A big dog and a big cat", the two occurrences of "big" would be identical! Positional encodings add an embedding based on the word position: $w_{big} + w_{pos2}$ vs. $w_{big} + w_{pos6}$.
Sinusoidal Encoding (Vaswani+ 2017, Kazemnejad 2019): Calculate each dimension with a sinusoidal function:
$p_t^{(i)} = f(t)^{(i)} := \begin{cases} \sin(\omega_k \cdot t), & \text{if } i = 2k \\ \cos(\omega_k \cdot t), & \text{if } i = 2k+1 \end{cases}$ where $\omega_k = \frac{1}{10000^{2k/d}}$
Why? The dot product between two such encodings depends on the relative distance between the positions, and is relatively higher for nearby positions.
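A minimal sketch of the sinusoidal encoding above, with sine on even dimensions and cosine on odd ones; the function and variable names are my own, and d_model is assumed even.

import torch

def sinusoidal_encoding(max_len, d_model):
    """Return a (max_len, d_model) matrix of positional encodings."""
    t = torch.arange(max_len).unsqueeze(1)            # positions t
    i = torch.arange(0, d_model, 2)                   # even indices i = 2k
    omega = 1.0 / (10000 ** (i / d_model))            # omega_k = 1 / 10000^(2k/d)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(t * omega)                # dimension 2k:   sin(omega_k * t)
    pe[:, 1::2] = torch.cos(t * omega)                # dimension 2k+1: cos(omega_k * t)
    return pe

# Added to the input embeddings, e.g. x = embed(ids) + sinusoidal_encoding(x.size(0), 512)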
Learned Encoding (Shaw+ 2018): More simply, just create a learnable embedding for each position. Advantages: flexibility. Disadvantages: impossible to extrapolate to longer sequences.
Absolute vs. Relative Encodings (Shaw+ 2018): Absolute positional encodings add an encoding to the input in the hope that relative position will be captured. Relative positional encodings explicitly encode relative position.
Rotary Positional Encodings (RoPE) (Su+ 2021): Fundamental idea: we want the dot product of the embeddings to be a function of relative position:
$f_q(x_m, m) \cdot f_k(x_n, n) = g(x_m, x_n, m - n)$
In summary, RoPE uses trigonometric functions and complex numbers to come up with a function that satisfies this property:
$R^d_{\Theta,m} x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_{d-1} \\ x_d \end{pmatrix} \otimes \begin{pmatrix} \cos m\theta_1 \\ \cos m\theta_1 \\ \cos m\theta_2 \\ \cos m\theta_2 \\ \vdots \\ \cos m\theta_{d/2} \\ \cos m\theta_{d/2} \end{pmatrix} + \begin{pmatrix} -x_2 \\ x_1 \\ -x_4 \\ x_3 \\ \vdots \\ -x_d \\ x_{d-1} \end{pmatrix} \otimes \begin{pmatrix} \sin m\theta_1 \\ \sin m\theta_1 \\ \sin m\theta_2 \\ \sin m\theta_2 \\ \vdots \\ \sin m\theta_{d/2} \\ \sin m\theta_{d/2} \end{pmatrix}$
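A rough re-implementation of the rotation above for a single query/key vector at position m (pairing dimensions as (x1, x2), (x3, x4), ...); this is a sketch written for these notes, not code from the RoPE paper or the slides.

import torch

def rope_rotate(x, m, base=10000.0):
    """Rotate feature vector x (last dim d, assumed even) by its position m."""
    d = x.shape[-1]
    theta = base ** (-torch.arange(0, d, 2).float() / d)   # theta_1 ... theta_{d/2}
    angles = m * theta
    cos = angles.cos().repeat_interleave(2)                # cos m*theta_1, cos m*theta_1, ...
    sin = angles.sin().repeat_interleave(2)                # sin m*theta_1, sin m*theta_1, ...
    pairs = x.reshape(*x.shape[:-1], d // 2, 2)
    # Build (-x2, x1, -x4, x3, ...) for the second term of the rotation
    rotated = torch.stack((-pairs[..., 1], pairs[..., 0]), dim=-1).reshape(x.shape)
    return x * cos + rotated * sin

# The score of a rotated query and key then depends only on the offset m - n
q = rope_rotate(torch.randn(64), m=5)
k = rope_rotate(torch.randn(64), m=2)
score = q @ k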
Layer Normalization and Residual Connections
Reminder: Gradients and Training Instability. For RNNs, we asked how backprop through the network can cause gradients to vanish or explode. The same issue occurs in multi-layer transformers!
Layer Normalization (Ba et al. 2016): Normalizes the outputs to be within a consistent range, preventing too much variance in the scale of outputs:
$\mathrm{LayerNorm}(x; g, b) = \frac{g}{\sigma(x)} \odot (x - \mu(x)) + b$
with gain $g$, bias $b$, vector mean $\mu(x) = \frac{1}{n}\sum_{i=1}^n x_i$, and vector stddev $\sigma(x) = \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2}$.
RMSNorm (Zhang and Sennrich 2019): Simplifies LayerNorm by removing the mean and bias terms:
$\mathrm{RMS}(x) = \sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2}, \quad \mathrm{RMSNorm}(x) = \frac{x}{\mathrm{RMS}(x)} \cdot g$
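Both normalizations written out directly as a sketch (a small epsilon is added for numerical stability, which the slide formulas omit); names are illustrative.

import torch

def layer_norm(x, g, b, eps=1e-5):
    mu = x.mean(dim=-1, keepdim=True)                              # vector mean
    sigma = x.var(dim=-1, unbiased=False, keepdim=True).sqrt()     # vector stddev
    return g * (x - mu) / (sigma + eps) + b                        # gain and bias

def rms_norm(x, g, eps=1e-5):
    rms = x.pow(2).mean(dim=-1, keepdim=True).sqrt()               # RMS(x)
    return x / (rms + eps) * g                                     # no mean, no bias

d = 512
x = torch.randn(2, 10, d)
y1 = layer_norm(x, torch.ones(d), torch.zeros(d))
y2 = rms_norm(x, torch.ones(d))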
Residual Connections: Add an additive connection between the input and output:
$\mathrm{Residual}(x, f) = f(x) + x$
This prevents vanishing gradients and allows $f$ to learn the difference from the input. Quiz: what are the implications for self-attention with and without residual connections?
Post- vs. Pre-Layer Norm (e.g. Xiong et al. 2020): Where should layer norm be applied, before or after the sublayer? Pre-layer-norm is better for gradient propagation. [Figure: post-layer-norm vs. pre-layer-norm block structure]
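The two orderings, sketched for one generic sublayer (attention or feed-forward); sublayer and norm are stand-ins for the corresponding modules.

# Post-layer-norm (original transformer): normalize after the residual addition
def post_ln_block(x, sublayer, norm):
    return norm(x + sublayer(x))

# Pre-layer-norm: normalize the sublayer input, leaving an untouched residual path,
# which is what helps gradients propagate through deep stacks
def pre_ln_block(x, sublayer, norm):
    return x + sublayer(norm(x))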
Feed Forward Layers
Feed Forward Layers: Extract combination features from the attended outputs: Linear1, then a non-linearity $f()$, then Linear2:
$\mathrm{FFN}(x; W_1, b_1, W_2, b_2) = f(x W_1 + b_1) W_2 + b_2$
Some Activation Functions in Transformers: Vaswani et al. uses ReLU; LLaMA uses Swish/SiLU (Hendrycks and Gimpel 2016).
$\mathrm{ReLU}(x) = \max(0, x), \quad \mathrm{Swish}(x; \beta) = x \odot \sigma(\beta x)$
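Putting the feed-forward layer and the choice of non-linearity together, a minimal sketch of the plain two-layer form from the FFN equation; the module and argument names are assumptions.

import torch.nn as nn

class FeedForward(nn.Module):
    """FFN(x) = f(x W1 + b1) W2 + b2 with a configurable non-linearity f."""
    def __init__(self, d_model=512, d_ff=2048, activation="relu"):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.linear2 = nn.Linear(d_ff, d_model)
        # ReLU(x) = max(0, x); SiLU(x) = x * sigmoid(x), i.e. Swish with beta = 1
        self.f = nn.ReLU() if activation == "relu" else nn.SiLU()

    def forward(self, x):
        return self.linear2(self.f(self.linear1(x)))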
Optimization Tricks for Transformers
Transformers are Powerful but Fickle: Optimization of models can be difficult, and transformers are more difficult than other architectures! See, e.g., the OPT-175B training logbook: https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf
Optimizers for Transformers. SGD: update in the direction that reduces the loss. Adam: add a momentum term and normalize by the standard deviation of the gradients. Adam w/ learning rate schedule (Vaswani et al. 2017): adds a learning rate increase (warmup) and decrease:
$\mathrm{lrate} = d_{\mathrm{model}}^{-0.5} \cdot \min(\mathrm{step}^{-0.5}, \mathrm{step} \cdot \mathrm{warmup\_steps}^{-1.5})$
AdamW (Loshchilov and Hutter 2017): properly applies weight decay for regularization to Adam.
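The Vaswani et al. warmup schedule from the formula above, as a plain function (a sketch; step is assumed to start at 1):

def transformer_lrate(step, d_model=512, warmup_steps=4000):
    # lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# With a base learning rate of 1.0, this can be plugged into a PyTorch scheduler:
# torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: transformer_lrate(s + 1))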
Low-Precision Training: Training at full 32-bit precision can be costly; there are low-precision alternatives (e.g. 16-bit floating-point formats). [Image: Wikipedia]
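One common way to do this in practice is automatic mixed precision in PyTorch; a rough sketch of a training loop, assuming a CUDA device and using a single linear layer as a stand-in for the model.

import torch
import torch.nn as nn

model = nn.Linear(512, 512).cuda()                      # stand-in for a transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()                    # scales the loss to avoid fp16 underflow

for _ in range(10):
    x = torch.randn(8, 512, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                     # run eligible ops in float16
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()                       # backward pass on the scaled loss
    scaler.step(optimizer)                              # unscale gradients, then update
    scaler.update()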
Checkpointing/Restarts: Even with best efforts, training can go south. What to do? Monitor possible issues, e.g. by tracking the norm of the gradients. If training crashes, roll back to a previous checkpoint, shuffle the data, and resume. (Also, check your code.) [Image: excerpt from the OPT logbook]
Comparing Transformer Architectures
Original Transformer vs. LLaMA:

                      Vaswani et al.   LLaMA
Norm Position         Post             Pre
Norm Type             LayerNorm        RMSNorm
Non-linearity         ReLU             SiLU
Positional Encoding   Sinusoidal       RoPE
How Important is It? "Transformer" is Vaswani et al., "Transformer++" is (basically) LLaMA. The stronger architecture is ≈10x more efficient! [Image: scaling comparison from Gu and Dao (2023)]
Questions?