11 Transfer Learning with Pretrained Language Models and Contextual Embeddings

11.1 Bidirectional Transformer Encoders
SelfAttention(Q, K, V) = softmax(QK^T / √d_k) V    (11.6)
As shown in Fig. 11.3, the full set of self-attention scores represented by QK^T constitutes an all-pairs comparison between the keys and queries for each element of the input. In the case of the causal language models of Chapter 9, we masked the upper triangular portion of this matrix to eliminate information about future words, since access to that information would make the language modeling training task trivial. With bidirectional encoders we simply skip the mask, allowing the model to contextualize each token using information from the entire input.
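To make the difference concrete, the sketch below (an illustrative NumPy implementation, not the code of any particular model) computes scaled dot-product self-attention over the same toy input twice: once with the causal mask of Chapter 9 and once without it, as a bidirectional encoder would.

```python
import numpy as np

def self_attention(Q, K, V, causal=False):
    """Scaled dot-product self-attention (Eq. 11.6).
    Q, K, V: arrays of shape (n_tokens, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # the all-pairs QK^T comparison
    if causal:
        # Causal LM (Chapter 9): mask the upper triangle so position i
        # cannot attend to positions j > i.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                # 5 toy token vectors, d_k = 8
causal_out = self_attention(X, X, X, causal=True)
bidirectional_out = self_attention(X, X, X, causal=False)   # encoder: no mask
```

In the bidirectional case every row of the attention weights can place mass on every position, which is exactly what lets the encoder use both left and right context.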
Beyond this simple change, all of the other elements of the transformer architecture remain the same for bidirectional encoder models. Inputs to the model are segmented using subword tokenization and are combined with positional embeddings before being passed through a series of standard transformer blocks consisting of self-attention and feedforward layers augmented with residual connections and layer normalization, as shown in Fig. 11.4. To make this more concrete, the original bidirectional transformer encoder model, BERT (Devlin et al., 2019), consisted of the following:
• A subword vocabulary of 30,000 tokens generated using the WordPiece algorithm (Schuster and Nakajima, 2012),
• Hidden layers of size 768,
• 12 layers of transformer blocks, each with 12 attention heads.
The result is a model with over 100M parameters. The use of WordPiece (one of the large family of subword tokenization algorithms that includes the BPE algorithm we saw in Chapter 2) means that BERT and its descendants operate on subword tokens rather than words. Every input sentence first has to be tokenized, and all further processing takes place on subword tokens rather than words. As we'll see, this means that for NLP tasks that require notions of words (like named entity tagging or parsing) we will occasionally need to map subwords back to words.
Finally, a fundamental issue with transformers is that the size of the input layer dictates the complexity of the model: both the time and memory requirements in a transformer grow quadratically with the length of the input. It's necessary, therefore, to set a fixed input length that is long enough to provide sufficient context for the model to function and yet still be computationally tractable. For BERT, a fixed input size of 512 subword tokens was used.
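A quick back-of-the-envelope sketch (assuming 4-byte floats, an assumption for illustration) shows why the input length must be bounded: the n × n matrix of attention scores alone grows quadratically with n.

```python
# Bytes needed just to store one attention-score matrix, assuming 4-byte
# floats; real models multiply this by heads, layers, and batch size.
for n in (512, 2048, 8192):
    print(f"n = {n:5d}  scores matrix ≈ {n * n * 4 / 2**20:7.1f} MiB")
# n =   512 -> ~1 MiB;  n =  2048 -> ~16 MiB;  n =  8192 -> ~256 MiB
```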
11.2 Training Bidirectional Encoders
We trained causal transformer language models in Chapter 9 by making them iteratively predict the next word in a text. But eliminating the causal mask makes the guess-the-next-word language modeling task trivial, since the answer is now directly available from the context, so we're in need of a new training scheme. Fortunately, the traditional learning objective suggests an approach that can be used to train bidirectional encoders. Instead of trying to predict the next word, the model learns to perform a fill-in-the-blank task, technically called the cloze task (Taylor, 1953). To see this, let's return to the motivating example from Chapter 3. Instead of predicting which words are likely to come next in this example:
Please turn your homework ____.
we're asked to predict a missing item given the rest of the sentence.
Please turn _____ homework in.
That is, given an input sequence with one or more elements missing, the learning task is to predict the missing elements. More precisely, during training the model is deprived of one or more elements of an input sequence and must generate a probability distribution over the vocabulary for each of the missing items. We then use the cross-entropy loss from each of the model's predictions to drive the learning process.
This approach can be generalized to any of a variety of methods that corrupt the training input and then ask the model to recover the original input. Examples of the kinds of manipulations that have been used include masks, substitutions, reorderings, deletions, and extraneous insertions into the training text.
11.2.1 Masking Words
The original approach to training bidirectional encoders is called Masked Language Modeling (MLM) (Devlin et al., 2019). As with the language model training methods we've already seen, MLM uses unannotated text from a large corpus. Here, the model is presented with a series of sentences from the training corpus in which a random sample of tokens from each training sequence is selected for use in the learning task. Once chosen, a token is used in one of three ways:
• It is replaced with the unique vocabulary token [MASK].
• It is replaced with another token from the vocabulary, randomly sampled based on token unigram probabilities.
• It is left unchanged.
In BERT, 15% of the input tokens in a training sequence are sampled for learning. Of these, 80% are replaced with [MASK], 10% are replaced with randomly selected tokens, and the remaining 10% are left unchanged.
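The following sketch implements this sampling policy over an already-tokenized sequence. The [MASK] string, the 15% rate, and the 80/10/10 split follow the BERT recipe described above; the replacement sampler is a simplification (uniform over a toy vocabulary rather than unigram-weighted).

```python
import random

def mlm_corrupt(tokens, vocab, mask_rate=0.15, seed=0):
    """Return (corrupted tokens, indices selected for the MLM loss)."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    n_to_sample = max(1, round(mask_rate * len(tokens)))
    selected = rng.sample(range(len(tokens)), n_to_sample)
    for i in selected:
        r = rng.random()
        if r < 0.8:
            corrupted[i] = "[MASK]"           # 80%: replace with [MASK]
        elif r < 0.9:
            corrupted[i] = rng.choice(vocab)  # 10%: random vocabulary token
        # else: 10% leave the token unchanged
    return corrupted, selected

tokens = "so long and thanks for all the fish".split()
vocab = ["apricot", "kumquat", "fish", "the", "and"]   # toy vocabulary
corrupted, targets = mlm_corrupt(tokens, vocab)
```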
The MLM training objective is to predict the original inputs for each of the masked tokens using a bidirectional encoder of the kind described in the last section. The cross-entropy loss from these predictions drives the training process for all the parameters in the model. Note that all of the input tokens play a role in the self-attention process, but only the sampled tokens are used for learning.
More specifically, the original input sequence is first tokenized using a subword model. The sampled items which drive the learning process are chosen from among the set of tokenized inputs. Word embeddings for all of the tokens in the input are retrieved from the word embedding matrix and then combined with positional embeddings to form the input to the transformer.
Figure 11.5 Masked language model training. In this example, three of the input tokens are selected, two of which are masked and the third replaced with an unrelated word. The probabilities assigned by the model to these three items are used as the training loss. (In this and subsequent figures we display the input as words rather than subword tokens; the reader should keep in mind that BERT and similar models actually use subword tokens instead.)
Fig. 11.5 illustrates this approach with a simple example. Here, long, thanks and the have been sampled from the training sequence, with the first two masked and the replaced with the randomly sampled token apricot. The resulting embeddings are passed through a stack of bidirectional transformer blocks. To produce a probability distribution over the vocabulary for each of the masked tokens, the output vector from the final transformer layer for each masked token is multiplied by a learned set of classification weights W_V ∈ ℝ^{|V|×d_h} and then passed through a softmax to yield the required predictions over the vocabulary.
y_i = softmax(W_V h_i)
With a predicted probability distribution for each masked item, we can use cross-entropy to compute the loss for each masked item: the negative log probability assigned to the actual masked word, as shown in Fig. 11.5. The gradients that form the basis for the weight updates are based on the average loss over the sampled learning items from a single training sequence (or batch of sequences).
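A minimal sketch of this loss computation: given the final-layer output vector for each sampled position, multiply by the classification weights W_V, take a softmax over the vocabulary, and average the negative log probability of the original token. The sizes and random values here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mlm_loss(h_selected, gold_ids, W_V):
    """h_selected: (m, d_h) final-layer vectors for the m sampled tokens.
    gold_ids: (m,) vocabulary indices of the original tokens.
    W_V: (|V|, d_h) classification weights shared across positions."""
    losses = []
    for h_i, gold in zip(h_selected, gold_ids):
        probs = softmax(W_V @ h_i)           # distribution over the vocabulary
        losses.append(-np.log(probs[gold]))  # cross-entropy for this token
    return float(np.mean(losses))            # average over the sampled items

rng = np.random.default_rng(0)
V, d_h, m = 1000, 64, 3                      # toy sizes; BERT uses |V|≈30k, d_h=768
loss = mlm_loss(rng.normal(size=(m, d_h)),
                rng.integers(0, V, size=m),
                rng.normal(size=(V, d_h)) * 0.02)
```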
11.2.2 Masking Spans
For many NLP applications, the natural unit of interest may be larger than a single word (or token). Question answering, syntactic parsing, coreference and semantic role labeling applications all involve the identification and classification of constituents, or phrases. This suggests that a span-oriented masked learning objective might provide improved performance on such tasks.
A span is a contiguous sequence of one or more words selected from a training text, prior to subword tokenization. In span-based masking, a set of randomly selected spans from a training sequence is chosen. In the SpanBERT work that originated this technique (Joshi et al., 2020), a span length is first chosen by sampling from a geometric distribution that is biased towards shorter spans, with an upper bound of 10. Given this span length, a starting location consistent with the desired span length and the length of the input is sampled uniformly.
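A sketch of such a span sampler: the length is drawn from a geometric distribution (the success probability p = 0.2 here is an assumption, chosen only to bias toward short spans) and clipped to 10, then a start position is drawn uniformly from the locations where the span fits.

```python
import random

def sample_span(seq_len, p=0.2, max_len=10, rng=random):
    """Sample (start, length) for span masking.
    Length ~ Geometric(p), biased towards short spans, clipped at max_len."""
    length = 1
    while rng.random() > p and length < max_len:
        length += 1                     # geometric: keep extending w.p. (1 - p)
    length = min(length, seq_len)
    start = rng.randrange(0, seq_len - length + 1)   # uniform valid start
    return start, length

random.seed(0)
spans = [sample_span(seq_len=40) for _ in range(5)]
```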
Once a span is chosen for masking, all the words within the span are substituted according to the same regime used in BERT: 80% of the time the span elements are substituted with the [MASK] token, 10% of the time they are replaced by randomly sampled words from the vocabulary, and 10% of the time they are left as is. Note that this substitution process is done at the span level-all the tokens in a given span are substituted using the same method. As with BERT, the total token substitution is limited to 15% of the training sequence input. Having selected and masked the training span, the input is passed through the standard transformer architecture to generate contextualized representations of the input tokens.
Downstream span-based applications rely on span representations derived from the tokens within the span, as well as the start and end points, or boundaries, of a span. Representations for these boundaries are typically derived from the first and last words of a span, the words immediately preceding and following the span, or some combination of them. The SpanBERT learning objective augments the MLM objective with a boundary-oriented component called the Span Boundary Objective (SBO). The SBO relies on the model's ability to predict the words within a masked span from the words immediately preceding and following it. This prediction is made using the output vectors associated with the words that immediately precede and follow the span being masked, along with a positional embedding that signals which word in the span is being predicted:
L(x) = L_MLM(x) + L_SBO(x)    (11.7)
L_SBO(x) = −log P(x | x_s, x_e, p_x)    (11.8)
where s denotes the position of the word before the span and e denotes the position of the word after its end. The prediction for a given position i within the span is produced by concatenating the output embeddings for words s and e (the span boundary vectors) with a positional embedding for position i and passing the result through a 2-layer feedforward network:
s = FFNN([y_{s−1}; y_{e+1}; p_{i−s+1}])    (11.9)
z = softmax(E s)    (11.10)
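The sketch below mirrors the shape of Eqs. 11.9–11.10: output vectors just outside the span boundaries are concatenated with a learned within-span position embedding, passed through a two-layer feedforward network, and projected onto the vocabulary. All dimensions and initializations are toy assumptions, and ReLU stands in for the actual SpanBERT nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_p, d_ff, V = 64, 16, 128, 1000        # toy sizes (assumptions)

W1 = rng.normal(size=(d_ff, 2 * d_h + d_p)) * 0.02   # SBO head, layer 1
W2 = rng.normal(size=(d_h, d_ff)) * 0.02             # SBO head, layer 2
P = rng.normal(size=(12, d_p)) * 0.02        # embeddings for within-span positions
E = rng.normal(size=(V, d_h)) * 0.02         # vocabulary projection of Eq. 11.10

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sbo_predict(y_before, y_after, pos_in_span):
    """Distribution over the vocabulary for the token at `pos_in_span` of a
    masked span, using only the outputs just outside the span boundaries."""
    x = np.concatenate([y_before, y_after, P[pos_in_span]])
    h = np.maximum(0, W1 @ x)                # ReLU here is a simplification;
    s = W2 @ h                               # SpanBERT uses GeLU + LayerNorm
    return softmax(E @ s)                    # Eq. 11.10

probs = sbo_predict(rng.normal(size=d_h), rng.normal(size=d_h), pos_in_span=2)
```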
The final loss is the sum of the BERT MLM loss and the SBO loss. Fig. 11.6 illustrates this with one of our earlier examples. Here the span selected is and thanks for, which spans positions 3 to 5. The total loss associated with the masked token thanks is the sum of the cross-entropy loss generated from the prediction of thanks from the output y_4, plus the cross-entropy loss from the prediction of thanks from the output vectors y_2 and y_6 and the embedding for position 4 in the span.
Figure 11.6 Span-based language model training. In this example, a span of length 3 is selected for training and all of the words in the span are masked. The figure illustrates the loss computed for the word thanks; the loss for the entire span is based on the loss for all three of the words in the span.
11.2.3 Next Sentence Prediction
The focus of mask-based learning is on predicting words from surrounding contexts with the goal of producing effective word-level representations. However, an important class of applications involves determining the relationship between pairs of sentences. These include tasks like paraphrase detection (detecting if two sentences have similar meanings), entailment (detecting if the meanings of two sentences entail or contradict each other), and discourse coherence (deciding if two neighboring sentences form a coherent discourse).
To capture the kind of knowledge required for applications such as these, BERT introduced a second learning objective called Next Sentence Prediction (NSP). In this task, the model is presented with pairs of sentences and is asked to predict whether each pair consists of an actual pair of adjacent sentences from the training corpus or a pair of unrelated sentences. In BERT, 50% of the training pairs consisted of positive pairs, and in the other 50% the second sentence of a pair was randomly selected from elsewhere in the corpus. The NSP loss is based on how well the model can distinguish true pairs from random pairs. To facilitate NSP training, BERT introduces two new tokens to the input representation (tokens that will prove useful for fine-tuning as well). After tokenizing the input with the subword model, the token [CLS] is prepended to the input sentence pair, and the token [SEP] is placed between the sentences and after the final token of the second sentence. Finally, embeddings representing the first and second segments of the input are added to the word and positional embeddings to allow the model to more easily distinguish the input sentences.
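A sketch of how an NSP training pair might be assembled. The [CLS] and [SEP] tokens follow BERT; the whitespace "tokenizer" is a stand-in for a real subword tokenizer, and the corpus here is a toy assumption.

```python
import random

def make_nsp_example(sent_a, all_sentences, next_sent, rng=random):
    """Build one NSP training instance: 50% true next sentence, 50% random."""
    is_next = rng.random() < 0.5
    sent_b = next_sent if is_next else rng.choice(all_sentences)
    tokens = (["[CLS]"] + sent_a.split() + ["[SEP]"]
              + sent_b.split() + ["[SEP]"])    # whitespace stand-in for subwords
    # Segment ids let the model tell the two sentences apart; their embeddings
    # are added to the word and positional embeddings.
    segment_ids = [0] * (len(sent_a.split()) + 2) + [1] * (len(sent_b.split()) + 1)
    return tokens, segment_ids, is_next

random.seed(1)
corpus = ["The cat sat down .", "Stocks fell sharply today .", "It began to rain ."]
tokens, segments, label = make_nsp_example("The cat walked in .", corpus,
                                           next_sent="The cat sat down .")
```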
During training, the output vector from the final layer associated with the [CLS] token represents the next sentence prediction. As with the MLM objective, a learned set of classification weights W_NSP ∈ ℝ^{2×d_h} is used to produce a two-class prediction from the raw [CLS] vector:
y_i = softmax(W_NSP h_i)
Cross-entropy is used to compute the NSP loss for each sentence pair presented to the model. Fig. 11.7 illustrates the overall NSP training setup. In BERT, the NSP loss was used in conjunction with the MLM training objective to form the final loss.
Figure 11.7 An example of the NSP loss calculation.
11.2.4 Training Regimes
The corpus used in training BERT and other early transformer-based language models consisted of an 800 million word corpus of book texts called BooksCorpus (Zhu et al., 2015) and a 2.5 billion word corpus derived from the English Wikipedia, for a combined size of 3.3 billion words. The BooksCorpus is no longer used (for intellectual property reasons), and in general, as we'll discuss later, state-of-the-art models employ corpora that are orders of magnitude larger than these early efforts. To train the original BERT models, pairs of sentences were selected from the training corpus according to the next sentence prediction 50/50 scheme. Pairs were sampled so that their combined length was less than the 512 token input limit. Tokens within these sentence pairs were then masked using the MLM approach, with the combined loss from the MLM and NSP objectives used as the final loss. Approximately 40 passes (epochs) over the training data were required for the model to converge.
The result of this pretraining process consists of both the learned word embeddings and all the parameters of the bidirectional encoder that are used to produce contextual embeddings for novel inputs.
11.2.5 Contextual Embeddings
Given a pretrained language model and a novel input sentence, we can think of the output of the model as constituting contextual embeddings for each token in the input. These contextual embeddings can be used as a contextual representation of the meaning of the input token for any task requiring a model of word meaning.
Contextual embeddings are thus vectors representing some aspect of the meaning of a token in context. For example, given a sequence of input tokens x_1, ..., x_n, we can use the output vector y_i from the final layer of the model as a representation of the meaning of token x_i in the context of sentence x_1, ..., x_n. Or, instead of just using the vector y_i from the final layer of the model, it's common to compute a representation for x_i by averaging the output vectors y_i from each of the last four layers of the model.
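A sketch of the two options just described, assuming we already have the per-layer output vectors for one token stacked in an array (layer 0 first, final layer last); the layer count and width are toy assumptions.

```python
import numpy as np

def contextual_embedding(layer_outputs, last_k=4):
    """layer_outputs: array of shape (n_layers, d_h) holding the output
    vector for one token at every transformer layer."""
    final_layer = layer_outputs[-1]                    # option 1: last layer only
    avg_last_k = layer_outputs[-last_k:].mean(axis=0)  # option 2: mean of last k layers
    return final_layer, avg_last_k

rng = np.random.default_rng(0)
y_final, y_avg4 = contextual_embedding(rng.normal(size=(12, 768)))  # 12 layers, d_h = 768
```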
Just as we used static embeddings like word2vec to represent the meaning of words, we can use contextual embeddings as representations of word meanings in context for any task that might require a model of word meaning. Where static embeddings represent the meaning of word types (vocabulary entries), contextual embeddings represent the meaning of word tokens: instances of a particular word type in a particular context. Contextual embeddings can thus be used for tasks like measuring the semantic similarity of two words in context, and are useful in linguistic tasks that require models of word meaning.
In the next section, however, we'll see the most common use of these representations: as embeddings of words or even entire sentences that are the inputs to classifiers in the fine-tuning process for downstream NLP applications.
11.3 Transfer Learning through Fine-Tuning
The power of pretrained language models lies in their ability to extract generalizations from large amounts of text, generalizations that are useful for myriad downstream applications. To make practical use of these generalizations, we need to create interfaces from these models to downstream applications through a process called fine-tuning. Fine-tuning facilitates the creation of applications on top of pretrained models through the addition of a small set of application-specific parameters. The fine-tuning process consists of using labeled data from the application to train these additional application-specific parameters. Typically, this training will either freeze or make only minimal adjustments to the pretrained language model parameters.
The following sections introduce fine-tuning methods for the most common applications including sequence classification, sequence labeling, sentence-pair inference, and span-based operations.
11.3.1 Sequence Classification
Sequence classification applications often represent an input sequence with a single consolidated representation. With RNNs, we used the hidden layer associated with the final input element to stand for the entire sequence. A similar approach is used with transformers: an additional vector is added to the model to stand for the entire sequence. This vector is sometimes called the sentence embedding since it refers to the entire sequence, although the term 'sentence embedding' is also used in other ways. In BERT, the [CLS] token plays the role of this embedding. This unique token is added to the vocabulary and is prepended to the start of all input sequences, both during pretraining and encoding. The output vector in the final layer of the model for the [CLS] input represents the entire input sequence and serves as the input to a classifier head, a logistic regression or neural network classifier that makes the relevant decision.
As an example, let's return to the problem of sentiment classification. A simple approach to fine-tuning a classifier for this application involves learning a set of weights, W_C, to map the output vector for the [CLS] token, y_CLS, to a set of scores over the possible sentiment classes. Assuming a three-way sentiment classification task (positive, negative, neutral) and dimensionality d_h for the size of the language model hidden layers gives W_C ∈ ℝ^{3×d_h}. Classification of unseen documents proceeds by passing the input text through the pretrained language model to generate y_CLS, multiplying it by W_C, and finally passing the resulting vector through a softmax.
y = softmax(W_C y_CLS)    (11.11)
Fine-tuning the values in W_C requires supervised training data consisting of input sequences labeled with the appropriate class. Training proceeds in the usual way: cross-entropy loss between the softmax output and the correct answer is used to drive the learning that produces W_C.
A key difference from what we've seen earlier with neural classifiers is that this loss can be used not only to learn the weights of the classifier, but also to update the weights of the pretrained language model itself. In practice, reasonable classification performance is typically achieved with only minimal changes to the language model parameters, often limited to updates over the final few layers of the transformer. Fig. 11.8 illustrates this overall approach to sequence classification.
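A minimal sketch of the fine-tuning step for a three-way sentiment head: the only new parameters are W_C, and one SGD step on the cross-entropy loss is shown with the language-model parameters treated as frozen (so y_CLS is just an input here). The learning rate and random inputs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, n_classes, lr = 768, 3, 1e-3
W_C = rng.normal(size=(n_classes, d_h)) * 0.02   # classifier head of Eq. 11.11

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def finetune_step(y_cls, gold_class):
    """One SGD step on the head only; y_cls is the frozen encoder's [CLS] output."""
    global W_C
    probs = softmax(W_C @ y_cls)
    loss = -np.log(probs[gold_class])            # cross-entropy
    grad_logits = probs.copy()
    grad_logits[gold_class] -= 1.0               # d loss / d logits
    W_C -= lr * np.outer(grad_logits, y_cls)     # d loss / d W_C
    return loss

loss = finetune_step(y_cls=rng.normal(size=d_h), gold_class=2)
```

Allowing updates to the final transformer layers as well would simply mean backpropagating this same loss a few layers further into the encoder.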
11.3.2 Pair-Wise Sequence Classification
As mentioned in Section 11.2.3, an important type of problem involves the classification of pairs of input sequences. Practical applications that fall into this class include logical entailment, paraphrase detection and discourse analysis.
Fine-tuning an application for one of these tasks proceeds just as with pretraining using the NSP objective. During fine-tuning, pairs of labeled sentences from the supervised training data are presented to the model. As with sequence classification, the output vector associated with the prepended [CLS] token represents the model's view of the input pair. And as with NSP training, the two inputs are separated by a [SEP] token. To perform classification, the [CLS] vector is multiplied by a set of learned classification weights and passed through a softmax to generate label predictions, which are then used to update the weights.
As an example, let's consider an entailment classification task with the Multi-Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2018). In the task of natural language inference or NLI, also called recognizing textual entailment, a model is presented with a pair of sentences and must classify the relationship between their meanings. For example, in the MultiNLI corpus, pairs of sentences are given one of 3 labels: entails, contradicts and neutral. These labels describe a relationship between the meaning of the first sentence (the premise) and the meaning of the second sentence (the hypothesis). Here are representative examples of each class from the corpus:
• Neutral
  a: Jon walked back to the town to the smithy.
  b: Jon traveled back to his hometown.
• Contradicts
  a: Tourist Information offices can be very helpful.
  b: Tourist Information offices are never of any help.
• Entails
  a: I'm confused.
  b: Not all of it is very clear to me.
A relationship of contradicts means that the premise contradicts the hypothesis; entails means that the premise entails the hypothesis; neutral means that neither is necessarily true. The meaning of these labels is looser than strict logical entailment or contradiction, indicating only that a typical human reading the sentences would most likely interpret the meanings in this way.
To fine-tune a classifier for the MultiNLI task, we pass the premise/hypothesis pairs through a bidirectional encoder as described above and use the output vector for the [CLS] token as the input to the classification head. As with ordinary sequence classification, this head provides the input to a three-way classifier that can be trained on the MultiNLI training corpus.
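Concretely, the only changes from single-sequence classification are the paired [CLS]/[SEP] input format and the three-way label set. A brief sketch (label names from MultiNLI; whitespace splitting again stands in for subword tokenization):

```python
NLI_LABELS = ["entails", "contradicts", "neutral"]

def encode_nli_pair(premise, hypothesis):
    """Format a premise/hypothesis pair the way BERT-style encoders expect."""
    return (["[CLS]"] + premise.split() + ["[SEP]"]
            + hypothesis.split() + ["[SEP]"])

tokens = encode_nli_pair("I'm confused .", "Not all of it is very clear to me .")
# The encoder's [CLS] output then feeds a 3-way head exactly as in Eq. 11.11,
# with W_C of shape (3, d_h) trained on the MultiNLI labels.
```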
11.3.3 Sequence Labelling
Sequence labelling tasks, such as part-of-speech tagging or BIO-based named entity recognition, follow the same basic classification approach. Here, the final output vector corresponding to each input token is passed to a classifier that produces a softmax distribution over the possible set of tags. Again, assuming a simple classifier consisting of a single feedforward layer followed by a softmax, the set of weights to be learned for this additional layer is W_K ∈ ℝ^{k×d_h}, where k is the number of possible tags for the task. As with RNNs, a greedy approach, where the argmax tag for each token is taken as a likely answer, can be used to generate the final output tag sequence. Fig. 11.9 illustrates an example of this approach.
y_i = softmax(W_K z_i)    (11.12)
t_i = argmax_k(y_i)    (11.13)
Alternatively, the distribution over labels provided by the softmax for each input token can be passed to a conditional random field (CRF) layer which can take global tag-level transitions into account.
Figure 11.9 Sequence labeling for part-of-speech tagging with a bidirectional transformer encoder (input: [CLS] Janet will back the bill; output tags: NNP MD VB DT NN). The output vector for each input token is passed to a simple k-way classifier.
A complication with this approach arises from the use of subword tokenization schemes such as WordPiece or Byte Pair Encoding. Supervised training data for tasks like named entity recognition (NER) is typically in the form of BIO tags associated with text segmented at the word level. For example, the following sentence containing two named entities:
[LOC Mt. Sanitas] is in [LOC Sunshine Canyon].
would have the following set of per-word BIO tags:
(11.14) Mt./B-LOC Sanitas/I-LOC is/O in/O Sunshine/B-LOC Canyon/I-LOC ./O
Unfortunately, the WordPiece tokenization for this sentence yields the following sequence of tokens, which doesn't align directly with the BIO tags in the ground truth annotation:
'Mt', '.', 'San', '##itas', 'is', 'in', 'Sunshine', 'Canyon', '.'
To deal with this misalignment, we need a way to assign BIO tags to subword tokens during training and a corresponding way to recover word-level tags from subwords during decoding. For training, we can just assign the gold-standard tag associated with each word to all of the subword tokens derived from it.
For decoding, the simplest approach is to use the argmax BIO tag associated with the first subword token of a word. Thus, in our example, the BIO tag assigned to "Mt" would be assigned to "Mt." and the tag assigned to "San" would be assigned to "Sanitas", effectively ignoring the information in the tags assigned to "." and "##itas". More complex approaches combine the distribution of tag probabilities across the subwords in an attempt to find an optimal word-level tag.
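A sketch of both directions of this alignment, under the assumption that the tokenizer marks word-internal subwords with a '##' prefix as WordPiece does (the toy tokenizer below is a stand-in for the real one):

```python
def expand_tags_to_subwords(words, word_tags, tokenize):
    """Training direction: copy each word's gold BIO tag to all its subwords."""
    subwords, subword_tags = [], []
    for word, tag in zip(words, word_tags):
        pieces = tokenize(word)
        subwords.extend(pieces)
        subword_tags.extend([tag] * len(pieces))
    return subwords, subword_tags

def collapse_tags_to_words(subwords, predicted_tags):
    """Decoding direction: keep only the tag predicted for each word's first subword."""
    word_tags = []
    for piece, tag in zip(subwords, predicted_tags):
        if not piece.startswith("##"):       # first subword of a new word
            word_tags.append(tag)
    return word_tags

# Toy tokenizer standing in for WordPiece.
toy = {"Sanitas": ["San", "##itas"]}
tokenize = lambda w: toy.get(w, [w])

subwords, tags = expand_tags_to_subwords(
    ["Mt.", "Sanitas", "is", "in", "Sunshine", "Canyon", "."],
    ["B-LOC", "I-LOC", "O", "O", "B-LOC", "I-LOC", "O"],
    tokenize)
recovered = collapse_tags_to_words(subwords, tags)   # back to 7 word-level tags
```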
11.3.4 Fine-tuning for Span-Based Applications
Span-oriented applications operate in a middle ground between sequence level and token level tasks. That is, in span-oriented applications the focus is on generating and operating with representations of contiguous sequences of tokens. Typical operations include identifying spans of interest, classifying spans according to some labeling scheme, and determining relations among discovered spans. Applications include named entity recognition, question answering, syntactic parsing, semantic role labeling and coreference resolution.
Formally, given an input sequence x consisting of T tokens, (x_1, x_2, ..., x_T), a span is a contiguous sequence of tokens with start i and end j such that 1 ≤ i ≤ j ≤ T. This formulation results in a total set of spans equal to T(T−1)/2. For practical purposes, span-based models often impose an application-specific length limit L, so the legal spans are limited to those where j − i < L. In the following, we'll refer to the enumerated set of legal spans in x as S(x).
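A sketch of enumerating S(x) under a length limit, with spans represented as (i, j) index pairs into the token sequence:

```python
def enumerate_spans(n_tokens, max_len):
    """All spans (i, j) with 1 <= i <= j <= n_tokens and length <= max_len.
    Indices are 1-based to match the text."""
    return [(i, j)
            for i in range(1, n_tokens + 1)
            for j in range(i, min(i + max_len - 1, n_tokens) + 1)]

spans = enumerate_spans(n_tokens=6, max_len=3)
# 6 unigrams + 5 bigrams + 4 trigrams = 15 candidate spans
```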
The first step in fine-tuning a pretrained language model for a span-based application is using the contextualized input embeddings from the model to generate representations for all the spans in the input. Most schemes for representing spans make use of two primary components: representations of the span boundaries and summary representations of the contents of each span. To compute a unified span representation, we concatenate the boundary representations with the summary representation.
In the simplest possible approach, we can use the contextual embeddings of the start and end tokens of a span as the boundaries, and the average of the output embeddings within the span as the summary representation.
g_ij = (1 / (j − i + 1)) Σ_{k=i}^{j} h_k    (11.15)
spanRep_ij = [h_i; h_j; g_ij]    (11.16)
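A sketch of Eqs. 11.15–11.16 in NumPy, using 1-based span indices as in the text; the token count and hidden size are toy assumptions.

```python
import numpy as np

def span_representation(h, i, j):
    """h: (n_tokens, d_h) contextual embeddings; i, j: 1-based span boundaries."""
    g_ij = h[i - 1:j].mean(axis=0)                       # Eq. 11.15: span average
    return np.concatenate([h[i - 1], h[j - 1], g_ij])    # Eq. 11.16: [h_i; h_j; g_ij]

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 64))               # 8 tokens, d_h = 64 (toy sizes)
rep = span_representation(h, i=3, j=5)     # shape (3 * 64,)
```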
A weakness of this approach is that it doesn't distinguish the use of a word's embedding as the beginning of a span from its use as the end of one. Therefore, more elaborate schemes for representing the span boundaries involve learned representations for start and end points through the use of two distinct feedforward networks:
s_i = FFNN_start(h_i)    (11.17)
e_j = FFNN_end(h_j)    (11.18)
spanRep_ij = [s_i; e_j; g_ij]    (11.19)
Similarly, a simple average of the vectors in a span is unlikely to be an optimal representation of a span since it treats all of a span's embeddings as equally important. For many applications, a more useful representation would be centered around the head of the phrase corresponding to the span. One method for getting at such information in the absence of a syntactic parse is to use a standard self-attention layer to generate a span representation.
g_ij = SelfATTN(h_{i:j})    (11.20)
Now, given span representations g for each span in S(x), classifiers can be fine-tuned to generate application-specific scores for various span-oriented tasks: binary span identification (is this a legitimate span of interest or not?), span classification (what kind of span is this?), and span relation classification (how are these two spans related?).
To ground this discussion, let's return to named entity recognition (NER). Given a scheme for representing spans and a set of named entity types, a span-based approach to NER is a straightforward classification problem where each span in an input is assigned a class label. More formally, given an input sequence x, we want to assign a label y, from the set of valid NER labels, to each of the spans in S(x). Since most of the spans in a given input will not be named entities, we'll add the label NULL to the set of types in Y.

y_ij = softmax(FFNN(g_ij))    (11.21)

Figure 11.10 A span-oriented approach to named entity classification. The figure only illustrates the computation for 2 spans corresponding to ground truth named entities. In reality, the network scores all of the T(T−1)/2 spans in the text, that is, all the unigrams, bigrams, trigrams, etc. up to the length limit.
With this approach, fine-tuning entails using supervised training data to learn the parameters of the final classifier, as well as the weights used to generate the boundary representations, and the weights in the self-attention layer that generates the span content representation. During training, the model's predictions for all spans are compared to their gold-standard labels and cross-entropy loss is used to drive the training.
During decoding, each span is scored using a softmax over the final classifier output to generate a distribution over the possible labels, with the argmax label for each span taken as the answer. Fig. 11.10 illustrates this approach with an example. A variation on this scheme designed to improve precision adds a calibrated threshold to the labeling of a span as anything other than NULL.
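A sketch of this decoding step: every candidate span is scored over the entity types plus NULL, the argmax is taken per span, and an optional confidence threshold suppresses weak non-NULL predictions. The label set, threshold, and sizes are illustrative assumptions, and a linear head stands in for the FFNN of Eq. 11.21.

```python
import numpy as np

LABELS = ["NULL", "PER", "ORG", "LOC"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_spans(span_reps, W, threshold=0.5):
    """span_reps: dict mapping (i, j) -> span representation vector.
    W: (n_labels, d_rep) classifier weights. Returns non-NULL predictions."""
    predictions = {}
    for span, rep in span_reps.items():
        probs = softmax(W @ rep)                   # per-span label distribution
        k = int(np.argmax(probs))
        if LABELS[k] != "NULL" and probs[k] >= threshold:
            predictions[span] = (LABELS[k], float(probs[k]))
    return predictions

rng = np.random.default_rng(0)
d_rep = 192
reps = {(1, 2): rng.normal(size=d_rep), (5, 6): rng.normal(size=d_rep)}
W = rng.normal(size=(len(LABELS), d_rep)) * 0.1
entities = classify_spans(reps, W)
```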
There are two significant advantages to a span-based approach to NER over a BIO-based per-word labeling approach. The first is that BIO-based approaches are prone to a labeling mismatch problem: every label in a longer named entity must be correct for an output to be judged correct. Returning to the example in Fig. 11.10, a labeling that gets even one token of a multi-token entity wrong would be judged entirely wrong, whereas span-based approaches only have to make one classification for each span. The second advantage is that span-based approaches naturally accommodate embedded named entities. For example, in this example both United Airlines and United Airlines Holding are legitimate named entities. The BIO approach has no way of encoding this embedded structure, but the span-based approach can naturally label both, since the spans are labeled separately.
11.6 Potential Harms from Language Models
Large pretrained neural language models exhibit many of the potential harms discussed in Chapter 4 and Chapter 6. Many of these harms become realized when pretrained language models are fine-tuned to downstream tasks, particularly those involving text generation, such as in assistive technologies like web search query completion, or predictive typing for email (Olteanu et al., 2020) .
For example, language models can generate toxic language. Gehman et al. (2020) show that many kinds of completely non-toxic prompts can nonetheless lead large language models to output hate speech and abuse. Brown et al. (2020) and Sheng et al. (2019) showed that large language models generate sentences displaying negative attitudes toward minority identities such as being Black or gay.
Indeed, language models are biased in a number of ways by the distributions of their training data. Gehman et al. (2020) show that large language model training datasets include toxic text scraped from banned sites. In addition to problems of toxicity, internet data is disproportionately generated by authors from developed countries, and many large language models train on data from Reddit, whose authors skew male and young. Such biased population samples likely skew the resulting generation away from the perspectives or topics of underrepresented populations. Furthermore, language models can amplify demographic and other biases in training data, just as we saw for embedding models in Chapter 6.
Language models can also be a tool for generating text for misinformation, phishing, radicalization, and other socially harmful activities (Brown et al., 2020). McGuffie and Newhouse (2020) show how large language models generate text that emulates online extremists, with the risk of amplifying extremist movements and their attempts to radicalize and recruit.
Finally, there are important privacy issues. Language models, like other machine learning models, can leak information about their training data. It is thus possible for an adversary to extract individual training-data phrases from a language model, such as an individual person's name, phone number, and address (Henderson et al. 2017, Carlini et al. 2020). This is a problem if large language models are trained on private datasets such as electronic health records (EHRs).
Mitigating all these harms is an important but unsolved research question in NLP. Extra pretraining (Gururangan et al., 2020) on non-toxic subcorpora seems to reduce a language model's tendency to generate toxic language somewhat (Gehman et al., 2020). And analyzing the data used to pretrain large language models is important for understanding toxicity and bias in generation, as well as privacy, making it extremely important that language models include datasheets or model cards giving full replicable information on the corpora used to train them.
This chapter has introduced the topic of transfer learning from pretrained language models. Here's a summary of the main points that we covered:
• Bidirectional encoders can be used to generate contextualized representations of input embeddings using the entire input context.
• Pretrained language models based on bidirectional encoders can be learned using a masked language model objective where a model is trained to guess the missing information from an input.
• Pretrained language models can be fine-tuned for specific applications by adding lightweight classifier layers on top of the outputs of the pretrained model.