Glossary¶
General terms¶
autoencoding models: see MLM
autoregressive models: see CLM
CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the next word. It’s usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep.
MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.
multimodal: a task that combines texts with another kind of input (for instance images).
NLG: natural language generation, all tasks related to generating text (for instance talking with transformers, translation).
NLP: natural language processing, a generic way to say “deal with texts”.
NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying the whole text or individual words).
pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or masking some words and trying to predict them (see MLM).
RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or summarization models (such as Bart or T5).
token: a part of a sentence, usually a word, but can also be a subword (uncommon words are often split into subwords) or a punctuation symbol.
Model inputs¶
Every model is different yet bears similarities with the others. Therefore most models use the same inputs, which are detailed here alongside usage examples.
Input IDs¶
The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.
Each tokenizer works differently but the underlying mechanism remains the same. Here’s an example using the BERT tokenizer, which is a WordPiece tokenizer:
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence = "A Titan RTX has 24GB of VRAM"
The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.
>>> tokenized_sequence = tokenizer.tokenize(sequence)
The tokens are either words or subwords. Here for instance, “VRAM” wasn’t in the model vocabulary, so it’s been split into “V”, “RA” and “M”. To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for “RA” and “M”:
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of huggingface/tokenizers for peak performance.
>>> inputs = tokenizer(sequence)
The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key “input_ids”:
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
Note that the tokenizer automatically adds “special tokens” (if the associated model relies on them) which are special IDs the model sometimes uses.
If we decode the previous sequence of ids,
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
we will see
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
because this is the way a BertModel is going to expect its inputs.
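To make the role of the special tokens concrete, here is a minimal sketch (reusing the tokenizer, sequence and tokenized_sequence objects defined above) showing that encoding with add_special_tokens=False yields exactly the IDs obtained by converting the printed tokens directly:
>>> ids_without_special_tokens = tokenizer(sequence, add_special_tokens=False)["input_ids"]
>>> print(ids_without_special_tokens)
[138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107]
>>> # Converting the tokens one by one gives the same IDs, without [CLS] (101) and [SEP] (102).
>>> print(tokenizer.convert_tokens_to_ids(tokenized_sequence))
[138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107]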
Attention mask¶
The attention mask is an optional argument used when batching sequences together. This argument indicates to the model which tokens should be attended to, and which should not.
For example, consider these two sequences:
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
The encoded versions have different lengths:
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
Therefore, we can’t put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one.
In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this:
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
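The second option mentioned above, truncating the longer sequence down to the length of the shorter one, can be sketched the same way with the truncation and max_length arguments (a minimal example reusing the sequences defined above):
>>> truncated_sequences = tokenizer([sequence_a, sequence_b], truncation=True, max_length=len(encoded_sequence_a))
>>> # Both sequences now fit in the length of the shorter (first) one.
>>> [len(ids) for ids in truncated_sequences["input_ids"]]
[8, 8]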
The padded sequences can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the BertTokenizer, 1 indicates a value that should be attended to, while 0 indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key “attention_mask”:
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
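To see both tensors consumed together, here is a minimal sketch (assuming PyTorch is installed) that passes the padded batch to a BertModel; this is an illustration of the mechanism, not the only way to feed the model:
>>> from transformers import BertModel
>>> model = BertModel.from_pretrained("bert-base-cased")
>>> model_inputs = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")
>>> # The dictionary holds "input_ids" and "attention_mask" as PyTorch tensors.
>>> outputs = model(**model_inputs)
>>> outputs[0].shape  # last hidden state: (batch_size, sequence_length, hidden_size)
torch.Size([2, 19, 768])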
Token Type IDs¶
Some models’ purpose is to do sequence classification or question answering. These require two different sequences to be joined in a single “input_ids” entry, which is usually performed with the help of special tokens, such as the classifier ([CLS]) and separator ([SEP]) tokens. For example, the BERT model builds its two sequence input as such:
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
We can use our tokenizer to automatically generate such a sentence by passing the two sequences to the tokenizer as two arguments (and not a list, like before) like this:
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"
>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
which will return:
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model.
The tokenizer returns this mask as the “token_type_ids” entry:
>>> encoded_dict['token_type_ids']
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
The first sequence, the “context” used for the question, has all its tokens represented by a 0, whereas the second sequence, corresponding to the “question”, has all its tokens represented by a 1.
Some models, like XLNetModel, use an additional token represented by a 2.
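Going back to the BERT example above, a minimal sketch (reusing the tokenizer and sequences defined earlier, and assuming PyTorch) shows that the whole dictionary, token type IDs included, can be handed to the model in a single call:
>>> from transformers import BertModel
>>> model = BertModel.from_pretrained("bert-base-cased")
>>> model_inputs = tokenizer(sequence_a, sequence_b, return_tensors="pt")
>>> sorted(model_inputs.keys())
['attention_mask', 'input_ids', 'token_type_ids']
>>> # "token_type_ids" is passed to the model alongside "input_ids" and "attention_mask".
>>> outputs = model(**model_inputs)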
Position IDs¶
Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (position_ids) are used by the model to identify each token’s position in the list of tokens.
They are an optional parameter. If no position_ids are passed to the model, the IDs are automatically created as absolute positional embeddings.
Absolute positional embeddings are selected in the range [0, config.max_position_embeddings - 1]. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
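As a minimal sketch (assuming PyTorch), explicit position IDs can be passed to the model; here they simply reproduce the absolute positions the model would create on its own, and the variable names are illustrative only:
>>> import torch
>>> from transformers import BertModel, BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> model = BertModel.from_pretrained("bert-base-cased")
>>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
>>> # Absolute positions 0 .. sequence_length - 1, one row for the single batch element.
>>> position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)
>>> outputs = model(**inputs, position_ids=position_ids)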
Feed Forward Chunking¶
In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., 3072 vs. 768 for bert-base-uncased).
For an input of size [batch_size, sequence_length], the memory required to store the intermediate feed forward embeddings [batch_size, sequence_length, config.intermediate_size] can account for a large fraction of the memory use. The authors of Reformer: The Efficient Transformer noticed that since the computation is independent of the sequence_length dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers [batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n individually and concatenate them afterward to [batch_size, sequence_length, config.hidden_size] with n = sequence_length, which trades increased computation time against reduced memory use, but yields a mathematically equivalent result.
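The equivalence can be checked with a small PyTorch sketch using a toy feed forward block (this is an illustration of the idea, not the library’s implementation):
>>> import torch
>>> from torch import nn
>>> hidden_size, intermediate_size, sequence_length = 4, 16, 10
>>> # Toy feed forward block: expand to the intermediate size, then project back down.
>>> feed_forward = nn.Sequential(nn.Linear(hidden_size, intermediate_size), nn.ReLU(), nn.Linear(intermediate_size, hidden_size))
>>> hidden_states = torch.randn(1, sequence_length, hidden_size)
>>> full = feed_forward(hidden_states)
>>> # Same computation applied chunk by chunk along the sequence dimension, then concatenated.
>>> chunked = torch.cat([feed_forward(chunk) for chunk in hidden_states.chunk(sequence_length, dim=1)], dim=1)
>>> torch.allclose(full, chunked)
True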
For models employing the function apply_chunking_to_forward(), the chunk_size defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If chunk_size is set to 0, no feed forward chunking is done.
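As a minimal sketch (assuming PyTorch), the chunk size is controlled through the model configuration; chunk_size_feed_forward lives on the configuration and defaults to 0, i.e. no chunking:
>>> from transformers import BertConfig, BertModel
>>> # A chunk_size_feed_forward > 0 enables feed forward chunking for models that support it.
>>> config = BertConfig(chunk_size_feed_forward=4)
>>> # Randomly initialized model, used here only to show where the setting lives.
>>> model = BertModel(config)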