How to train a new language model from scratch using Transformers and Tokenizers
julien-c
February 14, 2020
how-to-train
guide, nlp
https://huggingface.co/blog/how-to-train
# How to train a new language model from scratch using Transformers and Tokenizers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> </a> Over the past few months, we made several improvements to our [`transformers`](https://github.com/huggingface/transformers) and [`tokenizers`](https://github.com/huggingface/tokenizers) libraries, with the goal of making it easier than ever to **train a new language model from scratch**. In this post we’ll demo how to train a “small” model (84 M parameters = 6 layers, 768 hidden size, 12 attention heads) – that’s the same number of layers & heads as DistilBERT – on **Esperanto**. We’ll then fine-tune the model on a downstream task of part-of-speech tagging. Esperanto is a *constructed language* with a goal of being easy to learn. We pick it for this demo for several reasons: - it is a relatively low-resource language (even though it’s spoken by ~2 million people) so this demo is less boring than training one more English model 😁 - its grammar is highly regular (e.g. all common nouns end in -o, all adjectives in -a) so we should get interesting linguistic results even on a small dataset. - finally, the overarching goal at the foundation of the language is to bring people closer (fostering world peace and international understanding) which one could argue is aligned with the goal of the NLP community 💚 > N.B. You won’t need to understand Esperanto to understand this post, but if you do want to learn it, [Duolingo](https://www.duolingo.com/enroll/eo/en/Learn-Esperanto) has a nice course with 280k active learners. Our model is going to be called… wait for it… **EsperBERTo** 😂 <img src="/blog/assets/01_how-to-train/eo.svg" alt="Esperanto flag" style="margin: auto; display: block; width: 260px;"> ## 1. Find a dataset First, let us find a corpus of text in Esperanto. Here we’ll use the Esperanto portion of the [OSCAR corpus](https://traces1.inria.fr/oscar/) from INRIA. OSCAR is a huge multilingual corpus obtained by language classification and filtering of [Common Crawl](https://commoncrawl.org/) dumps of the Web. <img src="/blog/assets/01_how-to-train/oscar.png" style="margin: auto; display: block; width: 260px;"> The Esperanto portion of the dataset is only 299M, so we’ll concatenate with the Esperanto sub-corpus of the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download), which is comprised of text from diverse sources like news, literature, and wikipedia. The final training corpus has a size of 3 GB, which is still small – for your model, you will get better results the more data you can get to pretrain on. ## 2. Train a tokenizer We choose to train a byte-level Byte-pair encoding tokenizer (the same as GPT-2), with the same special tokens as RoBERTa. Let’s arbitrarily pick its size to be 52,000. We recommend training a byte-level BPE (rather than let’s say, a WordPiece tokenizer like BERT) because it will start building its vocabulary from an alphabet of single bytes, so all words will be decomposable into tokens (no more `<unk>` tokens!). ```python #! 
pip install tokenizers from pathlib import Path from tokenizers import ByteLevelBPETokenizer paths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")] # Initialize a tokenizer tokenizer = ByteLevelBPETokenizer() # Customize training tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) # Save files to disk tokenizer.save_model(".", "esperberto") ``` And here’s a slightly accelerated capture of the output: ![tokenizers](assets/01_how-to-train/tokenizers-fast.gif) <small>On our dataset, training took about ~5 minutes.</small> 🔥🔥 Wow, that was fast! ⚡️🔥 We now have both a `vocab.json`, which is a list of the most frequent tokens ranked by frequency, and a `merges.txt` list of merges. ```json { "<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, "<mask>": 4, "!": 5, "\"": 6, "#": 7, "$": 8, "%": 9, "&": 10, "'": 11, "(": 12, ")": 13, # ... } # merges.txt l a Ġ k o n Ġ la t a Ġ e Ġ d Ġ p # ... ``` What is great is that our tokenizer is optimized for Esperanto. Compared to a generic tokenizer trained for English, more native words are represented by a single, unsplit token. Diacritics, i.e. accented characters used in Esperanto – `ĉ`, `ĝ`, `ĥ`, `ĵ`, `ŝ`, and `ŭ` – are encoded natively. We also represent sequences in a more efficient manner. Here on this corpus, the average length of encoded sequences is ~30% smaller as when using the pretrained GPT-2 tokenizer. Here’s how you can use it in `tokenizers`, including handling the RoBERTa special tokens – of course, you’ll also be able to use it directly from `transformers`. ```python from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "./models/EsperBERTo-small/vocab.json", "./models/EsperBERTo-small/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) print( tokenizer.encode("Mi estas Julien.") ) # Encoding(num_tokens=7, ...) # tokens: ['<s>', 'Mi', 'Ġestas', 'ĠJuli', 'en', '.', '</s>'] ``` ## 3. Train a language model from scratch **Update:** The associated Colab notebook uses our new [`Trainer`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) directly, instead of through a script. Feel free to pick the approach you like best. We will now train our language model using the [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py) script from `transformers` (newly renamed from `run_lm_finetuning.py` as it now supports training from scratch more seamlessly). Just remember to leave `--model_name_or_path` to `None` to train from scratch vs. from an existing model or checkpoint. > We’ll train a RoBERTa-like model, which is a BERT-like with a couple of changes (check the [documentation](https://huggingface.co/transformers/model_doc/roberta.html) for more details). As the model is BERT-like, we’ll train it on a task of *Masked language modeling*, i.e. the predict how to fill arbitrary tokens that we randomly mask in the dataset. This is taken care of by the example script. 
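Because we are training from scratch, the script also needs a model configuration (the `config.json` it will instantiate the network from). Here is a minimal sketch of how such a config could be written for the "small" architecture described above (6 layers, 768 hidden size, 12 attention heads, 52k vocabulary) – the exact values are illustrative:

```python
from transformers import RobertaConfig

# Illustrative values matching the "small" setup described above;
# max_position_embeddings is 514 for RoBERTa-style models (512 + 2 special positions).
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
config.save_pretrained("./models/EsperBERTo-small")  # writes config.json next to the tokenizer files
```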
We just need to do two things: - implement a simple subclass of `Dataset` that loads data from our text files - Depending on your use case, you might not even need to write your own subclass of Dataset, if one of the provided examples (`TextDataset` and `LineByLineTextDataset`) works – but there are lots of custom tweaks that you might want to add based on what your corpus looks like. - Choose and experiment with different sets of hyperparameters. Here’s a simple version of our EsperantoDataset. ```python from torch.utils.data import Dataset class EsperantoDataset(Dataset): def __init__(self, evaluate: bool = False): tokenizer = ByteLevelBPETokenizer( "./models/EsperBERTo-small/vocab.json", "./models/EsperBERTo-small/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) # or use the RobertaTokenizer from `transformers` directly. self.examples = [] src_files = Path("./data/").glob("*-eval.txt") if evaluate else Path("./data/").glob("*-train.txt") for src_file in src_files: print("🔥", src_file) lines = src_file.read_text(encoding="utf-8").splitlines() self.examples += [x.ids for x in tokenizer.encode_batch(lines)] def __len__(self): return len(self.examples) def __getitem__(self, i): # We’ll pad at the batch level. return torch.tensor(self.examples[i]) ``` If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step. Here is one specific set of **hyper-parameters and arguments** we pass to the script: ``` --output_dir ./models/EsperBERTo-small-v1 --model_type roberta --mlm --config_name ./models/EsperBERTo-small --tokenizer_name ./models/EsperBERTo-small --do_train --do_eval --learning_rate 1e-4 --num_train_epochs 5 --save_total_limit 2 --save_steps 2000 --per_gpu_train_batch_size 16 --evaluate_during_training --seed 42 ``` As usual, pick the largest batch size you can fit on your GPU(s). **🔥🔥🔥 Let’s start training!! 🔥🔥🔥** Here you can check our Tensorboard for [one particular set of hyper-parameters](https://tensorboard.dev/experiment/8AjtzdgPR1qG6bDIe1eKfw/#scalars): [![tb](assets/01_how-to-train/tensorboard.png)](https://tensorboard.dev/experiment/8AjtzdgPR1qG6bDIe1eKfw/#scalars) > Our example scripts log into the Tensorboard format by default, under `runs/`. Then to view your board just run `tensorboard dev upload --logdir runs` – this will set up [tensorboard.dev](https://tensorboard.dev/), a Google-managed hosted version that lets you share your ML experiment with anyone. ## 4. Check that the LM actually trained Aside from looking at the training and eval losses going down, the easiest way to check whether our language model is learning anything interesting is via the `FillMaskPipeline`. Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, `<mask>`) and return a list of the most probable filled sequences, with their probabilities. ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./models/EsperBERTo-small", tokenizer="./models/EsperBERTo-small" ) # The sun <mask>. 
# => result = fill_mask("La suno <mask>.") # {'score': 0.2526160776615143, 'sequence': '<s> La suno brilis.</s>', 'token': 10820} # {'score': 0.0999930202960968, 'sequence': '<s> La suno lumis.</s>', 'token': 23833} # {'score': 0.04382849484682083, 'sequence': '<s> La suno brilas.</s>', 'token': 15006} # {'score': 0.026011141017079353, 'sequence': '<s> La suno falas.</s>', 'token': 7392} # {'score': 0.016859788447618484, 'sequence': '<s> La suno pasis.</s>', 'token': 4552} ``` Ok, simple syntax/grammar works. Let’s try a slightly more interesting prompt: ```python fill_mask("Jen la komenco de bela <mask>.") # This is the beginning of a beautiful <mask>. # => # { # 'score':0.06502299010753632 # 'sequence':'<s> Jen la komenco de bela vivo.</s>' # 'token':1099 # } # { # 'score':0.0421181358397007 # 'sequence':'<s> Jen la komenco de bela vespero.</s>' # 'token':5100 # } # { # 'score':0.024884626269340515 # 'sequence':'<s> Jen la komenco de bela laboro.</s>' # 'token':1570 # } # { # 'score':0.02324388362467289 # 'sequence':'<s> Jen la komenco de bela tago.</s>' # 'token':1688 # } # { # 'score':0.020378097891807556 # 'sequence':'<s> Jen la komenco de bela festo.</s>' # 'token':4580 # } ``` > “**Jen la komenco de bela tago**”, indeed! With more complex prompts, you can probe whether your language model captured more semantic knowledge or even some sort of (statistical) common sense reasoning. ## 5. Fine-tune your LM on a downstream task We now can fine-tune our new Esperanto language model on a downstream task of **Part-of-speech tagging.** As mentioned before, Esperanto is a highly regular language where word endings typically condition the grammatical part of speech. Using a dataset of annotated Esperanto POS tags formatted in the CoNLL-2003 format (see example below), we can use the [`run_ner.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py) script from `transformers`. > POS tagging is a token classification task just as NER so we can just use the exact same script. ![conll](assets/01_how-to-train/conll-2003.png) Again, here’s the hosted **[Tensorboard](https://tensorboard.dev/experiment/lOZn2wOWQo6ixpwtWyyDfQ/#scalars)** for this fine-tuning. We train for 3 epochs using a batch size of 64 per GPU. Training and eval losses converge to small residual values as the task is rather easy (the language is regular) – it’s still fun to be able to train it end-to-end 😃. This time, let’s use a `TokenClassificationPipeline`: ```python from transformers import TokenClassificationPipeline, pipeline MODEL_PATH = "./models/EsperBERTo-small-pos/" nlp = pipeline( "ner", model=MODEL_PATH, tokenizer=MODEL_PATH, ) # or instantiate a TokenClassificationPipeline directly. nlp("Mi estas viro kej estas tago varma.") # {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'} # {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'} # {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'} # {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'} # {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'} ``` **Looks like it worked! 🔥** <small>For a more challenging dataset for NER, <a href="https://github.com/stefan-it">@stefan-it</a> recommended that we could train on the silver standard dataset from WikiANN</small> ## 6. 
Share your model 🎉 Finally, when you have a nice model, please think about sharing it with the community: - upload your model using the CLI: `transformers-cli upload` - write a README.md model card and add it to the repository under `model_cards/`. Your model card should ideally include: - a model description, - training params (dataset, preprocessing, hyperparameters), - evaluation results, - intended uses & limitations - whatever else is helpful! 🤓 ### **TADA!** ➡️ Your model has a page on https://huggingface.co/models and everyone can load it using `AutoModel.from_pretrained("username/model_name")`. [![tb](assets/01_how-to-train/model_page.png)](https://huggingface.co/julien-c/EsperBERTo-small) If you want to take a look at models in different languages, check https://huggingface.co/models [![all models](https://huggingface.co/front/thumbnails/models.png)](https://huggingface.co/models) ## Thank you! ![](assets/01_how-to-train/EsperBERTo-thumbnail-v2.png)

How to generate text: using different decoding methods for language generation with Transformers
patrickvonplaten
March 2020
how-to-generate
guide, nlp
https://huggingface.co/blog/how-to-generate
# How to generate text: using different decoding methods for language generation with Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **Note**: Edited on July 2023 with up-to-date references and examples. ## Introduction In recent years, there has been an increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, including OpenAI's [ChatGPT](https://openai.com/blog/chatgpt) and Meta's [LLaMA](https://ai.meta.com/blog/large-language-model-llama-meta-ai/). The results on conditioned open-ended language generation are impressive, having shown to [generalize to new tasks](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html), [handle code](https://huggingface.co/blog/starcoder), or [take non-text data as input](https://openai.com/research/whisper). Besides the improved transformer architecture and massive unsupervised training data, **better decoding methods** have also played an important role. This blog post gives a brief overview of different decoding strategies and more importantly shows how *you* can implement them with very little effort using the popular `transformers` library\! All of the following functionalities can be used for **auto-regressive** language generation ([here](http://jalammar.github.io/illustrated-gpt2/) a refresher). In short, *auto-regressive* language generation is based on the assumption that the probability distribution of a word sequence can be decomposed into the product of conditional next word distributions: $$ P(w_{1:T} | W_0 ) = \prod_{t=1}^T P(w_{t} | w_{1: t-1}, W_0) \text{ ,with } w_{1: 0} = \emptyset, $$ and \\(W_0\\) being the initial *context* word sequence. The length \\(T\\) of the word sequence is usually determined *on-the-fly* and corresponds to the timestep \\(t=T\\) the EOS token is generated from \\(P(w_{t} | w_{1: t-1}, W_{0})\\). We will give a tour of the currently most prominent decoding methods, mainly *Greedy search*, *Beam search*, and *Sampling*. Let's quickly install transformers and load the model. We will use GPT2 in PyTorch for demonstration, but the API is 1-to-1 the same for TensorFlow and JAX. ``` python !pip install -q transformers ``` ``` python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch_device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained("gpt2") # add the EOS token as PAD token to avoid warnings model = AutoModelForCausalLM.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id).to(torch_device) ``` ## Greedy Search Greedy search is the simplest decoding method. It selects the word with the highest probability as its next word: \\(w_t = argmax_{w}P(w | w_{1:t-1})\\) at each timestep \\(t\\). The following sketch shows greedy search. <img src="/blog/assets/02_how-to-generate/greedy_search.png" alt="greedy search" style="margin: auto; display: block;"> Starting from the word \\(\text{"The"},\\) the algorithm greedily chooses the next word of highest probability \\(\text{"nice"}\\) and so on, so that the final generated word sequence is \\((\text{"The"}, \text{"nice"}, \text{"woman"})\\) having an overall probability of \\(0.5 \times 0.4 = 0.2\\) . 
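To make the argmax rule concrete, here is a minimal hand-rolled sketch of a single greedy step with the GPT2 model loaded above – this is the loop that `generate` runs for you, not the actual library implementation:

``` python
# One manual greedy step: feed the context, take the argmax over the
# next-token logits, and append the chosen token to the context.
input_ids = tokenizer("I enjoy walking with my cute dog", return_tensors="pt").input_ids.to(torch_device)

with torch.no_grad():
    next_token_logits = model(input_ids).logits[:, -1, :]   # logits for the next position

next_token = torch.argmax(next_token_logits, dim=-1)        # w_t = argmax_w P(w | w_{1:t-1})
input_ids = torch.cat([input_ids, next_token[:, None]], dim=-1)
print(tokenizer.decode(input_ids[0]))
```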
In the following we will generate word sequences using GPT2 on the context \\((\text{"I"}, \text{"enjoy"}, \text{"walking"}, \text{"with"}, \text{"my"}, \text{"cute"}, \text{"dog"})\\). Let's see how greedy search can be used in `transformers`: ``` python # encode context the generation is conditioned on model_inputs = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').to(torch_device) # generate 40 new tokens greedy_output = model.generate(**model_inputs, max_new_tokens=40) print("Output:\n" + 100 * '-') print(tokenizer.decode(greedy_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog. I'm not sure ``` Alright\! We have generated our first short text with GPT2 😊. The generated words following the context are reasonable, but the model quickly starts repeating itself\! This is a very common problem in language generation in general and seems to be even more so in greedy and beam search - check out [Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424) and [Shao et al., 2017](https://arxiv.org/abs/1701.03185). The major drawback of greedy search though is that it misses high probability words hidden behind a low probability word as can be seen in our sketch above: The word \\(\text{"has"}\\) with its high conditional probability of \\(0.9\\) is hidden behind the word \\(\text{"dog"}\\), which has only the second-highest conditional probability, so that greedy search misses the word sequence \\(\text{"The"}, \text{"dog"}, \text{"has"}\\) . Thankfully, we have beam search to alleviate this problem\! ## Beam search Beam search reduces the risk of missing hidden high probability word sequences by keeping the most likely `num_beams` of hypotheses at each time step and eventually choosing the hypothesis that has the overall highest probability. Let's illustrate with `num_beams=2`: <img src="/blog/assets/02_how-to-generate/beam_search.png" alt="beam search" style="margin: auto; display: block;"> At time step 1, besides the most likely hypothesis \\((\text{"The"}, \text{"nice"})\\), beam search also keeps track of the second most likely one \\((\text{"The"}, \text{"dog"})\\). At time step 2, beam search finds that the word sequence \\((\text{"The"}, \text{"dog"}, \text{"has"})\\), has with \\(0.36\\) a higher probability than \\((\text{"The"}, \text{"nice"}, \text{"woman"})\\), which has \\(0.2\\) . Great, it has found the most likely word sequence in our toy example\! Beam search will always find an output sequence with higher probability than greedy search, but is not guaranteed to find the most likely output. Let's see how beam search can be used in `transformers`. We set `num_beams > 1` and `early_stopping=True` so that generation is finished when all beam hypotheses reached the EOS token. ``` python # activate beam search and early_stopping beam_output = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll ever be able to walk with him again. 
I'm not sure ``` While the result is arguably more fluent, the output still includes repetitions of the same word sequences. One of the available remedies is to introduce *n-grams* (*a.k.a* word sequences of n words) penalties as introduced by [Paulus et al. (2017)](https://arxiv.org/abs/1705.04304) and [Klein et al. (2017)](https://arxiv.org/abs/1701.02810). The most common *n-grams* penalty makes sure that no *n-gram* appears twice by manually setting the probability of next words that could create an already seen *n-gram* to 0. Let's try it out by setting `no_repeat_ngram_size=2` so that no *2-gram* appears twice: ``` python # set no_repeat_ngram_size to 2 beam_output = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, no_repeat_ngram_size=2, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time for me to ``` Nice, that looks much better\! We can see that the repetition does not appear anymore. Nevertheless, *n-gram* penalties have to be used with care. An article generated about the city *New York* should not use a *2-gram* penalty or otherwise, the name of the city would only appear once in the whole text\! Another important feature about beam search is that we can compare the top beams after generation and choose the generated beam that fits our purpose best. In `transformers`, we simply set the parameter `num_return_sequences` to the number of highest scoring beams that should be returned. Make sure though that `num_return_sequences <= num_beams`\! ``` python # set return_num_sequences > 1 beam_outputs = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) # now we have 3 output sequences print("Output:\n" + 100 * '-') for i, beam_output in enumerate(beam_outputs): print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True))) ``` ``` Output: ---------------------------------------------------------------------------------------------------- 0: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time for me to 1: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with her again. I've been thinking about this for a while now, and I think it's time for me to 2: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's a good idea to 3: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time to take a 4: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's a good idea. ``` As can be seen, the five beam hypotheses are only marginally different to each other - which should not be too surprising when using only 5 beams. 
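Beyond eyeballing the text, you can also compare the returned beams by their scores. A small sketch of one way to do this – the `return_dict_in_generate`/`output_scores` flags and the `sequences_scores` field assume a reasonably recent `transformers` version:

``` python
# return_dict_in_generate=True gives a structured output; for beam search it
# contains `sequences_scores`, the final (summed, length-penalized) log-probability of each beam.
beam_outputs = model.generate(
    **model_inputs,
    max_new_tokens=40,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    early_stopping=True,
    return_dict_in_generate=True,
    output_scores=True,
)

for seq, score in zip(beam_outputs.sequences, beam_outputs.sequences_scores):
    print(f"{score.item():.3f}  {tokenizer.decode(seq, skip_special_tokens=True)}")
```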
In open-ended generation, a couple of reasons have been brought forward why beam search might not be the best possible option: - Beam search can work very well in tasks where the length of the desired generation is more or less predictable as in machine translation or summarization - see [Murray et al. (2018)](https://arxiv.org/abs/1808.10006) and [Yang et al. (2018)](https://arxiv.org/abs/1808.09582). But this is not the case for open-ended generation where the desired output length can vary greatly, e.g. dialog and story generation. - We have seen that beam search heavily suffers from repetitive generation. This is especially hard to control with *n-gram*- or other penalties in story generation since finding a good trade-off between inhibiting repetition and repeating cycles of identical *n-grams* requires a lot of finetuning. - As argued in [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751), high quality human language does not follow a distribution of high probability next words. In other words, as humans, we want generated text to surprise us and not to be boring/predictable. The authors show this nicely by plotting the probability, a model would give to human text vs. what beam search does. ![alt text](https://blog.fastforwardlabs.com/images/2019/05/Screen_Shot_2019_05_08_at_3_06_36_PM-1557342561886.png) So let's stop being boring and introduce some randomness 🤪. ## Sampling In its most basic form, sampling means randomly picking the next word \\(w_t\\) according to its conditional probability distribution: $$ w_t \sim P(w|w_{1:t-1}) $$ Taking the example from above, the following graphic visualizes language generation when sampling. <img src="/blog/assets/02_how-to-generate/sampling_search.png" alt="sampling search" style="margin: auto; display: block;"> It becomes obvious that language generation using sampling is not *deterministic* anymore. The word \\((\text{"car"})\\) is sampled from the conditioned probability distribution \\(P(w | \text{"The"})\\), followed by sampling \\((\text{"drives"})\\) from \\(P(w | \text{"The"}, \text{"car"})\\) . In `transformers`, we set `do_sample=True` and deactivate *Top-K* sampling (more on this later) via `top_k=0`. In the following, we will fix the random seed for illustration purposes. Feel free to change the `set_seed` argument to obtain different results, or to remove it for non-determinism. ``` python # set seed to reproduce results. Feel free to change the seed though to get different results from transformers import set_seed set_seed(42) # activate sampling and deactivate top_k by setting top_k sampling to 0 sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=0 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be wondered for a mere minute or so at this point). ``` Interesting\! The text seems alright - but when taking a closer look, it is not very coherent and doesn't sound like it was written by a human. That is the big problem when sampling word sequences: The models often generate incoherent gibberish, *cf.* [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751). 
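Before looking at remedies, it may help to see what `do_sample=True` boils down to at a single step: instead of taking the argmax as in greedy search, the next token is drawn from the full conditional distribution. A minimal hand-rolled sketch (not the exact `generate` internals):

``` python
import torch.nn.functional as F

# One manual sampling step on the same context as above.
with torch.no_grad():
    next_token_logits = model(**model_inputs).logits[:, -1, :]

probs = F.softmax(next_token_logits, dim=-1)           # P(w | w_{1:t-1})
next_token = torch.multinomial(probs, num_samples=1)   # w_t ~ P(w | w_{1:t-1})
print(tokenizer.decode(next_token[0]))
```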
A trick is to make the distribution \\(P(w|w_{1:t-1})\\) sharper (increasing the likelihood of high probability words and decreasing the likelihood of low probability words) by lowering the so-called `temperature` of the [softmax](https://en.wikipedia.org/wiki/Softmax_function#Smooth_arg_max). An illustration of applying temperature to our example from above could look as follows. <img src="/blog/assets/02_how-to-generate/sampling_search_with_temp.png" alt="sampling temp search" style="margin: auto; display: block;"> The conditional next word distribution of step \\(t=1\\) becomes much sharper leaving almost no chance for word \\((\text{"car"})\\) to be selected. Let's see how we can cool down the distribution in the library by setting `temperature=0.6`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # use temperature to decrease the sensitivity to low probability candidates sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=0, temperature=0.6, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I don't like to chew on it. I like to eat it and not chew on it. I like to be able to walk with my dog." So how did you decide ``` OK. There are less weird n-grams and the output is a bit more coherent now\! While applying temperature can make a distribution less random, in its limit, when setting `temperature` \\(\to 0\\), temperature scaled sampling becomes equal to greedy decoding and will suffer from the same problems as before. ### Top-K Sampling [Fan et. al (2018)](https://arxiv.org/pdf/1805.04833.pdf) introduced a simple, but very powerful sampling scheme, called ***Top-K*** sampling. In *Top-K* sampling, the *K* most likely next words are filtered and the probability mass is redistributed among only those *K* next words. GPT2 adopted this sampling scheme, which was one of the reasons for its success in story generation. We extend the range of words used for both sampling steps in the example above from 3 words to 10 words to better illustrate *Top-K* sampling. <img src="/blog/assets/02_how-to-generate/top_k_sampling.png" alt="Top K sampling" style="margin: auto; display: block;"> Having set \\(K = 6\\), in both sampling steps we limit our sampling pool to 6 words. While the 6 most likely words, defined as \\(V_{\text{top-K}}\\) encompass only *ca.* two-thirds of the whole probability mass in the first step, it includes almost all of the probability mass in the second step. Nevertheless, we see that it successfully eliminates the rather weird candidates \\((\text{``not"}, \text{``the"}, \text{``small"}, \text{``told"})\\) in the second sampling step. Let's see how *Top-K* can be used in the library by setting `top_k=50`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_k to 50 sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=50 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. 
(One reason I asked this for a few months back is that I had a ``` Not bad at all\! The text is arguably the most *human-sounding* text so far. One concern though with *Top-K* sampling is that it does not dynamically adapt the number of words that are filtered from the next word probability distribution \\(P(w|w_{1:t-1})\\). This can be problematic as some words might be sampled from a very sharp distribution (distribution on the right in the graph above), whereas others from a much more flat distribution (distribution on the left in the graph above). In step \\(t=1\\), *Top-K* eliminates the possibility to sample \\((\text{"people"}, \text{"big"}, \text{"house"}, \text{"cat"})\\), which seem like reasonable candidates. On the other hand, in step \\(t=2\\) the method includes the arguably ill-fitted words \\((\text{"down"}, \text{"a"})\\) in the sample pool of words. Thus, limiting the sample pool to a fixed size *K* could endanger the model to produce gibberish for sharp distributions and limit the model's creativity for flat distribution. This intuition led [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751) to create ***Top-p***- or ***nucleus***-sampling. ### Top-p (nucleus) sampling Instead of sampling only from the most likely *K* words, in *Top-p* sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability *p*. The probability mass is then redistributed among this set of words. This way, the size of the set of words (*a.k.a* the number of words in the set) can dynamically increase and decrease according to the next word's probability distribution. Ok, that was very wordy, let's visualize. <img src="/blog/assets/02_how-to-generate/top_p_sampling.png" alt="Top p sampling" style="margin: auto; display: block;"> Having set \\(p=0.92\\), *Top-p* sampling picks the *minimum* number of words to exceed together \\(p=92\%\\) of the probability mass, defined as \\(V_{\text{top-p}}\\). In the first example, this included the 9 most likely words, whereas it only has to pick the top 3 words in the second example to exceed 92%. Quite simple actually\! It can be seen that it keeps a wide range of words where the next word is arguably less predictable, *e.g.* \\(P(w | \text{"The''})\\), and only a few words when the next word seems more predictable, *e.g.* \\(P(w | \text{"The"}, \text{"car"})\\). Alright, time to check it out in `transformers`\! We activate *Top-p* sampling by setting `0 < top_p < 1`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_k to 50 sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_p=0.92, top_k=0 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be my yearning for such a spacious screen on my desk ``` Great, that sounds like it could have been written by a human. Well, maybe not quite yet. While in theory, *Top-p* seems more elegant than *Top-K*, both methods work well in practice. *Top-p* can also be used in combination with *Top-K*, which can avoid very low ranked words while allowing for some dynamic selection. 
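To make the selection of the nucleus concrete, here is a toy sketch of the filtering step on a hand-made distribution; `generate` performs the equivalent operation on the model's real next-token probabilities:

``` python
import torch

# Toy next-token distribution over a tiny vocabulary.
probs = torch.tensor([0.40, 0.25, 0.15, 0.10, 0.05, 0.03, 0.02])
top_p = 0.92

sorted_probs, sorted_idx = torch.sort(probs, descending=True)
cumulative = torch.cumsum(sorted_probs, dim=-1)

# Keep the smallest set of tokens whose cumulative probability exceeds top_p
# (the token that pushes the sum past the threshold is still included).
cutoff = int((cumulative < top_p).sum()) + 1
nucleus = sorted_idx[:cutoff]

# Renormalize and sample only from the nucleus.
nucleus_probs = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
next_token = nucleus[torch.multinomial(nucleus_probs, num_samples=1)]
print(nucleus.tolist(), next_token.item())
```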
Finally, to get multiple independently sampled outputs, we can *again* set the parameter `num_return_sequences > 1`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3 sample_outputs = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=3, ) print("Output:\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) ``` ``` Output: ---------------------------------------------------------------------------------------------------- 0: I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. When I finally looked at this for a few moments, I immediately thought, " 1: I enjoy walking with my cute dog. The only time I felt like walking was when I was working, so it was awesome for me. I didn't want to walk for days. I am really curious how she can walk with me 2: I enjoy walking with my cute dog (Chama-I-I-I-I-I), and I really enjoy running. I play in a little game I play with my brother in which I take pictures of our houses. ``` Cool, now you should have all the tools to let your model write your stories with `transformers`! ## Conclusion As *ad-hoc* decoding methods, *top-p* and *top-K* sampling seem to produce more fluent text than traditional *greedy* - and *beam* search on open-ended language generation. There is evidence that the apparent flaws of *greedy* and *beam* search - mainly generating repetitive word sequences - are caused by the model (especially the way the model is trained), rather than the decoding method, *cf.* [Welleck et al. (2019)](https://arxiv.org/pdf/1908.04319.pdf). Also, as demonstrated in [Welleck et al. (2020)](https://arxiv.org/abs/2002.02492), it looks as *top-K* and *top-p* sampling also suffer from generating repetitive word sequences. In [Welleck et al. (2019)](https://arxiv.org/pdf/1908.04319.pdf), the authors show that according to human evaluations, *beam* search can generate more fluent text than *Top-p* sampling, when adapting the model's training objective. Open-ended language generation is a rapidly evolving field of research and as it is often the case there is no one-size-fits-all method here, so one has to see what works best in one's specific use case. Fortunately, *you* can try out all the different decoding methods in `transfomers` 🤗 -- you can have an overview of the available methods [here](https://huggingface.co/docs/transformers/generation_strategies#decoding-strategies). Thanks to everybody, who has contributed to the blog post: Alexander Rush, Julien Chaumand, Thomas Wolf, Victor Sanh, Sam Shleifer, Clément Delangue, Yacine Jernite, Oliver Åstrand and John de Wasseige. ## Appendix `generate` has evolved into a highly composable method, with flags to manipulate the resulting text in many directions that were not covered in this blog post. 
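One way to keep all of these flags manageable is to bundle them into a reusable `GenerationConfig` object instead of passing them individually on every call – a small sketch, assuming a recent `transformers` version:

``` python
from transformers import GenerationConfig

# Bundle the decoding settings used above into one reusable object.
generation_config = GenerationConfig(
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

sample_output = model.generate(**model_inputs, generation_config=generation_config)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```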
Here are a few helpful pages to guide you: - [How to parameterize `generate`](https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration) - [How to stream the output](https://huggingface.co/docs/transformers/generation_strategies#streaming) - [Full list of decoding options](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig) - [`generate` API reference](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.generate) - [LLM score leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) If you find that navigating our docs is challenging and you can't easily find what you're looking for, drop us a message in [this GitHub issue](https://github.com/huggingface/transformers/issues/24575). Your feedback is critical to set our future direction! 🤗
The Reformer - Pushing the limits of language modeling
patrickvonplaten
July 3, 2020
reformer
research, nlp
https://huggingface.co/blog/reformer
# The Reformer - Pushing the limits of language modeling <a href="https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## How the Reformer uses less than 8GB of RAM to train on sequences of half a million tokens The Reformer model as introduced by [Kitaev, Kaiser et al. (2020)](https://arxiv.org/pdf/2001.04451.pdf) is one of the most memory-efficient transformer models for long sequence modeling as of today. Recently, long sequence modeling has experienced a surge of interest as can be seen by the many submissions from this year alone - [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150), [Roy et al. (2020)](https://arxiv.org/abs/2003.05997), [Tay et al.](https://arxiv.org/abs/2002.11296), [Wang et al.](https://arxiv.org/abs/2006.04768) to name a few. The motivation behind long sequence modeling is that many tasks in NLP, *e.g.* summarization, question answering, require the model to process longer input sequences than models, such as BERT, are able to handle. In tasks that require the model to process a large input sequence, long sequence models do not have to cut the input sequence to avoid memory overflow and thus have been shown to outperform standard "BERT"-like models *cf.* [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150). The Reformer pushes the limit of longe sequence modeling by its ability to process up to half a million tokens at once as shown in this [demo](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb). As a comparison, a conventional `bert-base-uncased` model limits the input length to only 512 tokens. In Reformer, each part of the standard transformer architecture is re-engineered to optimize for minimal memory requirement without a significant drop in performance. The memory improvements can be attributed to **4** features which the Reformer authors introduced to the transformer world: 1. **Reformer Self-Attention Layer** - *How to efficiently implement self-attention without being restricted to a local context?* 2. **Chunked Feed Forward Layers** - *How to get a better time-memory trade-off for large feed forward layers?* 3. **Reversible Residual Layers** - *How to drastically reduce memory consumption in training by a smart residual architecture?* 4. **Axial Positional Encodings** - *How to make positional encodings usable for extremely large input sequences?* The goal of this blog post is to give the reader an **in-depth** understanding of each of the four Reformer features mentioned above. While the explanations are focussed on the Reformer, the reader should get a better intuition under which circumstances each of the four features can be effective for other transformer models as well. The four sections are only loosely connected, so they can very well be read individually. Reformer is part of the 🤗Transformers library. For all users of the Reformer, it is advised to go through this very detailed blog post to better understand how the model works and how to correctly set its configuration. All equations are accompanied by their equivalent name for the Reformer config, *e.g.* `config.<param_name>`, so that the reader can quickly relate to the official docs and configuration file. **Note**: *Axial Positional Encodings* are not explained in the official Reformer paper, but are extensively used in the official codebase. 
This blog post gives the first in-depth explanation of Axial Positional Encodings. ## 1. Reformer Self-Attention Layer Reformer uses two kinds of special self-attention layers: *local* self-attention layers and Locality Sensitive Hashing (*LSH*) self-attention layers. To better introduce these new self-attention layers, we will briefly recap conventional self-attention as introduced in [Vaswani et al. 2017](https://arxiv.org/abs/1706.03762). This blog post uses the same notation and coloring as the popular blog post [The illustrated transformer](http://jalammar.github.io/illustrated-transformer/), so the reader is strongly advised to read this blog first. **Important**: While Reformer was originally introduced for causal self-attention, it can very well be used for bi-directional self-attention as well. In this post, Reformer's self-attention is presented for *bidirectional* self-attention. ### Recap Global Self-Attention The core of every Transformer model is the **self-attention** layer. To recap the conventional self-attention layer, which we refer to here as the **global self-attention** layer, let us assume we apply a transformer layer on the embedding vector sequence \\(\mathbf{X} = \mathbf{x}_1, \ldots, \mathbf{x}_n\\) where each vector \\(\mathbf{x}_{i}\\) is of size `config.hidden_size`, *i.e.* \\(d_h\\). In short, a global self-attention layer projects \\(\mathbf{X}\\) to the query, key and value matrices \\(\mathbf{Q}, \mathbf{K}, \mathbf{V}\\) and computes the output \\(\mathbf{Z}\\) using the *softmax* operation as follows: \\(\mathbf{Z} = \text{SelfAttn}(\mathbf{X}) = \text{softmax}(\mathbf{Q}\mathbf{K}^T) \mathbf{V}\\) with \\(\mathbf{Z}\\) being of dimension \\(d_h \times n\\) (leaving out the key normalization factor and self-attention weights \\(\mathbf{W}^{O}\\) for simplicity). For more detail on the complete transformer operation, see [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/). Visually, we can illustrate this operation as follows for \\(n=16, d_h=3\\): ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/conventional_attention.png) Note that for all visualizations `batch_size` and `config.num_attention_heads` is assumed to be 1. Some vectors, *e.g.* \\(\mathbf{x_3}\\) and its corresponding output vector \\(\mathbf{z_3}\\) are marked so that *LSH self-attention* can later be better explained. The presented logic can effortlessly be extended for multi-head self-attention (`config.num_attention_{h}eads` > 1). The reader is advised to read [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/) as a reference for multi-head self-attention. Important to remember is that for each output vector \\(\mathbf{z}_{i}\\), the whole input sequence \\(\mathbf{X}\\) is processed. The tensor of the inner dot-product \\(\mathbf{Q}\mathbf{K}^T\\) has an asymptotic memory complexity of \\(\mathcal{O}(n^2)\\) which usually represents the memory bottleneck in a transformer model. This is also the reason why `bert-base-cased` has a `config.max_position_embedding_size` of only 512. ### Local Self-Attention **Local self-attention** is the obvious solution to reducing the \\(\mathcal{O}(n^2)\\) memory bottleneck, allowing us to model longer sequences with a reduced computational cost. 
In local self-attention the input \\( \mathbf{X} = \mathbf{X}_{1:n} = \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} \\) is cut into \\(n_{c}\\) chunks: \\( \mathbf{X} = \left[\mathbf{X}_{1:l_{c}}, \ldots, \mathbf{X}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}\right] \\) each of length `config.local_chunk_length`, *i.e.* \\(l_{c}\\), and subsequently global self-attention is applied on each chunk separately. Let's take our input sequence for \\(n=16, d_h=3\\) again for visualization: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/input.png) Assuming \\(l_{c} = 4, n_{c} = 4\\), chunked attention can be illustrated as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_attention_1.png) As can be seen, the attention operation is applied for each chunk \\(\mathbf{X}_{1:4}, \mathbf{X}_{5:8}, \mathbf{X}_{9:12}, \mathbf{X}_{13:16}\\) individually. The first drawback of this architecture becomes obvious: Some input vectors have no access to their immediate context, *e.g.* \\(\mathbf{x}_9\\) has no access to \\(\mathbf{x}_{8}\\) and vice-versa in our example. This is problematic because these tokens are not able to learn word representations that take their immediate context into account. A simple remedy is to augment each chunk with `config.local_num_chunks_before`, *i.e.* \\(n_{p}\\), chunks and `config.local_num_chunks_after`, *i.e.* \\(n_{a}\\), so that every input vector has at least access to \\(n_{p}\\) previous input vectors and \\(n_{a}\\) following input vectors. This can also be understood as chunking with overlap whereas \\(n_{p}\\) and \\(n_{a}\\) define the amount of overlap each chunk has with all previous chunks and following chunks. We denote this extended local self-attention as follows: $$\mathbf{Z}^{\text{loc}} = \left[\mathbf{Z}_{1:l_{c}}^{\text{loc}}, \ldots, \mathbf{Z}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}^{\text{loc}}\right], $$ with $$\mathbf{Z}_{l_{c} * (i - 1) + 1 : l_{c} * i}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{l_{c} * (i - 1 - n_{p}) + 1: l_{c} * (i + n_{a})})\left[n_{p} * l_{c}: -n_{a} * l_{c}\right], \forall i \in \{1, \ldots, n_{c} \}$$ Okay, this formula looks quite complicated. Let's make it easier. In Reformer's self-attention layers \\(n_{a}\\) is usually set to 0 and \\(n_{p}\\) is set to 1, so let's write down the formula again for \\(i = 1\\): $$\mathbf{Z}_{1:l_{c}}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{-l_{c} + 1: l_{c}})\left[l_{c}:\right]$$ We notice that we have a circular relationship so that the first segment can attend the last segment as well. Let's illustrate this slightly enhanced local attention again. First, we apply self-attention within each windowed segment and keep only the central output segment. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_2.png) Finally, the relevant output is concatenated to \\(\mathbf{Z}^{\text{loc}}\\) and looks as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_3.png) Note that local self-attention is implemented efficiently way so that no output is computed and subsequently "thrown-out" as shown here for illustration purposes by the red cross. 
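To make the chunking idea concrete, here is a compact, single-head sketch in plain PyTorch – it omits the query/key/value projections and is not the actual Reformer implementation, but it shows the chunk-plus-previous-chunk (\\( n_{p}=1, n_{a}=0 \\)) attention pattern, including the circular wrap-around of the first chunk:

```python
import torch
import torch.nn.functional as F

def local_self_attention(x, chunk_len, num_chunks_before=1):
    """Toy chunked self-attention: each chunk attends to itself and to
    `num_chunks_before` previous chunks (wrapping around at the start)."""
    seq_len, d = x.shape                      # e.g. n = 16, d_h = 3
    n_chunks = seq_len // chunk_len
    chunks = x.view(n_chunks, chunk_len, d)

    outputs = []
    for i in range(n_chunks):
        # previous chunk(s) + current chunk; the negative index wraps to the last chunk
        idx = [(i - j) % n_chunks for j in range(num_chunks_before, 0, -1)] + [i]
        window = torch.cat([chunks[j] for j in idx], dim=0)   # ((n_p + 1) * l_c, d)
        # queries come only from the current chunk, which is equivalent to computing
        # attention over the whole window and keeping only the central output segment
        scores = chunks[i] @ window.T / d ** 0.5              # (l_c, (n_p + 1) * l_c)
        outputs.append(F.softmax(scores, dim=-1) @ window)    # (l_c, d)
    return torch.cat(outputs, dim=0)                          # (seq_len, d)

z_loc = local_self_attention(torch.randn(16, 3), chunk_len=4)
print(z_loc.shape)  # torch.Size([16, 3])
```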
It's important to note here that extending the input vectors for each chunked self-attention function allows *each* single output vector \\( \mathbf{z}_{i} \\) of this self-attention function to learn better vector representations. E.g. each of the output vectors \\( \mathbf{z}_{5}^{\text{loc}}, \mathbf{z}_{6}^{\text{loc}}, \mathbf{z}_{7}^{\text{loc}}, \mathbf{z}_{8}^{\text{loc}} \\) can take into account all of the input vectors \\( \mathbf{X}_{1:8} \\) to learn better representations. The gain in memory consumption is quite obvious: The \\( \mathcal{O}(n^2) \\) memory complexity is broken down for each segment individually so that the total asymptotic memory consumption is reduced to \\( \mathcal{O}(n_{c} * l_{c}^2) = \mathcal{O}(n * l_{c}) \\). This enhanced local self-attention is better than the vanilla local self-attention architecture but still has a major drawback in that every input vector can only attend to a local context of predefined size. For NLP tasks that do not require the transformer model to learn long-range dependencies between the input vectors, which include arguably *e.g.* speech recognition, named entity recognition and causal language modeling of short sentences, this might not be a big issue. Many NLP tasks do require the model to learn long-range dependencies, so that local self-attention could lead to significant performance degradation, *e.g.* * *Question-answering*: the model has to learn the relationship between the question tokens and relevant answer tokens which will most likely not be in the same local range * *Multiple-Choice*: the model has to compare multiple answer token segments to each other which are usually separated by a significant length * *Summarization*: the model has to learn the relationship between a long sequence of context tokens and a shorter sequence of summary tokens, whereas the relevant relationships between context and summary can most likely not be captured by local self-attention * etc... Local self-attention on its own is most likely not sufficient for the transformer model to learn the relevant relationships of input vectors (tokens) to each other. Therefore, Reformer additionally employs an efficient self-attention layer that approximates global self-attention, called *LSH self-attention*. ### LSH Self-Attention Alright, now that we have understood how local self-attention works, we can take a stab at the probably most innovative piece of Reformer: **Locality sensitive hashing (LSH) Self-Attention**. The premise of LSH self-attention is to be more or less as efficient as local self-attention while approximating global self-attention. LSH self-attention relies on the LSH algorithm as presented in [Andoni et al (2015)](https://arxiv.org/abs/1509.02897), hence its name. The idea behind LSH self-attention is based on the insight that if \\(n\\) is large, the softmax applied on the \\(\mathbf{Q}\mathbf{K}^T\\) attention dot-product weights only very few value vectors with values significantly larger than 0 for each query vector. Let's explain this in more detail. Let \\(\mathbf{k}_{i} \in \mathbf{K} = \left[\mathbf{k}_1, \ldots, \mathbf{k}_n \right]^T\\) and \\(\mathbf{q}_{i} \in \mathbf{Q} = \left[\mathbf{q}_1, \ldots, \mathbf{q}_n\right]^T\\) be the key and query vectors. For each \\(\mathbf{q}_{i}\\), the computation \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{K}^T)\\) can be approximated by using only those key vectors of \\(\mathbf{k}_{j}\\) that have a high cosine similarity with \\(\mathbf{q}_{i}\\). 
This owes to the fact that the softmax function puts exponentially more weight on larger input values. So far so good, the next problem is to efficiently find the vectors that have a high cosine similarity with \\(\mathbf{q}_{i}\\) for all \\(i\\). First, the authors of Reformer notice that sharing the query and key projections: \\(\mathbf{Q} = \mathbf{K}\\) does not impact the performance of a transformer model \\({}^1\\). Now, instead of having to find the key vectors of high cosine similarity for each query vector \\(q_i\\), only the cosine similarity of query vectors to each other has to be found. This is important because there is a transitive property to the query-query vector dot product approximation: If \\(\mathbf{q}_{i}\\) has a high cosine similarity to the query vectors \\(\mathbf{q}_{j}\\) and \\(\mathbf{q}_{k}\\), then \\(\mathbf{q}_{j}\\) also has a high cosine similarity to \\(\mathbf{q}_{k}\\). Therefore, the query vectors can be clustered into buckets, such that all query vectors that belong to the same bucket have a high cosine similarity to each other. Let's define \\(C_{m}\\) as the *mth* set of position indices, such that their corresponding query vectors are in the same bucket: \\(C_{m} = \{ i | \text{ s.t. } \mathbf{q}_{i} \in \text{mth cluster}\}\\) and `config.num_buckets`, *i.e.* \\(n_{b}\\), as the number of buckets. For each set of indices \\(C_{m}\\), the softmax function on the corresponding bucket of query vectors \\(\text{softmax}(\mathbf{Q}_{i \in C_{m}} \mathbf{Q}^T_{i \in C_{m}})\\) approximates the softmax function of global self-attention with shared query and key projections \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{Q}^T)\\) for all position indices \\(i\\) in \\(C_{m}\\). Second, the authors make use of the **LSH** algorithm to cluster the query vectors into a predefined number of buckets \\(n_{b}\\). The LSH algorithm is an ideal choice here because it is very efficient and is an approximation of the nearest neighbor algorithm for cosine similarity. Explaining the LSH scheme is out-of-scope for this notebook, so let's just keep in mind that for each vector \\(\mathbf{q}_{i}\\) the LSH algorithm attributes its position index \\(i\\) to one of \\(n_{b}\\) predefined buckets, *i.e.* \\(\text{LSH}(\mathbf{q}_{i}) = m\\) with \\(i \in \{1, \ldots, n\}\\) and \\(m \in \{1, \ldots, n_{b}\}\\). Visually, we can illustrate this as follows for our original example: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_hashing.png) Third, it can be noted that having clustered all query vectors in \\(n_{b}\\) buckets, the corresponding set of indices \\(C_{m}\\) can be used to permute the input vectors \\(\mathbf{x}_1, \ldots, \mathbf{x}_n\\) accordingly \\({}^2\\) so that shared query-key self-attention can be applied piecewise similar to local attention. Let's clarify with our example input vectors \\(\mathbf{X} = \mathbf{x}_1, ..., \mathbf{x}_{16}\\) and assume `config.num_buckets=4` and `config.lsh_chunk_length = 4`. Looking at the graphic above we can see that we have assigned each query vector \\( \mathbf{q}_1, \ldots, \mathbf{q}_{16} \\) to one of the clusters \\( \mathcal{C}_{1}, \mathcal{C}_{2}, \mathcal{C}_{3}, \mathcal{C}_{4} \\) . 
If we now sort the corresponding input vectors \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) accordingly, we get the following permuted input \\( \mathbf{X'} \\): ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_perm.png) The self-attention mechanism should be applied for each cluster individually so that for each cluster \\( \mathcal{C}_m \\) the corresponding output is calculated as follows: \\( \mathbf{Z}^{\text{LSH}}_{i \in \mathcal{C}_m} = \text{SelfAttn}_{\mathbf{Q}=\mathbf{K}}(\mathbf{X}_{i \in \mathcal{C}_m}) \\). Let's illustrate this again for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_cluster_attn.png) As can be seen, the self-attention function operates on different sizes of matrices, which is suboptimal for efficient batching in GPU and TPU. To overcome this problem, the permuted input can be chunked the same way it is done for local attention so that each chunk is of size `config.lsh_chunk_length`. By chunking the permuted input, a bucket might be split into two different chunks. To remedy this problem, in LSH self-attention each chunk attends to its previous chunk `config.lsh_num_chunks_before=1` in addition to itself, the same way local self-attention does (`config.lsh_num_chunks_after` is usually set to 0). This way, we can be assured that all vectors in a bucket attend to each other with a high probability \\({}^3\\). All in all for all chunks \\( k \in \{1, \ldots, n_{c}\} \\), LSH self-attention can be noted down as follows: $$ \mathbf{Z'}_{l_{c} * k + 1:l_{c} * (k + 1)}^{\text{LSH}} = \text{SelfAttn}_{\mathbf{Q} = \mathbf{K}}(\mathbf{X'}_{l_{c} * k + 1): l_{c} * (k + 1)})\left[l_{c}:\right] $$ with \\(\mathbf{X'}\\) and \\( \mathbf{Z'} \\) being the input and output vectors permuted according to the LSH algorithm. Enough complicated formulas, let's illustrate LSH self-attention. The permuted vectors \\(\mathbf{X'}\\) as shown above are chunked and shared query key self-attention is applied to each chunk. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_2.png) Finally, the output \\(\mathbf{Z'}^{\text{LSH}}\\) is reordered to its original permutation. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_3.png) One important feature to mention here as well is that the accuracy of LSH self-attention can be improved by running LSH self-attention `config.num_hashes`, e.g. \\(n_{h} \\) times in parallel, each with a different random LSH hash. By setting `config.num_hashes > 1`, for each output position \\( i \\), multiple output vectors \\( \mathbf{z}^{\text{LSH}, 1}_{i}, \ldots, \mathbf{z}^{\text{LSH}, n_{h}}_{i} \\) are computed and subsequently merged: \\( \mathbf{z}^{\text{LSH}}_{i} = \sum_k^{n_{h}} \mathbf{Z}^{\text{LSH}, k}_{i} * \text{weight}^k_i \\). The \\( \text{weight}^k_i \\) represents the importance of the output vectors \\( \mathbf{z}^{\text{LSH}, k}_{i} \\) of hashing round \\( k \\) in comparison to the other hashing rounds, and is exponentially proportional to the normalization term of their softmax computation. 
The intuition behind this is that if the corresponding query vector \\( \mathbf{q}_{i}^{k} \\) has a high cosine similarity with all other query vectors in its respective chunk, then the softmax normalization term of this chunk tends to be high, so that the corresponding output vector \\( \mathbf{z}^{\text{LSH}, k}_{i} \\) should be a better approximation to global attention and thus receive more weight than output vectors of hashing rounds with a lower softmax normalization term. For more detail see Appendix A of the [paper](https://arxiv.org/pdf/2001.04451.pdf). For our example, multi-round LSH self-attention can be illustrated as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_4.png) Great. That's it. Now we know how LSH self-attention works in Reformer. Regarding the memory complexity, we now have two terms that compete with each other to be the memory bottleneck: the dot-product: \\( \mathcal{O}(n_{h} * n_{c} * l_{c}^2) = \mathcal{O}(n * n_{h} * l_{c}) \\) and the required memory for LSH bucketing: \\( \mathcal{O}(n * n_{h} * \frac{n_{b}}{2}) \\) with \\( l_{c} \\) being the chunk length. Because for large \\( n \\), the number of buckets \\( \frac{n_{b}}{2} \\) grows much faster than the chunk length \\( l_{c} \\), the user can again factorize the number of buckets `config.num_buckets` as explained [here](https://huggingface.co/transformers/model_doc/reformer.html#lsh-self-attention). Let's recap quickly what we have gone through above: 1. We want to approximate global attention using the knowledge that the softmax operation only puts significant weights on very few key vectors. 2. If key vectors are equal to query vectors this means that *for each* query vector \\( \mathbf{q}_{i} \\), the softmax only puts significant weight on other query vectors that are similar in terms of cosine similarity. 3. This relationship works in both directions, meaning that if \\( \mathbf{q}_{i} \\) is similar to \\( \mathbf{q}_{j} \\), then \\( \mathbf{q}_{j} \\) is also similar to \\( \mathbf{q}_{i} \\), so that we can do a global clustering before applying self-attention on a permuted input. 4. We apply local self-attention on the permuted input and re-order the output to its original permutation. --- \\( {}^{1} \\) The authors run some preliminary experiments confirming that shared query key self-attention performs more or less as well as standard self-attention. \\( {}^{2} \\) To be more exact, the query vectors within a bucket are sorted according to their original order. This means that if, *e.g.*, the vectors \\( \mathbf{q}_1, \mathbf{q}_3, \mathbf{q}_7 \\) are all hashed to bucket 2, the order of the vectors in bucket 2 would still be \\( \mathbf{q}_1 \\), followed by \\( \mathbf{q}_3 \\) and \\( \mathbf{q}_7 \\). \\( {}^3 \\) On a side note, it should be mentioned that the authors put a mask on the query vector \\( \mathbf{q}_{i} \\) to prevent the vector from attending to itself. Because the cosine similarity of a vector to itself will always be as high as or higher than the cosine similarity to other vectors, the query vectors in shared query key self-attention are strongly discouraged from attending to themselves. ### Benchmark Benchmark tools were recently added to Transformers - see [here](https://github.com/huggingface/transformers/blob/master/notebooks/05-benchmark.ipynb) for a more detailed explanation. 
To show how much memory can be saved using "local" + "LSH" self-attention, the Reformer model `google/reformer-enwik8` is benchmarked for different `local_attn_chunk_length` and `lsh_attn_chunk_length`. The default configuration and usage of the `google/reformer-enwik8` model can be checked in more detail [here](https://huggingface.co/google/reformer-enwik8). Let's first do some necessary imports and installs. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` First, let's benchmark the memory usage of the Reformer model using *global* self-attention. This can be achieved by setting `lsh_attn_chunk_length` = `local_attn_chunk_length` = 16386 so that for all input sequences smaller than or equal to 16386, the model automatically switches to global self-attention. ``` config = ReformerConfig.from_pretrained("google/reformer-enwik8", lsh_attn_chunk_length=16386, local_attn_chunk_length=16386, lsh_num_chunks_before=0, local_num_chunks_before=0) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16386], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args) result = benchmark.run() ``` 1 / 1 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 8.87 GiB already allocated; 1.92 GiB free; 8.88 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer 1 2048 1465 Reformer 1 4096 2757 Reformer 1 8192 7893 Reformer 1 16386 N/A -------------------------------------------------------------------------------- The longer the input sequence, the more visible the quadratic relationship \\( \mathcal{O}(n^2) \\) between input sequence length and peak memory usage becomes. As can be seen, in practice it would require a much longer input sequence to clearly observe that doubling the input sequence quadruples the peak memory usage. For the `google/reformer-enwik8` model using global attention, a sequence length of over 16K already results in a memory overflow. Now, let's activate *local* and *LSH* self-attention by using the model's default parameters. ``` config = ReformerConfig.from_pretrained("google/reformer-enwik8") benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16384, 32768, 65436], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args) result = benchmark.run() ``` 1 / 1 Doesn't fit on GPU. CUDA out of memory. 
Tried to allocate 4.00 GiB (GPU 0; 11.17 GiB total capacity; 6.56 GiB already allocated; 3.99 GiB free; 6.81 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer 1 2048 1785 Reformer 1 4096 2621 Reformer 1 8192 4281 Reformer 1 16384 7607 Reformer 1 32768 N/A Reformer 1 65436 N/A -------------------------------------------------------------------------------- As expected, using local and LSH self-attention is much more memory efficient for longer input sequences, so that on the 11GB GPU used in this notebook the model runs out of memory only for input sequences longer than 16K tokens. ## 2. Chunked Feed Forward Layers Transformer-based models often employ very large feed forward layers after the self-attention layer, which are applied to each position in parallel. Thereby, this layer can take up a significant amount of the overall memory and sometimes even represent the memory bottleneck of a model. First introduced in the Reformer paper, feed forward chunking is a technique that allows one to trade reduced memory consumption for increased time consumption. ### Chunked Feed Forward Layer in Reformer In Reformer, the _LSH_- or _local_ self-attention layer is usually followed by a residual connection, which then defines the first part in a *transformer block*. For more detail on this please refer to this [blog](http://jalammar.github.io/illustrated-transformer/). The output of the first part of the *transformer block*, called the *normed self-attention* output, can be written as \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\), with \\( \mathbf{Z} \\) being either \\( \mathbf{Z}^{\text{LSH}} \\) or \\( \mathbf{Z}^\text{loc} \\) in Reformer. For our example input \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\), we illustrate the normed self-attention output as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/layer_normed_output.png) Now, the second part of a *transformer block* usually consists of two feed forward layers \\( ^{1} \\), defined as \\( \text{Linear}_{\text{int}}(\ldots) \\), which processes \\( \mathbf{\overline{Z}} \\) to an intermediate output \\( \mathbf{Y}_{\text{int}} \\), and \\( \text{Linear}_{\text{out}}(\ldots) \\), which processes the intermediate output to the output \\( \mathbf{Y}_{\text{out}} \\). The two feed forward layers can be defined by $$\mathbf{Y}_{\text{out}} = \text{Linear}_{\text{out}}(\mathbf{Y}_\text{int}) = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}})).$$ It is important to remember at this point that mathematically the output of a feed forward layer at position \\( \mathbf{y}_{\text{out}, i} \\) only depends on the input at this position \\( \mathbf{\overline{z}}_{i} \\). In contrast to the self-attention layer, every output \\( \mathbf{y}_{\text{out}, i} \\) is therefore completely independent of all inputs \\( \mathbf{\overline{z}}_{j \ne i} \\) of different positions. Let's illustrate the feed forward layers for \\( \mathbf{\overline{z}}_1, \ldots, \mathbf{\overline{z}}_{16} \\). ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward.png) As can be seen in the illustration, all input vectors \\( \mathbf{\overline{z}}_{i} \\) are processed by the same feed forward layer in parallel. 
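Because the feed forward layers act on each position independently, applying them chunk by chunk and concatenating the results gives (up to numerical precision) the same output as applying them to the whole sequence at once - this is the property that feed forward chunking exploits. Below is a minimal sketch of this equivalence; the tensor names and sizes loosely follow our running example and are not taken from the 🤗Transformers implementation.

```python
import torch
import torch.nn as nn

hidden_size, feed_forward_size, seq_len = 4, 16, 16

linear_int = nn.Linear(hidden_size, feed_forward_size)   # Linear_int
linear_out = nn.Linear(feed_forward_size, hidden_size)   # Linear_out

z_bar = torch.randn(seq_len, hidden_size)                # normed self-attention output

# full computation: materializes the (seq_len x feed_forward_size) intermediate tensor
y_out_full = linear_out(linear_int(z_bar))

# chunked computation: only a (chunk_size x feed_forward_size) intermediate tensor
# lives in memory at any point in time
chunk_size = 1
y_out_chunked = torch.cat(
    [linear_out(linear_int(chunk)) for chunk in z_bar.split(chunk_size, dim=0)], dim=0
)

print(torch.allclose(y_out_full, y_out_chunked, atol=1e-6))  # True
```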
It becomes interesting when one takes a look at the output dimensions of the feed forward layers. In Reformer, the output dimension of \\( \text{Linear}_{\text{int}} \\) is defined as `config.feed_forward_size`, *i.e.* \\( d_{f} \\), and the output dimension of \\( \text{Linear}_{\text{out}} \\) is defined as `config.hidden_size`, *i.e.* \\( d_{h} \\). The Reformer authors observed that in a transformer model the intermediate dimension \\( d_{f} \\) usually tends to be much larger than the output dimension \\(^{2}\\) \\( d_{h} \\). This means that the tensor \\( \mathbf{Y}_\text{int} \\) of dimension \\( d_{f} \times n \\) allocates a significant amount of the total memory and can even become the memory bottleneck. To get a better feeling for the differences in dimensions let's picture the matrices \\( \mathbf{Y}_\text{int} \\) and \\( \mathbf{Y}_\text{out} \\) for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward_matrix.png) It is becoming quite obvious that the tensor \\( \mathbf{Y}_\text{int} \\) holds much more memory ( \\( \frac{d_{f}}{d_{h}} \\) times as much, to be exact) than \\( \mathbf{Y}_{\text{out}} \\). But, is it even necessary to compute the full intermediate matrix \\( \mathbf{Y}_\text{int} \\) ? Not really, because only the output matrix \\( \mathbf{Y}_\text{out} \\) is relevant. To trade memory for speed, one can thus chunk the linear layers' computation to only process one chunk at a time. Defining `config.chunk_size_feed_forward` as \\( c_{f} \\), chunked linear layers are defined as \\( \mathbf{Y}_{\text{out}} = \left[\mathbf{Y}_{\text{out}, 1: c_{f}}, \ldots, \mathbf{Y}_{\text{out}, (n - c_{f} + 1): n}\right] \\) with \\( \mathbf{Y}_{\text{out}, (c_{f} * (i-1) + 1): (c_{f} * i)} = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}}_{(c_{f} * (i-1) + 1): (c_{f} * i)})) \\). In practice, it just means that the output is incrementally computed and concatenated to avoid having to store the whole intermediate tensor \\( \mathbf{Y}_{\text{int}} \\) in memory. Assuming \\( c_{f}=1 \\) for our example, we can illustrate the incremental computation of the output for position \\( i=9 \\) as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_feed_forward.png) By processing the inputs in chunks of size 1, the only tensors that have to be stored in memory at the same time are \\( \mathbf{Y}_\text{out} \\) of a maximum size of \\( 16 \times d_{h} \\), \\( \mathbf{y}_{\text{int}, i} \\) of size \\( d_{f} \\) and the input \\( \mathbf{\overline{Z}} \\) of size \\( 16 \times d_{h} \\), with \\( d_{h} \\) being `config.hidden_size` \\(^{3}\\). Finally, it is important to remember that *chunked linear layers* yield a mathematically equivalent output to conventional linear layers and can therefore be applied to all transformer linear layers. Making use of `config.chunk_size_feed_forward` therefore allows a better trade-off between memory and speed in certain use cases. --- \\( {}^1 \\) For a simpler explanation, the layer norm layer which is normally applied to \\( \mathbf{\overline{Z}} \\) before being processed by the feed forward layers is omitted for now. \\( {}^2 \\) In `bert-base-uncased`, *e.g.*, the intermediate dimension \\( d_{f} \\) is 3072 and thus four times larger than the output dimension \\( d_{h} \\) of 768. 
\\( {}^3 \\) As a reminder, the number of attention heads `config.num_attention_heads` is assumed to be 1 for the sake of clarity and illustration in this notebook, so that the output of the self-attention layers can be assumed to be of size `config.hidden_size`. More information on chunked linear / feed forward layers can also be found [here](https://huggingface.co/transformers/glossary.html#feed-forward-chunking) on the 🤗Transformers docs. ### Benchmark Let's test how much memory can be saved by using chunked feed forward layers. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` Building wheel for transformers (setup.py) ... done First, let's compare the default `google/reformer-enwik8` model without chunked feed forward layers to the one with chunked feed forward layers. ``` config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8") # no chunk config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1) # feed forward chunk benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args) result = benchmark.run() ``` 1 / 2 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.74 GiB free; 9.06 GiB reserved in total by PyTorch) 2 / 2 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.24 GiB free; 9.56 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Chunk 8 1024 4281 Reformer-No-Chunk 8 2048 7607 Reformer-No-Chunk 8 4096 N/A Reformer-Chunk 8 1024 4309 Reformer-Chunk 8 2048 7669 Reformer-Chunk 8 4096 N/A -------------------------------------------------------------------------------- Interestingly, chunked feed forward layers do not seem to help here at all; if anything, peak memory usage even increases slightly. The reason is that `config.feed_forward_size` is not sufficiently large for the intermediate tensor to become the memory bottleneck. Let's see what happens to the memory peak usage if we increase the size of the feed forward layer by a factor of 4 and reduce the number of attention heads also by a factor of 4 so that the feed forward layer becomes the memory bottleneck. 
``` config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=0, num_attention_heads=2, feed_forward_size=16384) # no chunk config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1, num_attention_heads=2, feed_forward_size=16384) # feed forward chunk benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args) result = benchmark.run() ``` 1 / 2 2 / 2 ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Chunk 8 1024 3743 Reformer-No-Chunk 8 2048 5539 Reformer-No-Chunk 8 4096 9087 Reformer-Chunk 8 1024 2973 Reformer-Chunk 8 2048 3999 Reformer-Chunk 8 4096 6011 -------------------------------------------------------------------------------- Now a clear decrease in peak memory usage can be seen for longer input sequences. As a conclusion, it should be noted that chunked feed forward layers only make sense for models with few attention heads and large feed forward layers. ## 3. Reversible Residual Layers Reversible residual layers were first introduced by [Gomez et al.](https://arxiv.org/abs/1707.04585) and used to reduce memory consumption when training the popular *ResNet* model. Mathematically, reversible residual layers are slightly different from "real" residual layers but do not require the activations to be saved during the forward pass, which can drastically reduce memory consumption for training. ### Reversible Residual Layers in Reformer Let's start by investigating why training a model requires much more memory than the inference of the model. When running a model in inference, the required memory equals more or less the memory it takes to compute the **single** largest tensor in the model. On the other hand, when training a model, the required memory equals more or less the **sum** of all differentiable tensors. This is not surprising when considering how auto differentiation works in deep learning frameworks. These lecture [slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf) by Roger Grosse of the University of Toronto are great for better understanding auto differentiation. In a nutshell, in order to calculate the gradient of a differentiable function (*e.g.* a layer), auto differentiation requires the gradient of the function's output and the function's input and output tensor. While the gradients are dynamically computed and subsequently discarded, the input and output tensors (*a.k.a* activations) of a function are stored during the forward pass. Alright, let's apply this to a transformer model. A transformer model includes a stack of multiple so-called transformer layers. Each additional transformer layer forces the model to store more activations during the forward pass and thus increases the required memory for training. Let's take a more detailed look. A transformer layer essentially consists of two residual layers. The first residual layer represents the *self-attention* mechanism as explained in section 1) and the second residual layer represents the *linear* or feed-forward layers as explained in section 2). 
Using the same notation as before, the input of a transformer layer, *i.e.* \\( \mathbf{X} \\), is first normalized \\( ^{1} \\) and subsequently processed by the self-attention layer to get the output \\( \mathbf{Z} = \text{SelfAttn}(\text{LayerNorm}(\mathbf{X})) \\). We will abbreviate these two layers with \\( G \\) so that \\( \mathbf{Z} = G(\mathbf{X}) \\). Next, the residual \\( \mathbf{Z} \\) is added to the input \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\) and the sum is fed into the second residual layer - the two linear layers. \\( \mathbf{\overline{Z}} \\) is processed by a second normalization layer, followed by the two linear layers to get \\( \mathbf{Y} = \text{Linear}(\text{LayerNorm}(\mathbf{Z} + \mathbf{X})) \\). We will abbreviate the second normalization layer and the two linear layers with \\( F \\) yielding \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\). Finally, the residual \\( \mathbf{Y} \\) is added to \\( \mathbf{\overline{Z}} \\) to give the output of the transformer layer \\( \mathbf{\overline{Y}} = \mathbf{Y} + \mathbf{\overline{Z}} \\). Let's illustrate a complete transformer layer using the example of \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\). ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/normal_trans_resnet.png) To calculate the gradient of *e.g.* the self-attention block \\( G \\), three tensors have to be known beforehand: the gradient \\( \partial \mathbf{Z} \\), the output \\( \mathbf{Z} \\), and the input \\( \mathbf{X} \\). While \\( \partial \mathbf{Z} \\) can be calculated on-the-fly and discarded afterward, the values for \\( \mathbf{Z} \\) and \\( \mathbf{X} \\) have to be calculated and stored during the forward pass since it is not possible to recalculate them easily on-the-fly during backpropagation. Therefore, during the forward pass, large tensor outputs, such as the query-key dot product matrix \\( \mathbf{Q}\mathbf{K}^T \\) or the intermediate output of the linear layers \\( \mathbf{Y}^{\text{int}} \\), have to be stored in memory \\( ^{2} \\). Here, reversible residual layers come to our help. The idea is relatively straightforward. The residual block is designed in a way so that instead of having to store the input and output tensor of a function, both can easily be recalculated during the backward pass so that no tensor has to be stored in memory during the forward pass. This is achieved by using two input streams \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\), and two output streams \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\). The first residual \\( \mathbf{Z} \\) is computed from the first input stream, \\( \mathbf{Z} = G(\mathbf{X}^{(1)}) \\), and subsequently added to the input of the second input stream, so that \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X}^{(2)} \\). Similarly, the residual \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\) is added to the first input stream, so that the two output streams are defined by \\( \mathbf{\overline{Y}}^{(1)} = \mathbf{Y} + \mathbf{X}^{(1)} \\) and \\( \mathbf{\overline{Y}}^{(2)} = \mathbf{X}^{(2)} + \mathbf{Z} = \mathbf{\overline{Z}} \\). The reversible transformer layer can be visualized for \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) as follows. 
![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/rev_trans_resnet.png) As can be seen, the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) are calculated in a very similar way to \\( \mathbf{\overline{Y}} \\) of the non-reversible layer, but they are mathematically different. The authors of Reformer observe in some initial experiments that the performance of a reversible transformer model matches the performance of a standard transformer model. The first visible difference from the standard transformer layer is that there are two input streams and output streams \\( ^{3} \\), which at first slightly increases the required memory for the forward pass. The two-stream architecture is crucial though for not having to save any activations during the forward pass. Let's explain. For backpropagation, the reversible transformer layer has to calculate the gradients \\( \partial G \\) and \\( \partial F \\). In addition to the gradients \\( \partial \mathbf{Y} \\) and \\( \partial \mathbf{Z} \\) which can be calculated on-the-fly, the tensor values \\( \mathbf{Y} \\), \\( \mathbf{\overline{Z}} \\) have to be known for \\( \partial F \\) and the tensor values \\( \mathbf{Z} \\) and \\( \mathbf{X}^{(1)} \\) for \\( \partial G \\) to make auto-differentiation work. If we assume to know \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\), it can easily be depicted from the graph that one can calculate \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\) as follows. \\( \mathbf{X}^{(1)} = \mathbf{\overline{Y}}^{(1)} - F(\mathbf{\overline{Y}}^{(2)}) \\). Great, now that \\( \mathbf{X}^{(1)} \\) is known, \\( \mathbf{X}^{(2)} \\) can be computed by \\( \mathbf{X}^{(2)} = \mathbf{\overline{Y}}^{(2)} - G(\mathbf{X}^{(1)}) \\). Alright now, \\( \mathbf{Z} \\) and \\( \mathbf{Y} \\) are trivial to compute via \\( \mathbf{Y} = \mathbf{\overline{Y}}^{(1)} - \mathbf{X}^{(1)} \\) and \\( \mathbf{Z} = \mathbf{\overline{Y}}^{(2)} - \mathbf{X}^{(2)} \\). So as a conclusion, if only the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) of the **last** reversible transformer layer are stored during the forward pass, all other relevant activations can be derived by making use of \\( G \\) and \\( F \\) during the backward pass and passing the reconstructed \\( \mathbf{X}^{(1)} \\) and \\( \mathbf{X}^{(2)} \\) on to the previous layer. The overhead of two forward passes of \\( G \\) and \\( F \\) per reversible transformer layer during the backpropagation is traded against not having to store any activations during the forward pass. Not a bad deal! **Note**: Recently, major deep learning frameworks have released functionality that allows one to store only certain activations and recompute larger ones during backpropagation (TensorFlow [here](https://www.tensorflow.org/api_docs/python/tf/recompute_grad) and PyTorch [here](https://pytorch.org/docs/stable/checkpoint.html)). In contrast to reversible layers, this still means that at least one activation has to be stored for each transformer layer, but by defining which activations can dynamically be recomputed a lot of memory can be saved. --- \\( ^{1} \\) In the previous two sections, we have omitted the layer norm layers preceding both the self-attention layer and the linear layers. The reader should know that both \\( \mathbf{X} \\) and \\( \mathbf{\overline{Z}} \\) are processed by layer normalization before being fed into self-attention and the linear layers respectively. 
\\( ^{2} \\) While in the illustrations the dimension of \\( \mathbf{Q}\mathbf{K}^T \\) is written as \\( n \times n \\), in an *LSH self-attention* or *local self-attention* layer the dimension would only be \\( n \times l_{c} \times n_{h} \\) or \\( n \times l_{c} \\) respectively, with \\( l_{c} \\) being the chunk length and \\( n_{h} \\) the number of hashes. \\( ^{3} \\) In the first reversible transformer layer \\( \mathbf{X}^{(2)} \\) is set to be equal to \\( \mathbf{X}^{(1)} \\). ### Benchmark In order to measure the effect of reversible residual layers, we will compare the memory consumption of BERT with that of Reformer during training for an increasing number of layers. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, BertConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` Let's measure the required memory for the standard `bert-base-uncased` BERT model by increasing the number of layers from 4 to 12. ``` config_4_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=4) config_8_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=8) config_12_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=12) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Bert-4-Layers", "Bert-8-Layers", "Bert-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_4_layers_bert, config_8_layers_bert, config_12_layers_bert], args=benchmark_args) result = benchmark.run() ``` 1 / 3 2 / 3 3 / 3 ==================== TRAIN - MEMORY - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Bert-4-Layers 8 512 4103 Bert-8-Layers 8 512 5759 Bert-12-Layers 8 512 7415 -------------------------------------------------------------------------------- It can be seen that adding a single layer of BERT linearly increases the required memory by more than 400MB. 
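Before running the same comparison for Reformer, here is a minimal numerical sketch of the reconstruction trick described above. The linear layers `F` and `G` merely stand in for the feed forward and self-attention blocks; the point is only that the inputs \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\) can be recovered exactly from the outputs, so they do not have to be stored.

```python
import torch
import torch.nn as nn

hidden_size = 4
# stand-ins for the self-attention block G and the feed-forward block F
G = nn.Linear(hidden_size, hidden_size)
F = nn.Linear(hidden_size, hidden_size)

x_1, x_2 = torch.randn(16, hidden_size), torch.randn(16, hidden_size)

# forward pass of a reversible residual layer
z = G(x_1)
z_bar = z + x_2
y = F(z_bar)
y_bar_1 = y + x_1
y_bar_2 = z_bar

# backward pass: reconstruct the inputs from the outputs alone
x_1_rec = y_bar_1 - F(y_bar_2)
x_2_rec = y_bar_2 - G(x_1_rec)

print(torch.allclose(x_1, x_1_rec), torch.allclose(x_2, x_2_rec))  # True True
```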
``` config_4_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=4, num_hashes=1) config_8_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=8, num_hashes=1) config_12_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=12, num_hashes=1) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-4-Layers", "Reformer-8-Layers", "Reformer-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_4_layers_reformer, config_8_layers_reformer, config_12_layers_reformer], args=benchmark_args) result = benchmark.run() ``` 1 / 3 2 / 3 3 / 3 ==================== TRAIN - MEMORY - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-4-Layers 8 512 4607 Reformer-8-Layers 8 512 4987 Reformer-12-Layers 8 512 5367 -------------------------------------------------------------------------------- For Reformer, on the other hand, adding a layer adds significantly less memory in practice. Adding a single layer increases the required memory on average by less than 100MB so that a much larger 12-Layer `reformer-enwik8` model requires less memory than a 12-Layer `bert-base-uncased` model. ## 4. Axial Positional Encodings Reformer makes it possible to process huge input sequences. However, for such long input sequences standard positional encoding weight matrices alone would use more than 1GB to store its weights. To prevent such large positional encoding matrices, the official Reformer code introduced *Axial Position Encodings*. **Important:** *Axial Position Encodings were not explained in the official paper, but can be well understood from looking into the code and talking to the authors* ### Axial Positional Encodings in Reformer Transformers need positional encodings to account for the order of words in the input because self-attention layers have *no notion of order*. Positional encodings are usually defined by a simple look-up matrix \\( \mathbf{E} = \left[\mathbf{e}_1, \ldots, \mathbf{e}_{n_\text{max}}\right] \\) The positional encoding vector \\( \mathbf{e}_{i} \\) is then simply added to the *ith* input vector \\( \mathbf{x}_{i} + \mathbf{e}_{i} \\) so that the model can distinguish if an input vector (*a.k.a* token) is at position \\( i \\) or \\( j \\). For every input position, the model needs to be able to look up the corresponding positional encoding vector so that the dimension of \\( \mathbf{E} \\) is defined by the maximum length of input vectors the model can process `config.max_position_embeddings`, *i.e.* \\( n_\text{max} \\), and the `config.hidden_size`, *i.e.* \\( d_{h} \\) of the input vectors. Assuming \\( d_{h}=4 \\) and \\( n_\text{max}=49 \\), such a positional encoding matrix can be visualized as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/positional_encodings_default.png) Here, we showcase only the positional encodings \\( \mathbf{e}_{1} \\), \\( \mathbf{e}_{2} \\), and \\( \mathbf{e}_{49} \\) each of dimension, *a.k.a* height 4. 
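Such a look-up matrix corresponds to a standard learned embedding. A minimal sketch with the toy dimensions from the illustration (\\( d_{h}=4 \\), \\( n_\text{max}=49 \\)) might look as follows; the variable names are made up for illustration.

```python
import torch
import torch.nn as nn

n_max, d_h = 49, 4

# standard learned positional encodings: one d_h-dimensional vector per position
position_embeddings = nn.Embedding(n_max, d_h)   # the look-up matrix E

input_vectors = torch.randn(1, 16, d_h)          # x_1, ..., x_16
position_ids = torch.arange(16).unsqueeze(0)     # positions of the input vectors

# e_i is simply added to x_i
encoded = input_vectors + position_embeddings(position_ids)

print(sum(p.numel() for p in position_embeddings.parameters()))  # 49 * 4 = 196 weights
```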
Let's imagine we want to train a Reformer model on sequences of up to 0.5M tokens and an input vector `config.hidden_size` of 1024 (see notebook [here](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb)). The corresponding positional embeddings have a size of \\( 0.5M \times 1024 \sim 512M \\) parameters, which corresponds to a size of 2GB. Such positional encodings would use an unnecessarily large amount of memory both when loading the model in memory and when saving the model on a hard drive. The Reformer authors managed to drastically shrink the positional encodings in size by cutting the `config.hidden_size` dimension in two and smartly factorizing the \\( n_\text{max} \\) dimension. In 🤗Transformers, the user can decide into which shape \\( n_\text{max} \\) is factorized by setting `config.axial_pos_shape` to an appropriate list of two values \\( n_\text{max}^1 \\) and \\( n_\text{max}^2 \\) so that \\( n_\text{max}^1 \times n_\text{max}^2 = n_\text{max} \\). By setting `config.axial_pos_embds_dim` to an appropriate list of two values \\( d_{h}^{1} \\) and \\( d_{h}^2 \\) so that \\( d_{h}^1 + d_{h}^2 = d_{h} \\), the user can decide how the hidden size dimension should be cut. Now, let's visualize and explain more intuitively. One can think of factorizing \\( n_{\text{max}} \\) as folding the dimension into a third axis, which is shown in the following for the factorization `config.axial_pos_shape = [7, 7]`: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding.png) Each of the three standing rectangular prisms corresponds to one of the encoding vectors \\( \mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{49} \\), but we can see that the 49 encoding vectors are divided into 7 rows of 7 vectors each. Now the idea is to use only one row of 7 encoding vectors and expand those vectors to the other 6 rows, essentially reusing their values. Because having the same values for different encoding vectors is undesirable, each vector of dimension (*a.k.a* height) `config.hidden_size=4` is cut into the lower encoding vector \\( \mathbf{e}_\text{down} \\) of size \\( 1 \\) and \\( \mathbf{e}_\text{up} \\) of size \\( 3 \\), so that the lower part can be expanded along the row dimension and the upper part can be expanded along the column dimension. Let's visualize for more clarity. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding_cut.png) We can see that we have cut the embedding vectors into \\( \mathbf{e}_\text{down} \\) (*in blue*) and \\( \mathbf{e}_\text{up} \\) (*in yellow*). Now for the "sub"-vectors \\( \mathbf{E}_\text{down} = \left[\mathbf{e}_{\text{down},1}, \ldots, \mathbf{e}_{\text{down},49}\right] \\) only the first row of \\( 7 \\) vectors, *a.k.a.* the width in the graphic, is kept and expanded along the column dimension, *a.k.a.* the depth of the graphic. Inversely, for the "sub"-vectors \\( \mathbf{E}_\text{up} = \left[\mathbf{e}_{\text{up},1}, \ldots, \mathbf{e}_{\text{up},49}\right] \\) only the first column of \\( 7 \\) vectors is kept and expanded along the row dimension. 
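The following toy sketch mimics this factorization with the example values \\( d_{h}^1=1 \\), \\( d_{h}^2=3 \\) and \\( n_\text{max}^1 = n_\text{max}^2 = 7 \\). It is not the actual `AxialPositionEmbeddings` implementation, but it shows how two small parameter tensors can be broadcast and concatenated into one distinct encoding vector per position.

```python
import torch

d_h_1, d_h_2 = 1, 3        # axial_pos_embds_dim: d_h_1 + d_h_2 = hidden size 4
n_max_1, n_max_2 = 7, 7    # axial_pos_shape: n_max_1 * n_max_2 = 49 positions

# the only learned parameters: E_down and E_up
e_down = torch.randn(n_max_1, 1, d_h_1)   # varies along the first axis, broadcast along the second
e_up = torch.randn(1, n_max_2, d_h_2)     # varies along the second axis, broadcast along the first

# broadcast both to the full (n_max_1, n_max_2) grid and concatenate the hidden dimensions
axial_encodings = torch.cat(
    [e_down.expand(n_max_1, n_max_2, d_h_1), e_up.expand(n_max_1, n_max_2, d_h_2)],
    dim=-1,
).reshape(n_max_1 * n_max_2, d_h_1 + d_h_2)   # one 4-dim encoding per position

print(axial_encodings.shape)             # torch.Size([49, 4])
print(e_down.numel() + e_up.numel())     # 7 * 1 + 7 * 3 = 28 parameters instead of 49 * 4 = 196
```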
The resulting embedding vectors \\( \mathbf{e'}_{i} \\) then correspond to $$\mathbf{e'}_{i} = \left[ \left[\mathbf{e}_{\text{down, } i \% n_\text{max}^1}\right]^T, \left[\mathbf{e}_{\text{up, } \left \lfloor{\frac{i}{{n}^2_{\text{max}}}}\right \rfloor} \right]^T \right]^T $$ whereas \\( n_\text{max}^1 = 7 \\) and \\( n_\text{max}^2 = 7 \\) in our example. These new encodings \\( \mathbf{E'} = \left[\mathbf{e'}_{1}, \ldots, \mathbf{e'}_{n_\text{max}}\right] \\) are called **Axial Position Encodings**. In the following, these axial position encodings are illustrated in more detail for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/axial_pos_encoding.png) Now it should be more understandable how the final positional encoding vectors \\( \mathbf{E'} \\) are calculated only from \\( \mathbf{E}_{\text{down}} \\) of dimension \\( d_{h}^1 \times n_{\text{max}^1} \\) and \\( \mathbf{E}_{\text{up}} \\) of dimension \\( d_{h}^2 \times n_{\text{max}}^2 \\). The crucial aspect to see here is that Axial Positional Encodings make sure that none of the vectors \\( \left[\mathbf{e'}_1, \ldots, \mathbf{e'}_{n_{\text{max}}}\right] \\) are equal to each other by design and that the overall size of the encoding matrix is reduced from \\( n_{\text{max}} \times d_{h} \\) to \\( n_{\text{max}}^1 \times d_{h}^1 + n_\text{max}^2 \times d_{h}^2 \\). By allowing each axial positional encoding vector to be different by design the model is given much more flexibility to learn efficient positional representations if axial positional encodings are learned by the model. To demonstrate the drastic reduction in size, let's assume we would have set `config.axial_pos_shape = [1024, 512]` and `config.axial_pos_embds_dim = [512, 512]` for a Reformer model that can process inputs up to a length of 0.5M tokens. The resulting axial positional encoding matrix would have had a size of only \\( 1024 \times 512 + 512 \times 512 \sim 800K \\) parameters which corresponds to roughly 3MB. This is a drastic reduction from the 2GB a standard positional encoding matrix would require in this case. For a more condensed and math-heavy explanation please refer to the 🤗Transformers docs [here](https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings). ### Benchmark Lastly, let's also compare the peak memory consumption of conventional positional embeddings to *axial positional embeddings*. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments, ReformerModel ``` Positional embeddings depend only on two configuration parameters: The maximum allowed length of input sequences `config.max_position_embeddings` and `config.hidden_size`. Let's use a model that pushes the maximum allowed length of input sequences to half a million tokens, called `google/reformer-crime-and-punishment`, to see the effect of using axial positional embeddings. To begin with, we will compare the shape of axial position encodings with standard positional encodings and the number of parameters in the model. 
``` config_no_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=False) # disable axial positional embeddings config_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=True, axial_pos_embds_dim=(64, 192), axial_pos_shape=(512, 1024)) # enable axial positional embeddings print("Default Positional Encodings") print(20 * '-') model = ReformerModel(config_no_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') print("Axial Positional Encodings") print(20 * '-') model = ReformerModel(config_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') ``` HBox(children=(FloatProgress(value=0.0, description='Downloading', max=1151.0, style=ProgressStyle(description… Default Positional Encodings -------------------- Positional embeddings shape: PositionEmbeddings( (embedding): Embedding(524288, 256) ) Num parameters of model: 136572416 -------------------- Axial Positional Encodings -------------------- Positional embeddings shape: AxialPositionEmbeddings( (weights): ParameterList( (0): Parameter containing: [torch.FloatTensor of size 512x1x64] (1): Parameter containing: [torch.FloatTensor of size 1x1024x192] ) ) Num parameters of model: 2584064 -------------------- Having read the theory, the shape of the axial positional encoding weights should not be a surprise to the reader. Regarding the results, it can be seen that for models being capable of processing such long input sequences, it is not practical to use default positional encodings. In the case of `google/reformer-crime-and-punishment`, standard positional encodings alone contain more than 100M parameters. Axial positional encodings reduce this number to just over 200K. Lastly, let's also compare the required memory at inference time. ``` benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-No-Axial-Pos-Embeddings", "Reformer-Axial-Pos-Embeddings"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_pos_axial_embeds, config_pos_axial_embeds], args=benchmark_args) result = benchmark.run() ``` 1 / 2 2 / 2 ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Axial-Pos-Embeddin 8 512 959 Reformer-Axial-Pos-Embeddings 8 512 447 -------------------------------------------------------------------------------- It can be seen that using axial positional embeddings reduces the memory requirement to approximately half in the case of `google/reformer-crime-and-punishment`.
Block Sparse Matrices for Smaller and Faster Language Models
madlag
Sep 10, 2020
pytorch_block_sparse
research, nlp
https://huggingface.co/blog/pytorch_block_sparse
# Block Sparse Matrices for Smaller and Faster Language Models ## Saving space and time, one zero at a time In previous [blog](https://medium.com/huggingface/is-the-future-of-neural-networks-sparse-an-introduction-1-n-d03923ecbd70) [posts](https://medium.com/huggingface/sparse-neural-networks-2-n-gpu-performance-b8bc9ce950fc) we introduced sparse matrices and what they could do to improve neural networks. The basic assumption is that full dense layers are often overkill and can be pruned without a significant loss in precision. In some cases sparse linear layers can even *improve precision and/or generalization*. The main issue is that currently available code that supports sparse algebra computation is severely lacking in efficiency. We are also [still waiting](https://openai.com/blog/openai-pytorch/) for official PyTorch support. That's why we ran out of patience and took some time this summer to address this "lacuna". Today, we are excited to **release the extension [pytorch_block_sparse](https://github.com/huggingface/pytorch_block_sparse)**. By itself, or even better combined with other methods like [distillation](https://medium.com/huggingface/distilbert-8cf3380435b5) and [quantization](https://medium.com/microsoftazure/faster-and-smaller-quantized-nlp-with-hugging-face-and-onnx-runtime-ec5525473bb7), this library enables **networks** which are both **smaller and faster**, something Hugging Face considers crucial to let anybody use neural networks in production at **low cost**, and to **improve the experience** for the end user. ## Usage The provided `BlockSparseLinear` module is a drop-in replacement for `torch.nn.Linear`, and it is trivial to use it in your models: ```python # from torch.nn import Linear from pytorch_block_sparse import BlockSparseLinear ... # self.fc = nn.Linear(1024, 256) self.fc = BlockSparseLinear(1024, 256, density=0.1) ``` The extension also provides a `BlockSparseModelPatcher` that allows you to modify an existing model "on the fly", which is shown in this [example notebook](https://github.com/huggingface/pytorch_block_sparse/blob/master/doc/notebooks/ModelSparsification.ipynb). Such a model can then be trained as usual, without any change in your model source code. ## NVIDIA CUTLASS This extension is based on the [cutlass tilesparse](https://github.com/YulhwaKim/cutlass_tilesparse) proof of concept by [Yulhwa Kim](https://github.com/YulhwaKim). It is using **C++ CUDA templates** for block-sparse matrix multiplication based on **[CUTLASS](https://developer.nvidia.com/blog/cutlass-linear-algebra-cuda/)**. CUTLASS is a collection of CUDA C++ templates for implementing high-performance CUDA kernels. With CUTLASS, approaching cuBLAS performance on custom kernels is possible without resorting to assembly language code. The latest versions include all the **Ampere Tensor Core primitives**, providing **10x or more speedups** with a limited loss of precision. Future versions of pytorch_block_sparse will make use of these primitives, as block sparsity is 100% compatible with Tensor Cores requirements. ## Performance At the current stage of the library, the performance of sparse matrices is roughly two times slower than that of their cuBLAS-optimized dense counterpart, and we are confident that we can improve this in the future. This is a huge improvement on PyTorch sparse matrices: their current implementation is an order of magnitude slower than the dense one. 
But the more important point is that the performance gain of using sparse matrices grows with the sparsity, so a **75% sparse matrix** is roughly **2x** faster than the dense equivalent. The memory savings are even more significant: for **75% sparsity**, memory consumption is reduced by **4x**, as you would expect. ## Future work Being able to efficiently train block-sparse linear layers was just the first step. The sparsity pattern is currently fixed at initialization, and of course optimizing it during learning will yield large improvements. So in future versions, you can expect tools to measure the "usefulness" of parameters to be able to **optimize the sparsity pattern**. The **NVIDIA Ampere 50% sparse pattern** within blocks will probably yield another significant performance gain, just as upgrading to more recent versions of CUTLASS does. So, stay tuned for more sparsity goodness in the near future!
Transformer-based Encoder-Decoder Models
patrickvonplaten
October 10, 2020
encoder-decoder
research, nlp
https://huggingface.co/blog/encoder-decoder
# Transformer-based Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Encoder_Decoder_Model.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # **Transformer-based Encoder-Decoder Models** ```bash !pip install transformers==4.2.1 !pip install sentencepiece==0.1.95 ``` The *transformer-based* encoder-decoder model was introduced by Vaswani et al. in the famous [Attention is all you need paper](https://arxiv.org/abs/1706.03762) and is today the *de-facto* standard encoder-decoder architecture in natural language processing (NLP). Recently, there has been a lot of research on different *pre-training* objectives for transformer-based encoder-decoder models, *e.g.* T5, Bart, Pegasus, ProphetNet, Marge, *etc.*, but the model architecture has stayed largely the same. The goal of the blog post is to give an **in-depth** explanation of **how** the transformer-based encoder-decoder architecture models *sequence-to-sequence* problems. We will focus on the mathematical model defined by the architecture and how the model can be used in inference. Along the way, we will give some background on sequence-to-sequence models in NLP and break down the *transformer-based* encoder-decoder architecture into its **encoder** and **decoder** parts. We provide many illustrations and establish the link between the theory of *transformer-based* encoder-decoder models and their practical usage in 🤗Transformers for inference. Note that this blog post does *not* explain how such models can be trained - this will be the topic of a future blog post. Transformer-based encoder-decoder models are the result of years of research on _representation learning_ and _model architectures_. This notebook provides a short summary of the history of neural encoder-decoder models. For more context, the reader is advised to read this awesome [blog post](https://ruder.io/a-review-of-the-recent-history-of-nlp/) by Sebastian Ruder. Additionally, a basic understanding of the _self-attention architecture_ is recommended. The following [blog post](http://jalammar.github.io/illustrated-transformer/) by Jay Alammar serves as a good refresher on the original Transformer model. At the time of writing this notebook, 🤗Transformers comprises the encoder-decoder models *T5*, *Bart*, *MarianMT*, and *Pegasus*, which are summarized in the docs under [model summaries](https://huggingface.co/transformers/model_summary.html#sequence-to-sequence-models). The notebook is divided into four parts: - **Background** - *A short history of neural encoder-decoder models is given with a focus on RNN-based models.* - **Encoder-Decoder** - *The transformer-based encoder-decoder model is presented and it is explained how the model is used for inference.* - **Encoder** - *The encoder part of the model is explained in detail.* - **Decoder** - *The decoder part of the model is explained in detail.* Each part builds upon the previous part, but can also be read on its own. ## **Background** Tasks in natural language generation (NLG), a subfield of NLP, are best expressed as sequence-to-sequence problems. Such tasks can be defined as finding a model that maps a sequence of input words to a sequence of target words. Some classic examples are *summarization* and *translation*. In the following, we assume that each word is encoded into a vector representation. 
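As a toy illustration of this assumption (the vocabulary, ids, and embedding dimension below are made up purely for illustration), mapping words to vectors can be as simple as an embedding look-up:

```python
import torch
import torch.nn as nn

# toy vocabulary and embedding layer
vocab = {"I": 0, "want": 1, "to": 2, "buy": 3, "a": 4, "car": 5, "EOS": 6}
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

sentence = "I want to buy a car EOS".split()
input_ids = torch.tensor([vocab[word] for word in sentence])

# X_{1:n}: one 8-dimensional vector x_i per input word
input_vectors = embedding(input_ids)
print(input_vectors.shape)  # torch.Size([7, 8])
```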
\\(n\\) input words can then be represented as a sequence of \\(n\\) input vectors: $$\mathbf{X}_{1:n} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}.$$ Consequently, sequence-to-sequence problems can be solved by finding a mapping \\(f\\) from an input sequence of \\(n\\) vectors \\(\mathbf{X}_{1:n}\\) to a sequence of \\(m\\) target vectors \\(\mathbf{Y}_{1:m}\\), where the number of target vectors \\(m\\) is unknown *a priori* and depends on the input sequence: $$ f: \mathbf{X}_{1:n} \to \mathbf{Y}_{1:m}. $$ [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) noted that deep neural networks (DNNs), "*despite their flexibility and power can only define a mapping whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality.*" \\({}^1\\) Using a DNN model \\({}^2\\) to solve sequence-to-sequence problems would therefore mean that the number of target vectors \\(m\\) has to be known *a priori* and would have to be independent of the input \\(\mathbf{X}_{1:n}\\). This is suboptimal because, for tasks in NLG, the number of target words usually depends on the input \\(\mathbf{X}_{1:n}\\) and not just on the input length \\(n\\). *E.g.*, an article of 1000 words can be summarized to 200 words or to 100 words depending on its content. In 2014, [Cho et al.](https://arxiv.org/pdf/1406.1078.pdf) and [Sutskever et al.](https://arxiv.org/abs/1409.3215) proposed to use an encoder-decoder model purely based on recurrent neural networks (RNNs) for *sequence-to-sequence* tasks. In contrast to DNNs, RNNs are capable of modeling a mapping to a variable number of target vectors. Let's dive a bit deeper into the functioning of RNN-based encoder-decoder models. During inference, the encoder RNN encodes an input sequence \\(\mathbf{X}_{1:n}\\) by successively updating its *hidden state* \\({}^3\\). After having processed the last input vector \\(\mathbf{x}_n\\), the encoder's hidden state defines the input encoding \\(\mathbf{c}\\). Thus, the encoder defines the mapping: $$ f_{\theta_{enc}}: \mathbf{X}_{1:n} \to \mathbf{c}. $$ Then, the decoder's hidden state is initialized with the input encoding and during inference, the decoder RNN is used to auto-regressively generate the target sequence. Let's explain. Mathematically, the decoder defines the probability distribution of a target sequence \\(\mathbf{Y}_{1:m}\\) given the hidden state \\(\mathbf{c}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c}). $$ By the chain rule of probability, the distribution can be decomposed into conditional distributions of single target vectors as follows: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c}) = \prod_{i=1}^{m} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}). $$ Thus, if the architecture can model the conditional distribution of the next target vector, given all previous target vectors: $$ p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}), \forall i \in \{1, \ldots, m\},$$ then it can model the distribution of any target vector sequence given the hidden state \\(\mathbf{c}\\) by simply multiplying all conditional probabilities. So how does the RNN-based decoder architecture model \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\)? 
In computational terms, the model sequentially maps the previous inner hidden state \\(\mathbf{c}_{i-1}\\) and the previous target vector \\(\mathbf{y}_{i-1}\\) to the current inner hidden state \\(\mathbf{c}_i\\) and a *logit vector* \\(\mathbf{l}_i\\) (shown in dark red below): $$ f_{\theta_{\text{dec}}}(\mathbf{y}_{i-1}, \mathbf{c}_{i-1}) \to \mathbf{l}_i, \mathbf{c}_i.$$ \\(\mathbf{c}_0\\) is thereby defined as \\(\mathbf{c}\\) being the output hidden state of the RNN-based encoder. Subsequently, the *softmax* operation is used to transform the logit vector \\(\mathbf{l}_i\\) to a conditional probability distribution of the next target vector: $$ p(\mathbf{y}_i | \mathbf{l}_i) = \textbf{Softmax}(\mathbf{l}_i), \text{ with } \mathbf{l}_i = f_{\theta_{\text{dec}}}(\mathbf{y}_{i-1}, \mathbf{c}_{i-1}). $$ For more detail on the logit vector and the resulting probability distribution, please see footnote \\({}^4\\). From the above equation, we can see that the distribution of the current target vector \\(\mathbf{y}_i\\) is directly conditioned on the previous target vector \\(\mathbf{y}_{i-1}\\) and the previous hidden state \\(\mathbf{c}_{i-1}\\). Because the previous hidden state \\(\mathbf{c}_{i-1}\\) depends on all previous target vectors \\(\mathbf{y}_0, \ldots, \mathbf{y}_{i-2}\\), it can be stated that the RNN-based decoder *implicitly* (*i.e.* *indirectly*) models the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\). The space of possible target vector sequences \\(\mathbf{Y}_{1:m}\\) is prohibitively large so that at inference, one has to rely on decoding methods \\({}^5\\) that efficiently sample high probability target vector sequences from \\(p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c})\\). Given such a decoding method, during inference, the next input vector \\(\mathbf{y}_i\\) can then be sampled from \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\) and is consequently appended to the input sequence so that the decoder RNN then models \\(p_{\theta_{\text{dec}}}(\mathbf{y}_{i+1} | \mathbf{Y}_{0: i}, \mathbf{c})\\) to sample the next input vector \\(\mathbf{y}_{i+1}\\) and so on in an *auto-regressive* fashion. An important feature of RNN-based encoder-decoder models is the definition of *special* vectors, such as the \\(\text{EOS}\\) and \\(\text{BOS}\\) vector. The \\(\text{EOS}\\) vector often represents the final input vector \\(\mathbf{x}_n\\) to "cue" the encoder that the input sequence has ended and also defines the end of the target sequence. As soon as the \\(\text{EOS}\\) is sampled from a logit vector, the generation is complete. The \\(\text{BOS}\\) vector represents the input vector \\(\mathbf{y}_0\\) fed to the decoder RNN at the very first decoding step. To output the first logit \\(\mathbf{l}_1\\), an input is required, and since no input has been generated at the first step, a special \\(\text{BOS}\\) input vector is fed to the decoder RNN. Ok - quite complicated! Let's illustrate and walk through an example. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/rnn_seq2seq.png) The unfolded RNN encoder is colored in green and the unfolded RNN decoder is colored in red. 
The English sentence \"I want to buy a car\", represented by \\(\mathbf{x}_1 = \text{I}\\), \\(\mathbf{x}_2 = \text{want}\\), \\(\mathbf{x}_3 = \text{to}\\), \\(\mathbf{x}_4 = \text{buy}\\), \\(\mathbf{x}_5 = \text{a}\\), \\(\mathbf{x}_6 = \text{car}\\) and \\(\mathbf{x}_7 = \text{EOS}\\) is translated into German: \"Ich will ein Auto kaufen\" defined as \\(\mathbf{y}_0 = \text{BOS}\\), \\(\mathbf{y}_1 = \text{Ich}\\), \\(\mathbf{y}_2 = \text{will}\\), \\(\mathbf{y}_3 = \text{ein}\\), \\(\mathbf{y}_4 = \text{Auto}, \mathbf{y}_5 = \text{kaufen}\\) and \\(\mathbf{y}_6=\text{EOS}\\). To begin with, the input vector \\(\mathbf{x}_1 = \text{I}\\) is processed by the encoder RNN and updates its hidden state. Note that because we are only interested in the final encoder\'s hidden state \\(\mathbf{c}\\), we can disregard the RNN encoder\'s target vector. The encoder RNN then processes the rest of the input sentence \\(\text{want}\\), \\(\text{to}\\), \\(\text{buy}\\), \\(\text{a}\\), \\(\text{car}\\), \\(\text{EOS}\\) in the same fashion, updating its hidden state at each step until the vector \\(\mathbf{x}_7={EOS}\\) is reached \\({}^6\\). In the illustration above the horizontal arrow connecting the unfolded encoder RNN represents the sequential updates of the hidden state. The final hidden state of the encoder RNN, represented by \\(\mathbf{c}\\) then completely defines the *encoding* of the input sequence and is used as the initial hidden state of the decoder RNN. This can be seen as *conditioning* the decoder RNN on the encoded input. To generate the first target vector, the decoder is fed the \\(\text{BOS}\\) vector, illustrated as \\(\mathbf{y}_0\\) in the design above. The target vector of the RNN is then further mapped to the logit vector \\(\mathbf{l}_1\\) by means of the *LM Head* feed-forward layer to define the conditional distribution of the first target vector as explained above: $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{c}). $$ The word \\(\text{Ich}\\) is sampled (shown by the grey arrow, connecting \\(\mathbf{l}_1\\) and \\(\mathbf{y}_1\\)) and consequently the second target vector can be sampled: $$ \text{will} \sim p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \text{Ich}, \mathbf{c}). $$ And so on until at step \\(i=6\\), the \\(\text{EOS}\\) vector is sampled from \\(\mathbf{l}_6\\) and the decoding is finished. The resulting target sequence amounts to \\(\mathbf{Y}_{1:6} = \{\mathbf{y}_1, \ldots, \mathbf{y}_6\}\\), which is \"Ich will ein Auto kaufen\" in our example above. To sum it up, an RNN-based encoder-decoder model, represented by \\(f_{\theta_{\text{enc}}}\\) and \\( p_{\theta_{\text{dec}}} \\) defines the distribution \\(p(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n})\\) by factorization: $$ p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{X}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}), \text{ with } \mathbf{c}=f_{\theta_{enc}}(X). $$ During inference, efficient decoding methods can auto-regressively generate the target sequence \\(\mathbf{Y}_{1:m}\\). The RNN-based encoder-decoder model took the NLG community by storm. 
In 2016, Google announced to fully replace its heavily feature engineered translation service by a single RNN-based encoder-decoder model (see [here](https://www.oreilly.com/radar/what-machine-learning-means-for-software-development/#:~:text=Machine%20learning%20is%20already%20making,of%20code%20in%20Google%20Translate.)). Nevertheless, RNN-based encoder-decoder models have two pitfalls. First, RNNs suffer from the vanishing gradient problem, making it very difficult to capture long-range dependencies, *cf.* [Hochreiter et al. (2001)](https://www.bioinf.jku.at/publications/older/ch7.pdf). Second, the inherent recurrent architecture of RNNs prevents efficient parallelization when encoding, *cf.* [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762). ------------------------------------------------------------------------ \\({}^1\\) The original quote from the paper is \"*Despite their flexibility and power, DNNs can only be applied to problems whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality*\", which is slightly adapted here. \\({}^2\\) The same holds essentially true for convolutional neural networks (CNNs). While an input sequence of variable length can be fed into a CNN, the dimensionality of the target will always be dependent on the input dimensionality or fixed to a specific value. \\({}^3\\) At the first step, the hidden state is initialized as a zero vector and fed to the RNN together with the first input vector \\(\mathbf{x}_1\\). \\({}^4\\) A neural network can define a probability distribution over all words, *i.e.* \\(p(\mathbf{y} | \mathbf{c}, \mathbf{Y}_{0: i-1})\\) as follows. First, the network defines a mapping from the inputs \\(\mathbf{c}, \mathbf{Y}_{0: i-1}\\) to an embedded vector representation \\(\mathbf{y'}\\), which corresponds to the RNN target vector. The embedded vector representation \\(\mathbf{y'}\\) is then passed to the \"language model head\" layer, which means that it is multiplied by the *word embedding matrix*, *i.e.* \\(\mathbf{Y}^{\text{vocab}}\\), so that a score between \\(\mathbf{y'}\\) and each encoded vector \\(\mathbf{y} \in \mathbf{Y}^{\text{vocab}}\\) is computed. The resulting vector is called the logit vector \\( \mathbf{l} = \mathbf{Y}^{\text{vocab}} \mathbf{y'} \\) and can be mapped to a probability distribution over all words by applying a softmax operation: \\(p(\mathbf{y} | \mathbf{c}) = \text{Softmax}(\mathbf{Y}^{\text{vocab}} \mathbf{y'}) = \text{Softmax}(\mathbf{l})\\). \\({}^5\\) Beam-search decoding is an example of such a decoding method. Different decoding methods are out of scope for this notebook. The reader is advised to refer to this [interactive notebook](https://huggingface.co/blog/how-to-generate) on decoding methods. \\({}^6\\) [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) reverses the order of the input so that in the above example the input vectors would correspond to \\(\mathbf{x}_1 = \text{car}\\), \\(\mathbf{x}_2 = \text{a}\\), \\(\mathbf{x}_3 = \text{buy}\\), \\(\mathbf{x}_4 = \text{to}\\), \\(\mathbf{x}_5 = \text{want}\\), \\(\mathbf{x}_6 = \text{I}\\) and \\(\mathbf{x}_7 = \text{EOS}\\). The motivation is to allow for a shorter connection between corresponding word pairs such as \\(\mathbf{x}_6 = \text{I}\\) and \\(\mathbf{y}_1 = \text{Ich}\\). The research group emphasizes that the reversal of the input sequence was a key reason for their model\'s improved performance on machine translation. ## **Encoder-Decoder** In 2017, Vaswani et al. 
introduced the **Transformer** and thereby gave birth to *transformer-based* encoder-decoder models. Analogous to RNN-based encoder-decoder models, transformer-based encoder-decoder models consist of an encoder and a decoder which are both stacks of *residual attention blocks*. The key innovation of transformer-based encoder-decoder models is that such residual attention blocks can process an input sequence \\(\mathbf{X}_{1:n}\\) of variable length \\(n\\) without exhibiting a recurrent structure. Not relying on a recurrent structure allows transformer-based encoder-decoders to be highly parallelizable, which makes the model orders of magnitude more computationally efficient than RNN-based encoder-decoder models on modern hardware. As a reminder, to solve a *sequence-to-sequence* problem, we need to find a mapping of an input sequence \\(\mathbf{X}_{1:n}\\) to an output sequence \\(\mathbf{Y}_{1:m}\\) of variable length \\(m\\). Let's see how transformer-based encoder-decoder models are used to find such a mapping. Similar to RNN-based encoder-decoder models, the transformer-based encoder-decoder models define a conditional distribution of target vectors \\(\mathbf{Y}_{1:m}\\) given an input sequence \\(\mathbf{X}_{1:n}\\): $$ p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}). $$ The transformer-based encoder part encodes the input sequence \\(\mathbf{X}_{1:n}\\) to a *sequence* of *hidden states* \\(\mathbf{\overline{X}}_{1:n}\\), thus defining the mapping: $$ f_{\theta_{\text{enc}}}: \mathbf{X}_{1:n} \to \mathbf{\overline{X}}_{1:n}. $$ The transformer-based decoder part then models the conditional probability distribution of the target vector sequence \\(\mathbf{Y}_{1:m}\\) given the sequence of encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}).$$ By Bayes' rule, this distribution can be factorized into a product of conditional probability distributions of the target vector \\(\mathbf{y}_i\\) given the encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\) and all previous target vectors \\(\mathbf{Y}_{0:i-1}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}). $$ The transformer-based decoder hereby maps the sequence of encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\) and all previous target vectors \\(\mathbf{Y}_{0:i-1}\\) to the *logit* vector \\(\mathbf{l}_i\\). The logit vector \\(\mathbf{l}_i\\) is then processed by the *softmax* operation to define the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\), just as it is done for RNN-based decoders. However, in contrast to RNN-based decoders, the distribution of the target vector \\(\mathbf{y}_i\\) is *explicitly* (or directly) conditioned on all previous target vectors \\(\mathbf{y}_0, \ldots, \mathbf{y}_{i-1}\\) as we will see later in more detail. The 0th target vector \\(\mathbf{y}_0\\) is hereby represented by a special "begin-of-sentence" \\(\text{BOS}\\) vector. Having defined the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\), we can now *auto-regressively* generate the output and thus define a mapping of an input sequence \\(\mathbf{X}_{1:n}\\) to an output sequence \\(\mathbf{Y}_{1:m}\\) at inference.
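The factorization above can also be checked numerically: with a pretrained translation model, the conditional log-probability of a full target sequence is simply the sum of the per-position log-probabilities under teacher forcing. The following is a rough sketch; for simplicity it tokenizes the German target with the same (shared-vocabulary) tokenizer, whereas a more careful implementation would use the target-side tokenizer.

```python
from transformers import MarianMTModel, MarianTokenizer
import torch

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids
target_ids = tokenizer("Ich will ein Auto kaufen", return_tensors="pt").input_ids  # ends with EOS

# decoder inputs: the start token (playing the role of BOS) followed by the target cut by its last vector
decoder_start = torch.tensor([[model.config.decoder_start_token_id]])
decoder_input_ids = torch.cat([decoder_start, target_ids[:, :-1]], dim=-1)

with torch.no_grad():
    logits = model(input_ids, decoder_input_ids=decoder_input_ids, return_dict=True).logits

# p(Y_{1:m} | X_{1:n}) = prod_i p(y_i | Y_{0:i-1}, X_bar)  ->  a sum in log-space
log_probs = torch.log_softmax(logits, dim=-1)
per_token = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
print("log p(Y | X) =", per_token.sum().item())
```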
Let\'s visualize the complete process of *auto-regressive* generation of *transformer-based* encoder-decoder models. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/EncoderDecoder.png) The transformer-based encoder is colored in green and the transformer-based decoder is colored in red. As in the previous section, we show how the English sentence \"I want to buy a car\", represented by \\(\mathbf{x}_1 = \text{I}\\), \\(\mathbf{x}_2 = \text{want}\\), \\(\mathbf{x}_3 = \text{to}\\), \\(\mathbf{x}_4 = \text{buy}\\), \\(\mathbf{x}_5 = \text{a}\\), \\(\mathbf{x}_6 = \text{car}\\), and \\(\mathbf{x}_7 = \text{EOS}\\) is translated into German: \"Ich will ein Auto kaufen\" defined as \\(\mathbf{y}_0 = \text{BOS}\\), \\(\mathbf{y}_1 = \text{Ich}\\), \\(\mathbf{y}_2 = \text{will}\\), \\(\mathbf{y}_3 = \text{ein}\\), \\(\mathbf{y}_4 = \text{Auto}, \mathbf{y}_5 = \text{kaufen}\\), and \\(\mathbf{y}_6=\text{EOS}\\). To begin with, the encoder processes the complete input sequence \\(\mathbf{X}_{1:7}\\) = \"I want to buy a car\" (represented by the light green vectors) to a contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:7}\\). *E.g.* \\(\mathbf{\overline{x}}_4\\) defines an encoding that depends not only on the input \\(\mathbf{x}_4\\) = \"buy\", but also on all other words \"I\", \"want\", \"to\", \"a\", \"car\" and \"EOS\", *i.e.* the context. Next, the input encoding \\(\mathbf{\overline{X}}_{1:7}\\) together with the BOS vector, *i.e.* \\(\mathbf{y}_0\\), is fed to the decoder. The decoder processes the inputs \\(\mathbf{\overline{X}}_{1:7}\\) and \\(\mathbf{y}_0\\) to the first logit \\(\mathbf{l}_1\\) (shown in darker red) to define the conditional distribution of the first target vector \\(\mathbf{y}_1\\): $$ p_{\theta_{enc, dec}}(\mathbf{y} | \mathbf{y}_0, \mathbf{X}_{1:7}) = p_{\theta_{enc, dec}}(\mathbf{y} | \text{BOS}, \text{I want to buy a car EOS}) = p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{\overline{X}}_{1:7}). $$ Next, the first target vector \\(\mathbf{y}_1\\) = \\(\text{Ich}\\) is sampled from the distribution (represented by the grey arrows) and can now be fed to the decoder again. The decoder now processes both \\(\mathbf{y}_0\\) = \"BOS\" and \\(\mathbf{y}_1\\) = \"Ich\" to define the conditional distribution of the second target vector \\(\mathbf{y}_2\\): $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich}, \mathbf{\overline{X}}_{1:7}). $$ We can sample again and produce the target vector \\(\mathbf{y}_2\\) = \"will\". We continue in auto-regressive fashion until at step 6 the EOS vector is sampled from the conditional distribution: $$ \text{EOS} \sim p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}). $$ And so on in auto-regressive fashion. It is important to understand that the encoder is only used in the first forward pass to map \\(\mathbf{X}_{1:n}\\) to \\(\mathbf{\overline{X}}_{1:n}\\). As of the second forward pass, the decoder can directly make use of the previously calculated encoding \\(\mathbf{\overline{X}}_{1:n}\\). For clarity, let\'s illustrate the first and the second forward pass for our example above. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/EncoderDecoder_step_by_step.png) As can be seen, only in step \\(i=1\\) do we have to encode \"I want to buy a car EOS\" to \\(\mathbf{\overline{X}}_{1:7}\\). 
At step \\(i=2\\), the contextualized encodings of "I want to buy a car EOS" are simply reused by the decoder. In 🤗Transformers, this auto-regressive generation is done under-the-hood when calling the `.generate()` method. Let's use one of our translation models to see this in action.

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# create ids of encoded input vectors
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# translate example
output_ids = model.generate(input_ids)[0]

# decode and print
print(tokenizer.decode(output_ids))
```

_Output:_

```
<pad> Ich will ein Auto kaufen
```

Calling `.generate()` does many things under-the-hood. First, it passes the `input_ids` to the encoder. Second, it passes a pre-defined token, which is the \\(\text{<pad>}\\) symbol in the case of `MarianMTModel`, along with the encoded `input_ids`, to the decoder. Third, it applies the beam search decoding mechanism to auto-regressively sample the next output word from the *last* decoder output \\({}^1\\). For more detail on how beam search decoding works, one is advised to read [this](https://huggingface.co/blog/how-to-generate) blog post. In the Appendix, we have included a code snippet that shows how a simple generation method can be implemented "from scratch". To fully understand how *auto-regressive* generation works under-the-hood, it is highly recommended to read the Appendix. To sum it up:

- The transformer-based encoder defines a mapping from the input sequence \\(\mathbf{X}_{1:n}\\) to a contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\).
- The transformer-based decoder defines the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\).
- Given an appropriate decoding mechanism, the output sequence \\(\mathbf{Y}_{1:m}\\) can auto-regressively be sampled from \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}), \forall i \in \{1, \ldots, m\}\\).

Great, now that we have gotten a general overview of how *transformer-based* encoder-decoder models work, we can dive deeper into both the encoder and decoder part of the model. More specifically, we will see exactly how the encoder makes use of the self-attention layer to yield a sequence of context-dependent vector encodings and how self-attention layers allow for efficient parallelization. Then, we will explain in detail how the self-attention layer works in the decoder model and how the decoder is conditioned on the encoder's output with *cross-attention* layers to define the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\). Along the way, it will become obvious how transformer-based encoder-decoder models solve the long-range dependencies problem of RNN-based encoder-decoder models.

------------------------------------------------------------------------

\\({}^1\\) In the case of `"Helsinki-NLP/opus-mt-en-de"`, the decoding parameters can be accessed [here](https://s3.amazonaws.com/models.huggingface.co/bert/Helsinki-NLP/opus-mt-en-de/config.json), where we can see that the model applies beam search with `num_beams=6`.
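Continuing the snippet above, the decoding defaults mentioned in the footnote can be inspected directly on the loaded configuration, and they can be overridden per call. Note that, depending on your library version, these generation defaults may also be exposed via `model.generation_config`.

```python
# decoding defaults shipped with the checkpoint (continuing the snippet above)
print("num_beams:", model.config.num_beams)
print("decoder_start_token_id:", model.config.decoder_start_token_id,
      "=", tokenizer.convert_ids_to_tokens(model.config.decoder_start_token_id))

# the defaults can be overridden explicitly, e.g. plain greedy decoding instead of beam search
greedy_output_ids = model.generate(input_ids, num_beams=1, do_sample=False)[0]
print(tokenizer.decode(greedy_output_ids, skip_special_tokens=True))
```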
## **Encoder**

As mentioned in the previous section, the *transformer-based* encoder maps the input sequence to a contextualized encoding sequence: $$ f_{\theta_{\text{enc}}}: \mathbf{X}_{1:n} \to \mathbf{\overline{X}}_{1:n}. $$ Taking a closer look at the architecture, the transformer-based encoder is a stack of residual _encoder blocks_. Each encoder block consists of a **bi-directional** self-attention layer, followed by two feed-forward layers. For simplicity, we disregard the normalization layers in this notebook. Also, we will not further discuss the role of the two feed-forward layers, but simply see it as a final vector-to-vector mapping required in each encoder block \\({}^1\\). The bi-directional self-attention layer puts each input vector \\(\mathbf{x'}_j, \forall j \in \{1, \ldots, n\}\\) into relation with all input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_n\\) and by doing so transforms the input vector \\(\mathbf{x'}_j\\) to a more "refined" contextual representation of itself, defined as \\(\mathbf{x''}_j\\). Thereby, the first encoder block transforms each input vector of the input sequence \\(\mathbf{X}_{1:n}\\) (shown in light green below) from a *context-independent* vector representation to a *context-dependent* vector representation, and the following encoder blocks further refine this contextual representation until the last encoder block outputs the final contextual encoding \\(\mathbf{\overline{X}}_{1:n}\\) (shown in darker green below). Let's visualize how the encoder processes the input sequence "I want to buy a car EOS" to a contextualized encoding sequence. Similar to RNN-based encoders, transformer-based encoders also add a special "end-of-sequence" input vector to the input sequence to hint to the model that the input vector sequence is finished \\({}^2\\).

![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/Encoder_block.png)

Our exemplary *transformer-based* encoder is composed of three encoder blocks, where the second encoder block is shown in more detail in the red box on the right for the first three input vectors \\(\mathbf{x}_1, \mathbf{x}_2\\) and \\(\mathbf{x}_3\\). The bi-directional self-attention mechanism is illustrated by the fully-connected graph in the lower part of the red box and the two feed-forward layers are shown in the upper part of the red box. As stated before, we will focus only on the bi-directional self-attention mechanism. As can be seen, each output vector of the self-attention layer \\(\mathbf{x''}_i, \forall i \in \{1, \ldots, 7\}\\) depends *directly* on *all* input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_7\\). This means, *e.g.*, that the input vector representation of the word "want", *i.e.* \\(\mathbf{x'}_2\\), is put into direct relation with the word "buy", *i.e.* \\(\mathbf{x'}_4\\), but also with the word "I", *i.e.* \\(\mathbf{x'}_1\\). The output vector representation of "want", *i.e.* \\(\mathbf{x''}_2\\), thus represents a more refined contextual representation for the word "want". Let's take a deeper look at how bi-directional self-attention works.
Each input vector \\(\mathbf{x'}_i\\) of an input sequence \\(\mathbf{X'}_{1:n}\\) of an encoder block is projected to a key vector \\(\mathbf{k}_i\\), value vector \\(\mathbf{v}_i\\) and query vector \\(\mathbf{q}_i\\) (shown in orange, blue, and purple respectively below) through three trainable weight matrices \\(\mathbf{W}_q, \mathbf{W}_v, \mathbf{W}_k\\): $$ \mathbf{q}_i = \mathbf{W}_q \mathbf{x'}_i,$$ $$ \mathbf{v}_i = \mathbf{W}_v \mathbf{x'}_i,$$ $$ \mathbf{k}_i = \mathbf{W}_k \mathbf{x'}_i, $$ $$ \forall i \in \{1, \ldots n \}.$$ Note that the **same** weight matrices are applied to each input vector \\(\mathbf{x}_i, \forall i \in \{1, \ldots, n\}\\). After projecting each input vector \\(\mathbf{x}_i\\) to a query, key, and value vector, each query vector \\(\mathbf{q}_j, \forall j \in \{1, \ldots, n\}\\) is compared to all key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\). The more similar a key vector \\(\mathbf{k}_l\\) is to the query vector \\(\mathbf{q}_j\\), the more important its corresponding value vector \\(\mathbf{v}_l\\) is for the output vector \\(\mathbf{x''}_j\\). More specifically, an output vector \\(\mathbf{x''}_j\\) is defined as the weighted sum of all value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_n\\) plus the input vector \\(\mathbf{x'}_j\\). Thereby, the weights are given by a softmax over the dot-product similarities between \\(\mathbf{q}_j\\) and the respective key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\), which is mathematically expressed by \\(\textbf{Softmax}(\mathbf{K}_{1:n}^\intercal \mathbf{q}_j)\\) as illustrated in the equation below. For a complete description of the self-attention layer, the reader is advised to take a look at [this](http://jalammar.github.io/illustrated-transformer/) blog post or the original [paper](https://arxiv.org/abs/1706.03762). Alright, this sounds quite complicated. Let's illustrate the bi-directional self-attention layer for one of the query vectors of our example above. For simplicity, it is assumed that our exemplary *transformer-based* encoder uses only a single attention head `config.num_heads = 1` and that no normalization is applied.

![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/encoder_detail.png)

On the left, the previously illustrated second encoder block is shown again and on the right, a detailed visualization of the bi-directional self-attention mechanism is given for the second input vector \\(\mathbf{x'}_2\\) that corresponds to the input word "want". At first, all input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_7\\) are projected to their respective query vectors \\(\mathbf{q}_1, \ldots, \mathbf{q}_7\\) (only the first three query vectors are shown in purple above), value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_7\\) (shown in blue), and key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_7\\) (shown in orange). The query vector \\(\mathbf{q}_2\\) is then multiplied by the transpose of all key vectors, *i.e.* \\(\mathbf{K}_{1:7}^{\intercal}\\), followed by the softmax operation to yield the _self-attention weights_. The self-attention weights are finally multiplied by the respective value vectors and the input vector \\(\mathbf{x'}_2\\) is added to output the "refined" representation of the word "want", *i.e.* \\(\mathbf{x''}_2\\) (shown in dark green on the right). The whole equation is illustrated in the upper part of the box on the right.
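The computation just described can be written out in a few lines. Below is a minimal toy sketch of the single-head, unscaled self-attention used in this notebook's simplified formulas; all tensors are random illustrative assumptions, and a real implementation would additionally split into multiple heads, scale by \\(\sqrt{d_k}\\) and apply layer normalization.

```python
import torch

torch.manual_seed(0)
n, d = 7, 16                        # toy sequence length and hidden size (illustrative values)
X_prime = torch.randn(n, d)         # input vectors x'_1, ..., x'_n of the encoder block (one per row)

# the same three trainable projection matrices are applied at every position
W_q, W_k, W_v = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)
Q, K, V = X_prime @ W_q.T, X_prime @ W_k.T, X_prime @ W_v.T   # q_i = W_q x'_i, etc.

# attention weights Softmax(K^T q_j), computed for all positions j at once
attn_weights = torch.softmax(Q @ K.T, dim=-1)   # shape (n, n), each row sums to 1

# x''_j = weighted sum of the value vectors plus the residual input x'_j
X_double_prime = attn_weights @ V + X_prime

print(X_double_prime.shape)   # torch.Size([7, 16]) -- the sequence length stays n, as stated above
```

Note that every output row depends on every input row, which is exactly the "bi-directional" property discussed next.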
The multiplication of \\(\mathbf{K}_{1:7}^{\intercal}\\) and \\(\mathbf{q}_2\\) thereby makes it possible to compare the vector representation of "want" to all other input vector representations "I", "to", "buy", "a", "car", "EOS" so that the self-attention weights mirror the importance of each of the other input vector representations \\(\mathbf{x'}_j \text{, with } j \ne 2\\) for the refined representation \\(\mathbf{x''}_2\\) of the word "want". To further understand the implications of the bi-directional self-attention layer, let's assume the following sentence is processed: "*The house is beautiful and well located in the middle of the city where it is easily accessible by public transport*". The word "it" refers to "house", which is 12 "positions away". In transformer-based encoders, the bi-directional self-attention layer performs a single mathematical operation to put the input vector of "house" into relation with the input vector of "it" (compare to the first illustration of this section). In contrast, in an RNN-based encoder a word that is 12 "positions away" would require at least 12 mathematical operations, meaning that an RNN-based encoder requires a linear number of mathematical operations. This makes it much harder for an RNN-based encoder to model long-range contextual representations. Also, it becomes clear that a transformer-based encoder is much less prone to lose important information than an RNN-based encoder-decoder model because the sequence length of the encoding is kept the same, *i.e.* \\(\textbf{len}(\mathbf{X}_{1:n}) = \textbf{len}(\mathbf{\overline{X}}_{1:n}) = n\\), while an RNN compresses the length from \\(\textbf{len}(\mathbf{X}_{1:n}) = n\\) to just \\(\textbf{len}(\mathbf{c}) = 1\\), which makes it very difficult for RNNs to effectively encode long-range dependencies between input words. In addition to making long-range dependencies more easily learnable, we can see that the Transformer architecture is able to process text in parallel. Mathematically, this can easily be shown by writing the self-attention formula as a product of query, key, and value matrices: $$\mathbf{X''}_{1:n} = \mathbf{V}_{1:n} \text{Softmax}(\mathbf{Q}_{1:n}^\intercal \mathbf{K}_{1:n}) + \mathbf{X'}_{1:n}. $$ The output \\(\mathbf{X''}_{1:n} = \mathbf{x''}_1, \ldots, \mathbf{x''}_n\\) is computed via a series of matrix multiplications and a softmax operation, which can be parallelized effectively. Note that in an RNN-based encoder model, the computation of the hidden state \\(\mathbf{c}\\) has to be done sequentially: compute the hidden state of the first input vector \\(\mathbf{x}_1\\), then compute the hidden state of the second input vector, which depends on the hidden state of the first, etc. The sequential nature of RNNs prevents effective parallelization and makes them much more inefficient compared to transformer-based encoder models on modern GPU hardware. Great, now we should have a better understanding of a) how transformer-based encoder models effectively model long-range contextual representations and b) how they efficiently process long sequences of input vectors. Now, let's code up a short example of the encoder part of our `MarianMT` encoder-decoder model to verify that the explained theory holds in practice.

------------------------------------------------------------------------

\\({}^1\\) An in-detail explanation of the role the feed-forward layers play in transformer-based models is out-of-scope for this notebook.
It is argued in [Yun et al. (2019)](https://arxiv.org/pdf/1912.10077.pdf) that feed-forward layers are crucial to map each contextual vector \\(\mathbf{x'}_i\\) individually to the desired output space, which the _self-attention_ layer does not manage to do on its own. It should be noted here that each output token \\(\mathbf{x'}\\) is processed by the same feed-forward layer. For more detail, the reader is advised to read the paper. \\({}^2\\) However, the EOS input vector does not have to be appended to the input sequence; appending it has simply been shown to improve performance in many cases. In contrast, the _0th_ \\(\text{BOS}\\) target vector of the transformer-based decoder is required as a starting input vector to predict the first target vector.

```python
from transformers import MarianMTModel, MarianTokenizer
import torch

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

embeddings = model.get_input_embeddings()

# create ids of encoded input vectors
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# pass input_ids to encoder
encoder_hidden_states = model.base_model.encoder(input_ids, return_dict=True).last_hidden_state

# change the input slightly and pass to encoder
input_ids_perturbed = tokenizer("I want to buy a house", return_tensors="pt").input_ids
encoder_hidden_states_perturbed = model.base_model.encoder(input_ids_perturbed, return_dict=True).last_hidden_state

# compare shape and encoding of first vector
print(f"Length of input embeddings {embeddings(input_ids).shape[1]}. Length of encoder_hidden_states {encoder_hidden_states.shape[1]}")

# compare values of word embedding of "I" for input_ids and perturbed input_ids
print("Is encoding for `I` equal to its perturbed version?: ", torch.allclose(encoder_hidden_states[0, 0], encoder_hidden_states_perturbed[0, 0], atol=1e-3))
```

_Outputs:_

```
Length of input embeddings 7. Length of encoder_hidden_states 7
Is encoding for `I` equal to its perturbed version?: False
```

We compare the length of the input word embeddings, *i.e.* `embeddings(input_ids)`, corresponding to \\(\mathbf{X}_{1:n}\\), with the length of the `encoder_hidden_states`, corresponding to \\(\mathbf{\overline{X}}_{1:n}\\). Also, we have forwarded the word sequence "I want to buy a car" and a slightly perturbed version "I want to buy a house" through the encoder to check if the first output encoding, corresponding to "I", differs when only the last word is changed in the input sequence. As expected, the output length of the input word embeddings and encoder output encodings, *i.e.* \\(\textbf{len}(\mathbf{X}_{1:n})\\) and \\(\textbf{len}(\mathbf{\overline{X}}_{1:n})\\), is equal. Second, it can be noted that the values of the encoded output vector of \\(\mathbf{\overline{x}}_1 = \text{"I"}\\) are different when the last word is changed from "car" to "house". This however should not come as a surprise if one has understood bi-directional self-attention. On a side-note, _autoencoding_ models, such as BERT, have the exact same architecture as _transformer-based_ encoder models. _Autoencoding_ models leverage this architecture for massive self-supervised pre-training on open-domain text data so that they can map any word sequence to a deep bi-directional representation. In [Devlin et al.
(2018)](https://arxiv.org/abs/1810.04805), the authors show that a pre-trained BERT model with a single task-specific classification layer on top can achieve SOTA results on eleven NLP tasks. All *autoencoding* models of 🤗Transformers can be found [here](https://huggingface.co/transformers/model_summary.html#autoencoding-models).

## **Decoder**

As mentioned in the *Encoder-Decoder* section, the *transformer-based* decoder defines the conditional probability distribution of a target sequence given the contextualized encoding sequence: $$ p_{\theta_{dec}}(\mathbf{Y}_{1: m} | \mathbf{\overline{X}}_{1:n}), $$ which by Bayes' rule can be decomposed into a product of conditional distributions of the next target vector given the contextualized encoding sequence and all previous target vectors: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}). $$ Let's first understand how the transformer-based decoder defines a probability distribution. The transformer-based decoder is a stack of *decoder blocks* followed by a dense layer, the "LM head". The stack of decoder blocks maps the contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\) and a target vector sequence prepended by the \\(\text{BOS}\\) vector and cut to the last target vector, *i.e.* \\(\mathbf{Y}_{0:m-1}\\), to an encoded sequence of target vectors \\(\mathbf{\overline{Y}}_{0: m-1}\\). Then, the "LM head" maps the encoded sequence of target vectors \\(\mathbf{\overline{Y}}_{0: m-1}\\) to a sequence of logit vectors \\(\mathbf{L}_{1:m} = \mathbf{l}_1, \ldots, \mathbf{l}_m\\), where the dimensionality of each logit vector \\(\mathbf{l}_i\\) corresponds to the size of the vocabulary. This way, for each \\(i \in \{1, \ldots, m\}\\) a probability distribution over the whole vocabulary can be obtained by applying a softmax operation on \\(\mathbf{l}_i\\). These distributions define the conditional distribution: $$p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}), \forall i \in \{1, \ldots, m\},$$ respectively. The "LM head" is often tied to the transpose of the word embedding matrix, *i.e.* \\(\mathbf{W}_{\text{emb}}^{\intercal} = \left[\mathbf{y}^1, \ldots, \mathbf{y}^{\text{vocab}}\right]^{\intercal}\\) \\({}^1\\). Intuitively this means that for all \\(i \in \{0, \ldots, m - 1\}\\) the "LM Head" layer compares the encoded output vector \\(\mathbf{\overline{y}}_i\\) to all word embeddings in the vocabulary \\(\mathbf{y}^1, \ldots, \mathbf{y}^{\text{vocab}}\\) so that the logit vector \\(\mathbf{l}_{i+1}\\) represents the similarity scores between the encoded output vector and each word embedding. The softmax operation simply transforms the similarity scores to a probability distribution. For each \\(i \in \{1, \ldots, m\}\\), the following equations hold: $$ p_{\theta_{dec}}(\mathbf{y} | \mathbf{\overline{X}}_{1:n}, \mathbf{Y}_{0:i-1})$$ $$ = \text{Softmax}(f_{\theta_{\text{dec}}}(\mathbf{\overline{X}}_{1:n}, \mathbf{Y}_{0:i-1}))$$ $$ = \text{Softmax}(\mathbf{W}_{\text{emb}}^{\intercal} \mathbf{\overline{y}}_{i-1})$$ $$ = \text{Softmax}(\mathbf{l}_i).
$$ Putting it all together, in order to model the conditional distribution of a target vector sequence \\(\mathbf{Y}_{1: m}\\), the target vectors \\(\mathbf{Y}_{1:m-1}\\) prepended by the special \\(\text{BOS}\\) vector, *i.e.* \\(\mathbf{y}_0\\), are first mapped together with the contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\) to the logit vector sequence \\(\mathbf{L}_{1:m}\\). Consequently, each logit vector \\(\mathbf{l}_i\\) is transformed into a conditional probability distribution of the target vector \\(\mathbf{y}_i\\) using the softmax operation. Finally, the conditional probabilities of all target vectors \\(\mathbf{y}_1, \ldots, \mathbf{y}_m\\) are multiplied together to yield the conditional probability of the complete target vector sequence: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}).$$ In contrast to transformer-based encoders, in transformer-based decoders, the encoded output vector \\(\mathbf{\overline{y}}_i\\) should be a good representation of the *next* target vector \\(\mathbf{y}_{i+1}\\) and not of the input vector itself. Additionally, the encoded output vector \\(\mathbf{\overline{y}}_i\\) should be conditioned on the whole contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\). To meet these requirements, each decoder block consists of a **uni-directional** self-attention layer, followed by a **cross-attention** layer and two feed-forward layers \\({}^2\\). The uni-directional self-attention layer puts each of its input vectors \\(\mathbf{y'}_j\\) only into relation with all previous input vectors \\(\mathbf{y'}_i, \text{ with } i \le j\\), for all \\(j \in \{0, \ldots, m-1\}\\), to model the probability distribution of the next target vectors. The cross-attention layer puts each of its input vectors \\(\mathbf{y''}_j\\) into relation with all contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\) to condition the probability distribution of the next target vectors on the input of the encoder as well. Alright, let's visualize the *transformer-based* decoder for our English to German translation example.

![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/encoder_decoder_detail.png)

We can see that the decoder maps the input \\(\mathbf{Y}_{0:5}\\) "BOS", "Ich", "will", "ein", "Auto", "kaufen" (shown in light red) together with the contextualized sequence of "I", "want", "to", "buy", "a", "car", "EOS", *i.e.* \\(\mathbf{\overline{X}}_{1:7}\\) (shown in dark green) to the logit vectors \\(\mathbf{L}_{1:6}\\) (shown in dark red). Applying a softmax operation on each \\(\mathbf{l}_1, \mathbf{l}_2, \ldots, \mathbf{l}_6\\) can thus define the conditional probability distributions: $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{\overline{X}}_{1:7}), $$ $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich}, \mathbf{\overline{X}}_{1:7}), $$ $$ \ldots, $$ $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}). $$ The overall conditional probability of: $$ p_{\theta_{dec}}(\text{Ich will ein Auto kaufen EOS} | \mathbf{\overline{X}}_{1:n})$$ can therefore be computed as the following product: $$ p_{\theta_{dec}}(\text{Ich} | \text{BOS}, \mathbf{\overline{X}}_{1:7}) \times \ldots \times p_{\theta_{dec}}(\text{EOS} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}).
$$ The red box on the right shows a decoder block for the first three target vectors \\(\mathbf{y}_0, \mathbf{y}_1, \mathbf{y}_2\\). In the lower part, the uni-directional self-attention mechanism is illustrated and in the middle, the cross-attention mechanism is illustrated. Let's first focus on uni-directional self-attention. As in bi-directional self-attention, in uni-directional self-attention, the query vectors \\(\mathbf{q}_0, \ldots, \mathbf{q}_{m-1}\\) (shown in purple below), key vectors \\(\mathbf{k}_0, \ldots, \mathbf{k}_{m-1}\\) (shown in orange below), and value vectors \\(\mathbf{v}_0, \ldots, \mathbf{v}_{m-1}\\) (shown in blue below) are projected from their respective input vectors \\(\mathbf{y'}_0, \ldots, \mathbf{y'}_{m-1}\\) (shown in light red below). However, in uni-directional self-attention, each query vector \\(\mathbf{q}_i\\) is compared *only* to its respective key vector and all previous ones, namely \\(\mathbf{k}_0, \ldots, \mathbf{k}_i\\), to yield the respective *attention weights*. This prevents an output vector \\(\mathbf{y''}_j\\) (shown in dark red below) from including any information about a following input vector \\(\mathbf{y'}_i, \text{ with } i > j\\), for all \\(j \in \{0, \ldots, m - 1 \}\\). As is the case in bi-directional self-attention, the attention weights are then multiplied by their respective value vectors and summed together. We can summarize uni-directional self-attention as follows: $$\mathbf{y''}_i = \mathbf{V}_{0: i} \textbf{Softmax}(\mathbf{K}_{0: i}^\intercal \mathbf{q}_i) + \mathbf{y'}_i. $$ Note that the index range of the key and value vectors is \\(0:i\\) instead of \\(0: m-1\\), which would be the range of the key vectors in bi-directional self-attention. Let's illustrate uni-directional self-attention for the input vector \\(\mathbf{y'}_1\\) for our example above.

![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/causal_attn.png)

As can be seen, \\(\mathbf{y''}_1\\) only depends on \\(\mathbf{y'}_0\\) and \\(\mathbf{y'}_1\\). Therefore, we put the vector representation of the word "Ich", *i.e.* \\(\mathbf{y'}_1\\), only into relation with itself and the "BOS" target vector, *i.e.* \\(\mathbf{y'}_0\\), but **not** with the vector representation of the word "will", *i.e.* \\(\mathbf{y'}_2\\). So why is it important that we use uni-directional self-attention in the decoder instead of bi-directional self-attention? As stated above, a transformer-based decoder defines a mapping from a sequence of input vectors \\(\mathbf{Y}_{0: m-1}\\) to the logits corresponding to the **next** decoder input vectors, namely \\(\mathbf{L}_{1:m}\\). In our example, this means, *e.g.*, that the input vector \\(\mathbf{y}_1\\) = "Ich" is mapped to the logit vector \\(\mathbf{l}_2\\), which is then used to predict the input vector \\(\mathbf{y}_2\\). Thus, if \\(\mathbf{y'}_1\\) had access to the following input vectors \\(\mathbf{Y'}_{2:5}\\), the decoder would simply copy the vector representation of "will", *i.e.* \\(\mathbf{y'}_2\\), to be its output \\(\mathbf{y''}_1\\). This would be forwarded to the last layer so that the encoded output vector \\(\mathbf{\overline{y}}_1\\) would essentially just correspond to the vector representation \\(\mathbf{y}_2\\).
This is obviously disadvantageous as the transformer-based decoder would never learn to predict the next word given all previous words, but would just copy the target vector \\(\mathbf{y}_i\\) through the network to \\(\mathbf{\overline{y}}_{i-1}\\) for all \\(i \in \{1, \ldots, m \}\\). In order to define a conditional distribution of the next target vector, the distribution cannot be conditioned on the next target vector itself. It does not make much sense to predict \\(\mathbf{y}_i\\) from \\(p(\mathbf{y} | \mathbf{Y}_{0:i}, \mathbf{\overline{X}})\\) because the distribution is conditioned on the target vector it is supposed to model. The uni-directional self-attention architecture, therefore, allows us to define a *causal* probability distribution, which is necessary to effectively model a conditional distribution of the next target vector. Great! Now we can move to the layer that connects the encoder and decoder - the *cross-attention* mechanism! The cross-attention layer takes two vector sequences as inputs: the outputs of the uni-directional self-attention layer, *i.e.* \\(\mathbf{Y''}_{0: m-1}\\), and the contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\). As in the self-attention layer, the query vectors \\(\mathbf{q}_0, \ldots, \mathbf{q}_{m-1}\\) are projections of the output vectors of the previous layer, *i.e.* \\(\mathbf{Y''}_{0: m-1}\\). However, the key and value vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_{n}\\) and \\(\mathbf{v}_1, \ldots, \mathbf{v}_{n}\\) are projections of the contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\). Having defined key, value, and query vectors, a query vector \\(\mathbf{q}_i\\) is then compared to *all* key vectors and the corresponding score is used to weight the respective value vectors, just as is the case for *bi-directional* self-attention, to give the output vector \\(\mathbf{y'''}_i\\) for all \\(i \in \{0, \ldots, m-1\}\\). Cross-attention can be summarized as follows: $$ \mathbf{y'''}_i = \mathbf{V}_{1:n} \textbf{Softmax}(\mathbf{K}_{1: n}^\intercal \mathbf{q}_i) + \mathbf{y''}_i. $$ Note that the index range of the key and value vectors is \\(1:n\\), corresponding to the number of contextualized encoding vectors. Let's visualize the cross-attention mechanism for the input vector \\(\mathbf{y''}_1\\) for our example above.

![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/cross_attention.png)

We can see that the query vector \\(\mathbf{q}_1\\) (shown in purple) is derived from \\(\mathbf{y''}_1\\) (shown in red) and therefore relies on a vector representation of the word "Ich". The query vector \\(\mathbf{q}_1\\) is then compared to the key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_7\\) (shown in yellow) corresponding to the contextual encoding representation of all encoder input vectors \\(\mathbf{X}_{1:n}\\) = "I want to buy a car EOS". This puts the vector representation of "Ich" into direct relation with all encoder input vectors. Finally, the attention weights are multiplied by the value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_7\\) (shown in turquoise) and added to the input vector \\(\mathbf{y''}_1\\) to yield the output vector \\(\mathbf{y'''}_1\\) (shown in dark red). So intuitively, what happens here exactly? Each output vector \\(\mathbf{y'''}_i\\) is a weighted sum of all value projections of the encoder inputs \\(\mathbf{v}_{1}, \ldots, \mathbf{v}_7\\) plus the input vector itself \\(\mathbf{y''}_i\\) (*cf.* the illustrated formula above).
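To make the two attention variants of the decoder concrete, here is a toy sketch mirroring the two formulas above: a causal mask implements the \\(0:i\\) restriction of uni-directional self-attention, while cross-attention draws its keys and values from the encoder outputs. All tensors and sizes are random, illustrative assumptions (single head, no scaling, no normalization).

```python
import torch

torch.manual_seed(0)
n, m, d = 7, 6, 16                 # toy sizes: n encoder positions, m decoder positions, hidden size d
X_bar = torch.randn(n, d)          # contextualized encoder outputs x̄_1, ..., x̄_n
Y_prime = torch.randn(m, d)        # decoder block inputs y'_0, ..., y'_{m-1}

W = {name: torch.randn(d, d) for name in ["q", "k", "v", "q_cross", "k_cross", "v_cross"]}

# --- uni-directional self-attention: queries, keys and values all come from Y' ---
Q, K, V = Y_prime @ W["q"], Y_prime @ W["k"], Y_prime @ W["v"]
scores = Q @ K.T                                            # (m, m)
causal_mask = torch.triu(torch.ones(m, m, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(causal_mask, float("-inf"))     # position i only sees positions <= i
Y_double_prime = torch.softmax(scores, dim=-1) @ V + Y_prime

# --- cross-attention: queries from Y'', keys and values from the encoder outputs X̄ ---
Qc = Y_double_prime @ W["q_cross"]
Kc, Vc = X_bar @ W["k_cross"], X_bar @ W["v_cross"]
cross_weights = torch.softmax(Qc @ Kc.T, dim=-1)            # (m, n): every decoder position sees all n encoder vectors
Y_triple_prime = cross_weights @ Vc + Y_double_prime

print(Y_triple_prime.shape)                                 # torch.Size([6, 16])
```

The masking step is the only difference between the encoder's and the decoder's self-attention in this simplified view.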
The key mechanism to understand is the following: the more similar the query projection of the *decoder input vector* \\(\mathbf{q}_i\\) is to the key projection of an *encoder input vector* \\(\mathbf{k}_j\\), the more important the value projection of that encoder input vector \\(\mathbf{v}_j\\) becomes. In loose terms this means: the more "related" a decoder input representation is to an encoder input representation, the more that encoder input representation influences the decoder output representation. Cool! Now we can see how this architecture nicely conditions each output vector \\(\mathbf{y'''}_i\\) on the interaction between the encoder input vectors \\(\mathbf{\overline{X}}_{1:n}\\) and the input vector \\(\mathbf{y''}_i\\). Another important observation at this point is that the architecture is completely independent of the number \\(n\\) of contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\) on which the output vector \\(\mathbf{y'''}_i\\) is conditioned. All projection matrices \\(\mathbf{W}^{\text{cross}}_{k}\\) and \\(\mathbf{W}^{\text{cross}}_{v}\\) used to derive the key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\) and the value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_n\\), respectively, are shared across all positions \\(1, \ldots, n\\), and all value vectors \\( \mathbf{v}_1, \ldots, \mathbf{v}_n \\) are summed together into a single weighted average vector. Now it also becomes obvious why the transformer-based decoder does not suffer from the long-range dependency problem that the RNN-based decoder suffers from. Because each decoder logit vector is *directly* dependent on every single encoded output vector, the number of mathematical operations to compare the first encoded output vector and the last decoder logit vector amounts essentially to just one. To conclude, the uni-directional self-attention layer is responsible for conditioning each output vector on all previous decoder input vectors and the current input vector, while the cross-attention layer is responsible for further conditioning each output vector on all encoded input vectors. To verify our theoretical understanding, let's continue our code example from the encoder section above.

------------------------------------------------------------------------

\\({}^1\\) The word embedding matrix \\(\mathbf{W}_{\text{emb}}\\) gives each input word a unique *context-independent* vector representation. This matrix is often tied to (*i.e.* reused as) the "LM Head" layer. However, the "LM Head" layer can very well consist of a completely independent "encoded vector-to-logit" weight mapping. \\({}^2\\) Again, an in-detail explanation of the role the feed-forward layers play in transformer-based models is out-of-scope for this notebook. It is argued in [Yun et al. (2019)](https://arxiv.org/pdf/1912.10077.pdf) that feed-forward layers are crucial to map each contextual vector \\(\mathbf{x'}_i\\) individually to the desired output space, which the *self-attention* layer does not manage to do on its own. It should be noted here that each output token \\(\mathbf{x'}\\) is processed by the same feed-forward layer. For more detail, the reader is advised to read the paper.
```python from transformers import MarianMTModel, MarianTokenizer import torch tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de") embeddings = model.get_input_embeddings() # create token ids for encoder input input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids # pass input token ids to encoder encoder_output_vectors = model.base_model.encoder(input_ids, return_dict=True).last_hidden_state # create token ids for decoder input decoder_input_ids = tokenizer("<pad> Ich will ein", return_tensors="pt", add_special_tokens=False).input_ids # pass decoder input ids and encoded input vectors to decoder decoder_output_vectors = model.base_model.decoder(decoder_input_ids, encoder_hidden_states=encoder_output_vectors).last_hidden_state # derive embeddings by multiplying decoder outputs with embedding weights lm_logits = torch.nn.functional.linear(decoder_output_vectors, embeddings.weight, bias=model.final_logits_bias) # change the decoder input slightly decoder_input_ids_perturbed = tokenizer("<pad> Ich will das", return_tensors="pt", add_special_tokens=False).input_ids decoder_output_vectors_perturbed = model.base_model.decoder(decoder_input_ids_perturbed, encoder_hidden_states=encoder_output_vectors).last_hidden_state lm_logits_perturbed = torch.nn.functional.linear(decoder_output_vectors_perturbed, embeddings.weight, bias=model.final_logits_bias) # compare shape and encoding of first vector print(f"Shape of decoder input vectors {embeddings(decoder_input_ids).shape}. Shape of decoder logits {lm_logits.shape}") # compare values of word embedding of "I" for input_ids and perturbed input_ids print("Is encoding for `Ich` equal to its perturbed version?: ", torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3)) ``` _Output:_ ``` Shape of decoder input vectors torch.Size([1, 5, 512]). Shape of decoder logits torch.Size([1, 5, 58101]) Is encoding for `Ich` equal to its perturbed version?: True ``` We compare the output shape of the decoder input word embeddings, *i.e.* `embeddings(decoder_input_ids)` (corresponds to \\(\mathbf{Y}_{0: 4}\\), here `<pad>` corresponds to BOS and \"Ich will das\" is tokenized to 4 tokens) with the dimensionality of the `lm_logits`(corresponds to \\(\mathbf{L}_{1:5}\\)). Also, we have passed the word sequence \"`<pad>` Ich will ein\" and a slightly perturbated version \"`<pad>` Ich will das\" together with the `encoder_output_vectors` through the decoder to check if the second `lm_logit`, corresponding to \"Ich\", differs when only the last word is changed in the input sequence (\"ein\" -\> \"das\"). As expected the output shapes of the decoder input word embeddings and lm\_logits, *i.e.* the dimensionality of \\(\mathbf{Y}_{0: 4}\\) and \\(\mathbf{L}_{1:5}\\) are different in the last dimension. While the sequence length is the same (=5), the dimensionality of a decoder input word embedding corresponds to `model.config.hidden_size`, whereas the dimensionality of a `lm_logit` corresponds to the vocabulary size `model.config.vocab_size`, as explained above. Second, it can be noted that the values of the encoded output vector of \\(\mathbf{l}_1 = \text{"Ich"}\\) are the same when the last word is changed from \"ein\" to \"das\". This however should not come as a surprise if one has understood uni-directional self-attention. 
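As a small complementary check (continuing the snippet above), we would expect the logit at the *changed* position itself to differ, since each decoder position attends to its own input, and we can also read off the most likely next word from the last logit.

```python
# the logit at the last decoder position *does* depend on the changed input token ("ein" -> "das")
print("Is the last logit equal to its perturbed version?: ",
      torch.allclose(lm_logits[0, -1], lm_logits_perturbed[0, -1], atol=1e-3))

# the last logit defines the distribution of the *next* word after "<pad> Ich will ein"
next_token_id = int(lm_logits[0, -1].argmax())
print("Most likely next word:", tokenizer.decode([next_token_id]))
```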
On a final side-note, _auto-regressive_ models, such as GPT2, have the same architecture as _transformer-based_ decoder models **if** one removes the cross-attention layer because stand-alone auto-regressive models are not conditioned on any encoder outputs. So auto-regressive models are essentially the same as *auto-encoding* models but replace bi-directional attention with uni-directional attention. These models can also be pre-trained on massive open-domain text data to show impressive performances on natural language generation (NLG) tasks. In [Radford et al. (2019)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), the authors show that a pre-trained GPT2 model can achieve SOTA or close to SOTA results on a variety of NLG tasks without much fine-tuning. All *auto-regressive* models of 🤗Transformers can be found [here](https://huggingface.co/transformers/model_summary.html#autoregressive-models). Alright, that\'s it! Now, you should have gotten a good understanding of *transformer-based* encoder-decoder models and how to use them with the 🤗Transformers library. Thanks a lot to Victor Sanh, Sasha Rush, Sam Shleifer, Oliver Åstrand, ‪Ted Moskovitz and Kristian Kyvik for giving valuable feedback. ## **Appendix** As mentioned above, the following code snippet shows how one can program a simple generation method for *transformer-based* encoder-decoder models. Here, we implement a simple *greedy* decoding method using `torch.argmax` to sample the target vector. ```python from transformers import MarianMTModel, MarianTokenizer import torch tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de") # create ids of encoded input vectors input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids # create BOS token decoder_input_ids = tokenizer("<pad>", add_special_tokens=False, return_tensors="pt").input_ids assert decoder_input_ids[0, 0].item() == model.config.decoder_start_token_id, "`decoder_input_ids` should correspond to `model.config.decoder_start_token_id`" # STEP 1 # pass input_ids to encoder and to decoder and pass BOS token to decoder to retrieve first logit outputs = model(input_ids, decoder_input_ids=decoder_input_ids, return_dict=True) # get encoded sequence encoded_sequence = (outputs.encoder_last_hidden_state,) # get logits lm_logits = outputs.logits # sample last token with highest prob next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1) # concat decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1) # STEP 2 # reuse encoded_inputs and pass BOS + "Ich" to decoder to second logit lm_logits = model(None, encoder_outputs=encoded_sequence, decoder_input_ids=decoder_input_ids, return_dict=True).logits # sample last token with highest prob again next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1) # concat again decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1) # STEP 3 lm_logits = model(None, encoder_outputs=encoded_sequence, decoder_input_ids=decoder_input_ids, return_dict=True).logits next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1) decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1) # let's see what we have generated so far! print(f"Generated so far: {tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)}") # This can be written in a loop as well. 
```

_Outputs:_

```
Generated so far: Ich will ein
```

In this code example, we show exactly what was described earlier. We pass an input "I want to buy a car" together with the \\(\text{BOS}\\) token to the encoder-decoder model and sample from the first logit \\(\mathbf{l}_1\\) (*i.e.* the first `lm_logits` line). Hereby, our sampling strategy is simple: we greedily choose the next decoder input vector that has the highest probability. In an auto-regressive fashion, we then pass the sampled decoder input vector together with the previous inputs to the encoder-decoder model and sample again. We repeat this a third time. As a result, the model has generated the words "Ich will ein". The result is spot-on - this is the beginning of the correct translation of the input. In practice, more complicated decoding methods are used to sample the `lm_logits`, most of which are covered in [this](https://huggingface.co/blog/how-to-generate) blog post.
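As the last comment in the appendix snippet hints, the three manual steps can indeed be folded into a loop. Below is a minimal greedy sketch that reuses `model`, `tokenizer`, `input_ids` and `torch` from the snippet above; the step limit of 20 is an arbitrary safeguard.

```python
# greedy decoding written as a loop (reusing model, tokenizer, input_ids and torch from above)
decoder_input_ids = tokenizer("<pad>", add_special_tokens=False, return_tensors="pt").input_ids

# encode the input once, then reuse the encoder outputs at every decoding step
encoder_outputs = (model.base_model.encoder(input_ids, return_dict=True).last_hidden_state,)

for _ in range(20):  # arbitrary upper bound on the number of generated tokens
    lm_logits = model(None, encoder_outputs=encoder_outputs,
                      decoder_input_ids=decoder_input_ids, return_dict=True).logits
    next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1)
    decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)
    if next_decoder_input_ids.item() == model.config.eos_token_id:
        break

print(f"Generated: {tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)}")
```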
Hyperparameter Search with Transformers and Ray Tune
ray-project
November 2, 2020
ray-tune
open-source-collab, nlp
https://huggingface.co/blog/ray-tune
# Hyperparameter Search with Transformers and Ray Tune ##### A guest blog post by Richard Liaw from the Anyscale team With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face [transformers](https://github.com/huggingface/transformers) library has become critical to the success and growth of natural language processing today. For any machine learning model to achieve good performance, users often need to implement some form of parameter tuning. Yet, nearly everyone ([1](https://medium.com/@prakashakshay90/fine-tuning-bert-model-using-pytorch-f34148d58a37), [2](https://mccormickml.com/2019/07/22/BERT-fine-tuning/#advantages-of-fine-tuning)) either ends up disregarding hyperparameter tuning or opting to do a simplistic grid search with a small search space. However, simple experiments are able to show the benefit of using an advanced tuning technique. Below is [a recent experiment run on a BERT](https://medium.com/distributed-computing-with-ray/hyperparameter-optimization-for-transformers-a-guide-c4e32c6c989b) model from [Hugging Face transformers](https://github.com/huggingface/transformers) on the [RTE dataset](https://aclweb.org/aclwiki/Textual_Entailment_Resource_Pool). Genetic optimization techniques like [PBT](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining) can provide large performance improvements compared to standard hyperparameter optimization techniques. <table> <tr> <td><strong>Algorithm</strong> </td> <td><strong>Best Val Acc.</strong> </td> <td><strong>Best Test Acc.</strong> </td> <td><strong>Total GPU min</strong> </td> <td><strong>Total $ cost</strong> </td> </tr> <tr> <td>Grid Search </td> <td>74% </td> <td>65.4% </td> <td>45 min </td> <td>$2.30 </td> </tr> <tr> <td>Bayesian Optimization +Early Stop </td> <td>77% </td> <td>66.9% </td> <td>104 min </td> <td>$5.30 </td> </tr> <tr> <td>Population-based Training </td> <td>78% </td> <td>70.5% </td> <td>48 min </td> <td>$2.45 </td> </tr> </table> If you’re leveraging [Transformers](https://github.com/huggingface/transformers), you’ll want to have a way to easily access powerful hyperparameter tuning solutions without giving up the customizability of the Transformers framework. ![alt_text](/blog/assets/06_ray_tune/ray-hf.jpg "image_tooltip") In the Transformers 3.1 release, [Hugging Face Transformers](https://github.com/huggingface/transformers) and [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) teamed up to provide a simple yet powerful integration. [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) is a popular Python library for hyperparameter tuning that provides many state-of-the-art algorithms out of the box, along with integrations with the best-of-class tooling, such as [Weights and Biases](https://wandb.ai/) and tensorboard. To demonstrate this new [Hugging Face](https://github.com/huggingface/transformers) + [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) integration, we leverage the [Hugging Face Datasets library](https://github.com/huggingface/datasets) to fine tune BERT on [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398). To run this example, please first run: **`pip install "ray[tune]" transformers datasets scipy sklearn torch`** Simply plug in one of Ray’s standard tuning algorithms by just adding a few lines of code. 
```python from datasets import load_dataset, load_metric from transformers import (AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments) tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') dataset = load_dataset('glue', 'mrpc') metric = load_metric('glue', 'mrpc') def encode(examples): outputs = tokenizer( examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) def model_init(): return AutoModelForSequenceClassification.from_pretrained( 'distilbert-base-uncased', return_dict=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) # Evaluate during training and a bit more often # than the default to be able to prune bad trials early. # Disabling tqdm is a matter of preference. training_args = TrainingArguments( "test", evaluation_strategy="steps", eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) # Default objective is the sum of all metrics # when metrics are provided, so we have to maximize it. trainer.hyperparameter_search( direction="maximize", backend="ray", n_trials=10 # number of trials ) ``` By default, each trial will utilize 1 CPU, and optionally 1 GPU if available. You can leverage multiple [GPUs for a parallel hyperparameter search](https://docs.ray.io/en/latest/tune/user-guide.html#resources-parallelism-gpus-distributed) by passing in a `resources_per_trial` argument. You can also easily swap different parameter tuning algorithms such as [HyperBand](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler), [Bayesian Optimization](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html), [Population-Based Training](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining): To run this example, first run: **`pip install hyperopt`** ```python from ray.tune.suggest.hyperopt import HyperOptSearch from ray.tune.schedulers import ASHAScheduler trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) best_trial = trainer.hyperparameter_search( direction="maximize", backend="ray", # Choose among many libraries: # https://docs.ray.io/en/latest/tune/api_docs/suggestion.html search_alg=HyperOptSearch(metric="objective", mode="max"), # Choose among schedulers: # https://docs.ray.io/en/latest/tune/api_docs/schedulers.html scheduler=ASHAScheduler(metric="objective", mode="max")) ``` It also works with [Weights and Biases](https://wandb.ai/) out of the box! 
![alt_text](/blog/assets/06_ray_tune/ray-wandb.png "image_tooltip") ### Try it out today: * `pip install -U ray` * `pip install -U transformers datasets` * Check out the [Hugging Face documentation](https://huggingface.co/transformers/) and [Discussion thread](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/10) * [End-to-end example of using Hugging Face hyperparameter search for text classification](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) If you liked this blog post, be sure to check out: * [Transformers + GLUE + Ray Tune example](https://docs.ray.io/en/latest/tune/examples/index.html#hugging-face-huggingface-transformers) * Our [Weights and Biases report](https://wandb.ai/amogkam/transformers/reports/Hyperparameter-Optimization-for-Huggingface-Transformers--VmlldzoyMTc2ODI) on Hyperparameter Optimization for Transformers * The [simplest way to serve your NLP model](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) from scratch
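As a final, concrete illustration of the `resources_per_trial` argument and of overriding the default search space mentioned above, a call might look like the following sketch. It assumes the `trainer` defined earlier, and the resource numbers and value ranges are placeholders to adapt to your own hardware and task.

```python
from ray import tune

best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=10,
    # give every trial two CPU cores and one GPU (adjust to your machine)
    resources_per_trial={"cpu": 2, "gpu": 1},
    # optionally override the default search space
    hp_space=lambda _: {
        "learning_rate": tune.loguniform(1e-5, 5e-5),
        "per_device_train_batch_size": tune.choice([16, 32]),
        "num_train_epochs": tune.choice([2, 3, 4]),
    },
)
print(best_trial)
```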
Porting fairseq wmt19 translation system to transformers
stas
November 3, 2020
porting-fsmt
open-source-collab, nlp
https://huggingface.co/blog/porting-fsmt
" # Porting fairseq wmt19 translation system to transformers ##### A guest blog post by Stas Bekma(...TRUNCATED)
Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models
patrickvonplaten
November 09, 2020
warm-starting-encoder-decoder
guide, nlp
https://huggingface.co/blog/warm-starting-encoder-decoder
" # Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models <a target=\"_blan(...TRUNCATED)
How we sped up transformer inference 100x for 🤗 API customers
Narsil
January 18, 2021
accelerated-inference
analysis, nlp
https://huggingface.co/blog/accelerated-inference
" # How we sped up transformer inference 100x for 🤗 API customers 🤗 Transformers has become (...TRUNCATED)
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
stas
January 19, 2021
zero-deepspeed-fairscale
guide
https://huggingface.co/blog/zero-deepspeed-fairscale
" # Fit More and Train Faster With ZeRO via DeepSpeed and FairScale ##### A guest blog post by Hug(...TRUNCATED)
