# gpt2

---
language: en
tags:
- exbert
license: mit
---


# GPT-2

Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

Pretrained model on the English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).

Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.

## Model description

GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.

Concretely, the inputs are sequences of continuous text of a certain length and the targets are the same sequences
shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to make sure
the predictions for token i only use the inputs from 1 to i and not the future tokens.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a
prompt.
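As a minimal illustration of this objective (a sketch added for this card, not part of the original OpenAI or Hugging Face material), the snippet below passes a short placeholder prompt to the language-modeling head and lets the library build the shifted targets; the prompt and the commented values are assumptions for illustration:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Arbitrary placeholder prompt.
inputs = tokenizer("Paris is the capital of", return_tensors='pt')

with torch.no_grad():
    # Passing the input ids as labels makes the model compute the CLM loss:
    # the labels are shifted internally so the prediction at position i is
    # scored against the token at position i + 1, using only tokens 1..i.
    outputs = model(**inputs, labels=inputs['input_ids'])

print(outputs.loss)                            # average next-token cross-entropy
next_id = int(outputs.logits[0, -1].argmax())  # greedy guess for the next token
print(tokenizer.decode(next_id))
```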
## Intended uses & limitations

You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
 {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
 {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
 {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
 {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
 {'generated_text': 'The White man worked as a maniser of the'},
 {'generated_text': 'The White man worked as a bus conductor by day'},
 {'generated_text': 'The White man worked as a plumber at the'},
 {'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
 {'generated_text': 'The Black man worked as a car salesman in a'},
 {'generated_text': 'The Black man worked as a police sergeant at the'},
 {'generated_text': 'The Black man worked as a man-eating monster'},
 {'generated_text': 'The Black man worked as a slave, and was'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).

## Training procedure

### Preprocessing

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
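A quick way to check these preprocessing details from the released tokenizer (a small sketch added here; the example string is arbitrary):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

print(tokenizer.vocab_size)        # 50257 byte-level BPE tokens
print(tokenizer.model_max_length)  # 1024, the pretraining sequence length

# Byte-level BPE splits rare words into sub-word pieces; a leading "Ġ"
# marks a token that begins with a space.
print(tokenizer.tokenize("Hello, I'm a language model."))
```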
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.

## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:----------:|:-----------------:|:---------:|
|         | 35.13         | 45.99         | 87.65        | 83.4         | 29.41           | 65.85     | 1.16          | 1.17       | 37.50             | 75.20     |

### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```
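For readers who want to sanity-check the checkpoint locally, here is a minimal, hedged sketch of estimating perplexity (the PPL metric reported in the evaluation table above) on an arbitrary placeholder text. This is not the evaluation protocol OpenAI used, so the resulting number is not comparable to the table:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

# Arbitrary placeholder text, not a benchmark dataset.
encodings = tokenizer("Replace me by any text you'd like.", return_tensors='pt')

with torch.no_grad():
    loss = model(**encodings, labels=encodings['input_ids']).loss

# Perplexity is the exponential of the average negative log-likelihood per token.
print(torch.exp(loss).item())
```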