Update README.md
#3
by Huong - opened

README.md CHANGED

@@ -1,163 +1,62 @@
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
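
To make the shift-by-one objective concrete, here is a minimal sketch in plain Python; the token list is an invented example and real training works on BPE token ids, but the input/target relationship is the one described above.

```python
# Toy illustration of the causal language-modeling objective described above.
# The token list is a made-up example; real GPT-2 training uses BPE token ids.
tokens = ["I", "love", "machine", "learning", "."]

inputs = tokens[:-1]   # what the model reads
labels = tokens[1:]    # the same sequence shifted one position to the right

for i, target in enumerate(labels):
    # the causal mask means position i only sees inputs[0..i], never future tokens
    print(f"context={inputs[:i + 1]} -> predict {target!r}")
```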

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt.

## Intended uses & limitations

You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
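
As a small, hedged extension of the example above: the text-generation pipeline forwards generation keyword arguments such as `do_sample`, `top_k` and `temperature` to the underlying `generate` call, so you can trade off diversity against coherence (the values below are illustrative, not recommendations):

```python
>>> generator(
...     "Hello, I'm a language model,",
...     max_length=30,
...     num_return_sequences=3,
...     do_sample=True,    # sample instead of always taking the most likely token
...     top_k=50,          # restrict sampling to the 50 most likely next tokens
...     temperature=0.8,   # <1.0 sharpens the distribution, >1.0 flattens it
... )
```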

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
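
A brief, hedged follow-up to the snippet above: with the standard `transformers` output classes, the extracted features are the final hidden states, one vector per input token.

```python
# Continuing the snippet above: GPT2Model returns the final hidden states.
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size); hidden_size is 768 for the base gpt2 checkpoint
```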

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt).

## Training procedure

### Preprocessing

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
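
As a small, hedged illustration of those figures (assuming the `gpt2` tokenizer that ships with `transformers`), you can check the vocabulary size and watch the byte-level BPE split a word into sub-word pieces:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.vocab_size)                # 50257
print(tokenizer.tokenize("unbelievably"))  # split into byte-level BPE sub-word pieces
```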

The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training.

## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset  | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB   | enwiki8 | text8 | WikiText103 | 1BW   |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:-----:|:-------:|:-----:|:-----------:|:-----:|
| (metric) | (PPL)   | (ACC)   | (ACC)  | (ACC)  | (PPL)     | (PPL) | (BPB)   | (BPC) | (PPL)       | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85 | 1.16    | 1.17  | 37.50       | 75.20 |

### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='xlm-roberta-base')
unmasker("Hello I'm a <mask> model.")

[{'score': 0.10563907772302628,
'sequence': "Hello I'm a fashion model.",
'token': 54543,
'token_str': 'fashion'},
{'score': 0.08015287667512894,
'sequence': "Hello I'm a new model.",
'token': 3525,
'token_str': 'new'},
{'score': 0.033413201570510864,
'sequence': "Hello I'm a model model.",
'token': 3299,
'token_str': 'model'},
{'score': 0.030217764899134636,
'sequence': "Hello I'm a French model.",
'token': 92265,
'token_str': 'French'},
{'score': 0.026436051353812218,
'sequence': "Hello I'm a sexy model.",
'token': 17473,
'token_str': 'sexy'}]
```
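
A hedged aside on the example above: the `<mask>` placeholder is model-specific, so instead of hard-coding it you can read it from the tokenizer (`mask_token` is the standard `transformers` attribute):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
print(tokenizer.mask_token)                             # '<mask>' for XLM-RoBERTa
unmasker(f"Hello I'm a {tokenizer.mask_token} model.")  # same call as above, without the hard-coded placeholder
```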

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
model = AutoModelForMaskedLM.from_pretrained('xlm-roberta-base')

# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')

# forward pass
output = model(**encoded_input)
```
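
A brief, hedged follow-up to the snippet above: with the standard `transformers` output classes, a masked-language-modeling head returns `logits`, one score per vocabulary entry for every input position; to get the contextual features themselves you can load the encoder with `AutoModel` instead of `AutoModelForMaskedLM`.

```python
# Continuing the snippet above: inspect the masked-LM scores.
print(output.logits.shape)  # (batch_size, sequence_length, vocab_size)
```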

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
  author     = {Alexis Conneau and
                Kartikay Khandelwal and
                Naman Goyal and
                Vishrav Chaudhary and
                Guillaume Wenzek and
                Francisco Guzm{\'{a}}n and
                Edouard Grave and
                Myle Ott and
                Luke Zettlemoyer and
                Veselin Stoyanov},
  title      = {Unsupervised Cross-lingual Representation Learning at Scale},
  journal    = {CoRR},
  volume     = {abs/1911.02116},
  year       = {2019},
  url        = {http://arxiv.org/abs/1911.02116},
  eprinttype = {arXiv},
  eprint     = {1911.02116},
  timestamp  = {Mon, 11 Nov 2019 18:38:09 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```