jiajiahong2134 julien-c HF staff committed on
Commit 1ef249a
0 Parent(s)

Duplicate from distilbert/distilgpt2

Co-authored-by: Julien Chaumond <julien-c@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,11 @@
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
64.tflite ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7df15c10bc1a025f321ea6da7c1a16a443093737ad61a48c3586c5e40c50eb10
+ size 325310836
README.md ADDED
@@ -0,0 +1,182 @@
+ ---
+ language: en
+ tags:
+ - exbert
+
+ license: apache-2.0
+ datasets:
+ - openwebtext
+
+ model-index:
+ - name: distilgpt2
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       type: wikitext
+       name: WikiText-103
+     metrics:
+     - type: perplexity
+       name: Perplexity
+       value: 21.1
+
+ co2_eq_emissions: 149200
+ ---
+
+ # DistilGPT2
+
+ DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
+
+ ## Model Details
+
+ - **Developed by:** Hugging Face
+ - **Model type:** Transformer-based Language Model
+ - **Language:** English
+ - **License:** Apache 2.0
+ - **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
+ - **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
+
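The 82 million parameter figure can be checked directly against the checkpoint. A minimal sketch, assuming `transformers` with PyTorch installed (tied embedding weights are counted once):

```python
from transformers import GPT2LMHeadModel

# Load the distilled checkpoint and count its parameters.
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 82M for distilgpt2
```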
+ ## Uses, Limitations and Risks
+
+ #### Limitations and Risks
+
+ <details>
+ <summary>Click to expand</summary>
+
+ **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress test of the model. Readers considering using the model should carry out more rigorous evaluations depending on their use case and context.
52
+
53
+ The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
54
+
55
+ - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
56
+ - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
57
+ - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
58
+
59
+ ```python
60
+ >>> from transformers import pipeline, set_seed
61
+ >>> generator = pipeline('text-generation', model='distilgpt2')
62
+ >>> set_seed(48)
63
+ >>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
64
+ [{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
65
+ {'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
66
+ {'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
67
+
68
+ >>> set_seed(48)
69
+ >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
70
+ [{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
71
+ {'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
72
+ {'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
73
+ ```
74
+
75
+ </details>
76
+
77
+ #### Potential Uses
78
+
+ Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases, with the added benefit of being smaller and easier to run than the base model.
+
+ The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
+
+ > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
+ > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
+ > - *Entertainment: Creation of games, chat bots, and amusing generations.*
+
+ Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
+
+ #### Out-of-scope Uses
+
+ OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
+
+ > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
+ >
+ > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
+
+ ### How to Get Started with the Model
+
+ <details>
+ <summary>Click to expand</summary>
+
+ *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
+
+ Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
+
+ ```python
+ >>> from transformers import pipeline, set_seed
+ >>> generator = pipeline('text-generation', model='distilgpt2')
+ >>> set_seed(42)
+ >>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
+ Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
+ [{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
+ {'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
+ {'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
+ {'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
+ {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
+ ```
+
+ Here is how to use this model to get the features of a given text in PyTorch:
+
+ ```python
+ from transformers import GPT2Tokenizer, GPT2Model
+ tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
+ model = GPT2Model.from_pretrained('distilgpt2')
+ text = "Replace me by any text you'd like."
+ encoded_input = tokenizer(text, return_tensors='pt')
+ output = model(**encoded_input)
+ ```
+
+ And in TensorFlow:
+
+ ```python
+ from transformers import GPT2Tokenizer, TFGPT2Model
+ tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
+ model = TFGPT2Model.from_pretrained('distilgpt2')
+ text = "Replace me by any text you'd like."
+ encoded_input = tokenizer(text, return_tensors='tf')
+ output = model(encoded_input)
+ ```
+
+ </details>
+
+ ## Training Data
+
+ DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
+
+ ## Training Procedure
+
+ The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
+
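Concretely, the distillation objective pairs a soft-target term, in which the student matches the teacher's temperature-softened next-token distribution, with the usual causal language modeling loss. Below is a minimal sketch of that loss, assuming `transformers` with PyTorch; the temperature and mixing weight are arbitrary placeholders rather than the values used for training, and the actual training code is in the Distil* repository linked above:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2").eval()  # teacher: GPT-2 (124M)
student = AutoModelForCausalLM.from_pretrained("distilgpt2")   # student: DistilGPT2 (82M)

T, alpha = 2.0, 0.5  # placeholder temperature and mixing weight
batch = tokenizer("A tiny example batch for illustration.", return_tensors="pt")

with torch.no_grad():
    teacher_logits = teacher(**batch).logits
student_out = student(**batch, labels=batch["input_ids"])

# Soft-target loss: the student matches the teacher's softened next-token distribution.
kd_loss = F.kl_div(
    F.log_softmax(student_out.logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T ** 2)

# Combined with the standard causal LM loss on the hard labels.
loss = alpha * kd_loss + (1 - alpha) * student_out.loss
```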
+ ## Evaluation Results
+
+ The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
+
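A perplexity number of this kind can be approximated by scoring the tokenized test split in fixed-length windows. A minimal sketch, assuming the `datasets` library is installed; it omits the fine-tuning on the WikiText-103 train split that the reported 21.1 includes, so the value it prints will not match exactly:

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()

# Concatenate the WikiText-103 test split and score it in non-overlapping 1024-token windows.
test = load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

max_len, nlls, n_tokens = model.config.n_positions, [], 0
for start in range(0, ids.size(1) - 1, max_len):
    window = ids[:, start : start + max_len]
    with torch.no_grad():
        out = model(window, labels=window)  # loss is the mean NLL over the window's targets
    nlls.append(out.loss * (window.size(1) - 1))
    n_tokens += window.size(1) - 1

print(f"perplexity ≈ {math.exp(torch.stack(nlls).sum() / n_tokens):.2f}")
```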
+ ## Environmental Impact
+
+ *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
+
+ - **Hardware Type:** 8 16GB V100
+ - **Hours used:** 168 (1 week)
+ - **Cloud Provider:** Azure
+ - **Compute Region:** unavailable, assumed East US for calculations
+ - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
+
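As a rough cross-check of the formula in italics above, assuming about 300 W per V100 (the SXM2 card's rated board power, an assumption here) and backing out the grid intensity implied by the reported total:

```python
# Power consumption x Time x Carbon intensity of the power grid
gpus, watts_per_gpu, hours = 8, 300, 168          # 300 W per V100 is an assumption
energy_kwh = gpus * watts_per_gpu / 1000 * hours
print(energy_kwh)                                 # 403.2 kWh

# Grid intensity implied by the reported 149.2 kg CO2 eq.
print(149.2 / energy_kwh)                         # ~0.37 kg CO2 eq. per kWh
```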
+ ## Citation
+
+ ```bibtex
+ @inproceedings{sanh2019distilbert,
+   title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
+   author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
+   booktitle={NeurIPS EMC^2 Workshop},
+   year={2019}
+ }
+ ```
+
+ ## Glossary
+
+ - <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
+
+ <a href="https://huggingface.co/exbert/?model=distilgpt2">
+ <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
+ </a>
config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "_num_labels": 1,
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_layer": 6,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "vocab_size": 50257
+ }
coreml/text-generation/float32_model.mlpackage/Data/com.apple.CoreML/model.mlmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d19fa891e34f314c61a5d7262a61ae187664b5ef5e8113a9b32962c792676d2f
+ size 1240147
coreml/text-generation/float32_model.mlpackage/Data/com.apple.CoreML/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb5eff8ff72219ddda6f919aa8623afa6cb2a96e732bf2e604c93e1e14b8df00
+ size 484212356
coreml/text-generation/float32_model.mlpackage/Manifest.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "fileFormatVersion": "1.0.0",
+   "itemInfoEntries": {
+     "7E122326-3AF0-4ED0-9356-53237403FF17": {
+       "author": "com.apple.CoreML",
+       "description": "CoreML Model Specification",
+       "name": "model.mlmodel",
+       "path": "com.apple.CoreML/model.mlmodel"
+     },
+     "A1FAC8DB-7C40-4725-969C-A2491FFF24E2": {
+       "author": "com.apple.CoreML",
+       "description": "CoreML Model Weights",
+       "name": "weights",
+       "path": "com.apple.CoreML/weights"
+     }
+   },
+   "rootModelIdentifier": "7E122326-3AF0-4ED0-9356-53237403FF17"
+ }
coreml_model.mlmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c0ee43d6d4be21bc3cef1f44035fefaa96962fd05be39570ea268e4a5ce11bc
+ size 482254328
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3b7fcf75195b7c4d8a73bf26f8b1344f2186bdcd3715f04e0c04ae76d5931be
+ size 327652826
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.27.0.dev0"
+ }
generation_config_for_text_generation.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "do_sample": true,
+   "eos_token_id": 50256,
+   "max_length": 50,
+   "transformers_version": "4.27.0.dev0"
+ }
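This file stores the sampling defaults from `task_specific_params` (sampling enabled, `max_length` of 50) under a dedicated name. A minimal sketch of loading it by file name, assuming a recent `transformers` release in which `GenerationConfig.from_pretrained` accepts a `config_file_name` argument:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Load the sampling defaults stored in the dedicated generation-config file.
gen_config = GenerationConfig.from_pretrained(
    "distilgpt2", config_file_name="generation_config_for_text_generation.json"
)

inputs = tokenizer("Hello, I'm a language model", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```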
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1ff18884359fe8beb795a5f414feb85a6ce3d929ad019c0d958c039d2b94a1b
+ size 352824413
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecbb4e22dd2b9dcc43b2622e1b87ebb9361fb31e496b98ea01a38785c1dbaa01
+ size 352833716
rust_model.ot ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bf6e122f504e97feec8978d500d6cdb572606ad80e6daf388b96e0de7f2ddba
+ size 507225049
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a1186d966d5e57054fddc1eb6377cb9b08aea866d07059f4a3e6eec5535b879
+ size 327744160
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"model_max_length": 1024}
vocab.json ADDED
The diff for this file is too large to render. See raw diff