Marissa Gerchick committed
Commit 2d7a5d8
1 Parent(s): 3deddb6

Small section rearranging

Files changed (1)
  1. README.md +24 -24
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
    name: Perplexity
    value: 21.1

- co2_eq_emissions: 149.2 kg
+ co2_eq_emissions: 149200 g
  ---

  # DistilGPT2
@@ -74,7 +74,27 @@ The impact of model compression techniques – such as knowledge distillation

  </details>

- #### How to Get Started with the Model
+ #### Potential Uses
+
+ Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
+
+ The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
+
+ > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
+ > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
+ > - *Entertainment: Creation of games, chat bots, and amusing generations.*
+
+ Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
+
+ #### Out-of-scope Uses
+
+ OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
+
+ > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
+ >
+ > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
+
+ ### How to Get Started with the Model

  <details>
  <summary>Click to expand</summary>
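The "How to Get Started with the Model" section relocated by this hunk keeps its code collapsed in the diff; only `output = model(encoded_input)` is visible in the next hunk's header. As a rough sketch of what that section covers, loading the `distilgpt2` checkpoint with the standard `transformers` API looks roughly like this (the exact block in the card may differ):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the distilled checkpoint and its tokenizer (the same byte-level BPE
# tokenizer used by GPT-2, as noted in the tokenization hunk further down).
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

# Encode a prompt and run a forward pass, mirroring the collapsed example.
encoded_input = tokenizer("Hello, I'm a language model,", return_tensors="pt")
output = model(**encoded_input)

# Or sample a short continuation.
generated = model.generate(
    **encoded_input,
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

A `pipeline("text-generation", model="distilgpt2")` call is an equally common entry point.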
@@ -120,26 +140,6 @@ output = model(encoded_input)

  </details>

- #### Potential Uses
-
- Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
-
- The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
-
- > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
- > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
- > - *Entertainment: Creation of games, chat bots, and amusing generations.*
-
- Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
-
- #### Out-of-scope Uses
-
- OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
-
- > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
- >
- > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
-
  ## Training Data

  DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
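The OpenWebTextCorpus referenced in this hunk's context is mirrored on the Hugging Face Hub under the dataset card linked above; a hedged sketch of pulling a sample with the `datasets` library, assuming the `openwebtext` dataset id, would be:

```python
from datasets import load_dataset

# Illustrative only: "openwebtext" is the Hub id of the dataset card linked above.
# The full corpus is large, so expect a substantial download; depending on your
# `datasets` version, script-based datasets may also need trust_remote_code=True.
dataset = load_dataset("openwebtext", split="train")

print(dataset[0]["text"][:200])  # peek at the first document
```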
@@ -152,9 +152,9 @@ The texts were tokenized using the same tokenizer as GPT-2, a byte-level version

  The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).

- ## Carbon Emissions
+ ## Environmental Impact

- *Emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
+ *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*

  - **Hardware Type:** 8 16GB V100
  - **Hours used:** 168 (1 week)
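The perplexity comparison in this hunk's context (16.3 for GPT-2 vs. 21.1 for DistilGPT2 on WikiText-103) is the exponential of the average per-token cross-entropy loss. A minimal sketch of that quantity on a single snippet, not the distillation repo's actual evaluation script:

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Perplexity = exp(mean per-token negative log-likelihood). The reported
# WikiText-103 numbers come from the distillation repo's own evaluation setup;
# this only illustrates the quantity being measured.
tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2").eval()

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the average causal-LM loss in nats/token.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"Perplexity on this snippet: {math.exp(loss.item()):.1f}")
```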
 
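The renamed Environmental Impact section (and the metadata change in the first hunk, which restates the same 149.2 kg figure as 149,200 g) follows the Machine Learning Impact calculator's recipe: energy from hardware power draw and runtime, multiplied by the compute region's carbon intensity. A back-of-the-envelope sketch with illustrative numbers, not the inputs behind the reported estimate:

```python
# Sketch of the Lacoste et al. (2019) calculator's arithmetic. The power draw and
# grid carbon intensity below are illustrative assumptions, not the values used to
# produce the card's 149.2 kg CO2eq (149,200 g) figure.
NUM_GPUS = 8                        # 8 16GB V100, from the hardware line above
HOURS = 168                         # 1 week
GPU_POWER_KW = 0.300                # assumed ~300 W per V100 (TDP)
GRID_KG_CO2_PER_KWH = 0.4           # placeholder carbon intensity; varies by region

energy_kwh = NUM_GPUS * HOURS * GPU_POWER_KW
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH
print(f"{energy_kwh:.0f} kWh -> roughly {emissions_kg:.0f} kg CO2eq")
```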