nazneen committed on
Commit 60ec6e4
1 Parent(s): aef170a

model documentation

Files changed (1): README.md (+192, −0)

README.md ADDED
# Model Card for gpt2-base-gedi-detoxification

# Model Details

## Model Description

- **Developed by:** SkolkovoInstitute
- **Shared by [Optional]:** SkolkovoInstitute
- **Model type:** Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
  - **Parent Model:** GPT-2
- **Resources for more information:**
  - [Associated GeDi Paper](https://arxiv.org/pdf/2009.06367.pdf)
  - [Blog Post](https://blog.salesforceairesearch.com/gedi/)

# Uses


## Direct Use

This model can be used for text generation, or it can be fine-tuned for a downstream task.

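As a minimal usage sketch (assuming the checkpoint loads as a standard causal language model, as in the getting-started snippet at the end of this card), text can be generated through the `transformers` pipeline; the prompt and generation settings here are illustrative only:

```python
from transformers import pipeline

# Minimal sketch: treat this checkpoint as an ordinary GPT-2-style
# text-generation model. The prompt and max_new_tokens are only examples.
generator = pipeline(
    "text-generation",
    model="SkolkovoInstitute/gpt2-base-gedi-detoxification",
)
print(generator("The weather today is", max_new_tokens=20)[0]["generated_text"])
```

If this checkpoint is the GeDi guide model (as its name suggests), it is primarily meant to steer generation from a larger base LM rather than to serve as a standalone generator.
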
## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.
OpenAI notes in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.


# Bias, Risks, and Limitations

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

See the [GPT-2 model card](https://huggingface.co/gpt2?text=My+name+is+Merve+and+my+favorite) for examples of how the model can have biased predictions.

*The [GeDi blog post](https://blog.salesforceairesearch.com/gedi/) notes:*

> We use smaller language models as generative classifiers to guide generation from larger language models. We show that this method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.

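At a high level, GeDi steers decoding by re-weighting the base LM's next-token distribution with a Bayes-rule class posterior computed from a small class-conditional LM (one pass conditioned on the desired control code, one on the undesired code). The snippet below is a simplified, hypothetical sketch of that weighting step; the tensor names, the uniform class prior, and the `omega` value are illustrative assumptions, not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def gedi_weighted_logits(base_logits, pos_logits, neg_logits, omega=30.0):
    """Sketch of GeDi-style weighted decoding for one generation step.

    base_logits: next-token logits from the large base LM, shape [vocab]
    pos_logits:  next-token logits from the class-conditional guide LM,
                 prefixed with the desired control code (e.g. non-toxic)
    neg_logits:  same, prefixed with the undesired control code (toxic)
    omega:       how strongly the class posterior reshapes the distribution
    """
    log_p_pos = F.log_softmax(pos_logits, dim=-1)
    log_p_neg = F.log_softmax(neg_logits, dim=-1)

    # Bayes rule with a uniform class prior: for every candidate token,
    # P(desired class | token, context) = p_pos / (p_pos + p_neg)
    log_posterior = log_p_pos - torch.logaddexp(log_p_pos, log_p_neg)

    # Weighted decoding: boost tokens the guide model judges on-attribute
    return F.log_softmax(base_logits, dim=-1) + omega * log_posterior
```

The re-weighted logits would then be renormalized and passed to whatever sampling or beam-search procedure is used for the base model.
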
## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.


# Training Details

## Training Data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).


## Training Procedure


### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation


## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

More information needed

## Results

The [GeDi blog post](https://blog.salesforceairesearch.com/gedi/) notes the following results:

| Model             | Toxicity | Linguistic Quality |
|-------------------|----------|--------------------|
| GPT-2             | 1.45     | 3.23               |
| GeDi-guided GPT-2 | 1.17     | 3.44               |


# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [Optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

```
@article{KrauseGeDi2020,
  title={{GeDi: Generative Discriminator Guided Sequence Generation}},
  author={Krause, Ben and Gotmare, Akhilesh Deepak and McCann, Bryan and Keskar, Nitish Shirish and Joty, Shafiq and Socher, Richard and Rajani, Nazneen Fatema},
  journal={arXiv preprint arXiv:2009.06367},
  year={2020}
}
```


# Glossary [Optional]

More information needed

# More Information [Optional]

More information needed

# Model Card Authors [Optional]

SkolkovoInstitute in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the causal language model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("SkolkovoInstitute/gpt2-base-gedi-detoxification")
model = AutoModelForCausalLM.from_pretrained("SkolkovoInstitute/gpt2-base-gedi-detoxification")
```

</details>
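
Once the tokenizer and model are loaded as above, a short sample can be drawn with `generate`; the prompt and sampling settings below are illustrative assumptions only:

```python
# Hypothetical continuation of the snippet above: draw one short sample
# from the loaded model and decode it back to text.
inputs = tokenizer("The book was about", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```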