model documentation

#2
by nazneen - opened
Files changed (1)
  1. README.md +198 -0
README.md ADDED
@@ -0,0 +1,198 @@
---
tags:
- text-generation

---
# Model Card for gpt2-base-gedi-detoxification

# Model Details

## Model Description

- **Developed by:** SkolkovoInstitute
- **Shared by [Optional]:** SkolkovoInstitute
- **Model type:** Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
  - **Parent Model:** GPT-2
- **Resources for more information:**
  - [Associated GeDi Paper](https://arxiv.org/pdf/2009.06367.pdf)
  - [Blog Post](https://blog.salesforceairesearch.com/gedi/)

# Uses


## Direct Use

This model can be used for the task of text generation, or it can be fine-tuned for a downstream task.

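For plain text generation, a minimal sketch with the `transformers` text-generation pipeline might look like the following (the prompt and sampling parameters are illustrative; note that in the GeDi setup this checkpoint is typically used to guide another language model rather than to generate on its own):

```python
# Minimal sketch: load this checkpoint in a standard text-generation pipeline.
# The prompt and sampling parameters are illustrative, not recommendations.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="SkolkovoInstitute/gpt2-base-gedi-detoxification",
)

result = generator("The movie was", max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```
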
## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.
OpenAI notes in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.


# Bias, Risks, and Limitations

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

See the [GPT-2 model card](https://huggingface.co/gpt2?text=My+name+is+Merve+and+my+favorite) for examples of how the model can have biased predictions.

*The [GeDi Blog post](https://blog.salesforceairesearch.com/gedi/) notes:*

> We use smaller language models as generative classifiers to guide generation from larger language models. We show that this method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.

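As a rough, single-step illustration of that idea (not the reference GeDi implementation), the sketch below reweights a base GPT-2's next-token distribution with a class posterior computed from this class-conditional model via Bayes' rule. The base model choice (`gpt2`) and the `"clean"`/`"dirty"` control-code strings are assumptions for illustration and may not match how this checkpoint was actually trained; the full GeDi algorithm also accumulates class evidence over the whole prefix and repeats this at every decoding step.

```python
# Simplified single-step sketch of GeDi-style guided decoding (illustrative only):
# a small class-conditional LM reweights the base LM's next-token distribution
# toward the desired class via Bayes' rule. The "clean"/"dirty" control codes are
# placeholders and may differ from the codes this checkpoint was trained with.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_lm = AutoModelForCausalLM.from_pretrained("gpt2")
gedi_lm = AutoModelForCausalLM.from_pretrained("SkolkovoInstitute/gpt2-base-gedi-detoxification")

prompt = "You are such a"
omega = 10.0  # guidance strength: how strongly the class posterior reweights the base LM

with torch.no_grad():
    # Base LM next-token log-probabilities for the prompt.
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    base_logp = base_lm(ids).logits[0, -1].log_softmax(-1)

    # Class-conditional next-token log-probabilities under each (placeholder) control code.
    pos_ids = tokenizer("clean " + prompt, return_tensors="pt").input_ids
    neg_ids = tokenizer("dirty " + prompt, return_tensors="pt").input_ids
    pos_logp = gedi_lm(pos_ids).logits[0, -1].log_softmax(-1)
    neg_logp = gedi_lm(neg_ids).logits[0, -1].log_softmax(-1)

    # Per-token posterior of the desired ("clean") class, assuming equal class priors.
    class_posterior = torch.stack([pos_logp, neg_logp]).log_softmax(dim=0)[0]

    # Reweight the base distribution and greedily pick the next token.
    guided = base_logp + omega * class_posterior
    next_token = tokenizer.decode(guided.argmax().item())

print(prompt + next_token)
```
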
## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.


# Training Details

## Training Data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in
at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).


## Training Procedure


### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation


## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

More information needed

## Results

The [GeDi Blog post](https://blog.salesforceairesearch.com/gedi/) notes the following results:

| Model            | Toxicity | Linguistic Quality |
|------------------|----------|--------------------|
| GPT-2            | 1.45     | 3.23               |
| GeDi-guided GPT2 | 1.17     | 3.44               |


# Model Examination

More information needed

# Environmental Impact


Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

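As a rough sketch of how such an estimate is computed once those fields are known (power draw × time × PUE × regional carbon intensity, in the spirit of Lacoste et al.), every number below is a placeholder, since the actual hardware, hours, and region for this model are not reported:

```python
# Placeholder back-of-the-envelope CO2 estimate in the spirit of Lacoste et al. (2019).
# None of these numbers describe this model's actual training run.
gpu_power_kw = 0.25        # e.g. a single ~250 W GPU
hours = 24.0               # training time (unknown for this model)
pue = 1.58                 # data-center power usage effectiveness (typical default)
carbon_intensity = 0.432   # kg CO2eq per kWh, depends on the compute region

energy_kwh = gpu_power_kw * hours * pue
co2_kg = energy_kwh * carbon_intensity
print(f"~{co2_kg:.1f} kg CO2eq for {energy_kwh:.1f} kWh")
```
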
# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation


**BibTeX:**

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

```
@article{KrauseGeDi2020,
  title={{GeDi: Generative Discriminator Guided Sequence Generation}},
  author={Krause, Ben and Gotmare, Akhilesh Deepak and McCann, Bryan and Keskar, Nitish Shirish and Joty, Shafiq and Socher, Richard and Rajani, Nazneen Fatema},
  journal={arXiv preprint arXiv:2009.06367},
  year={2020}
}
```


# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]


SkolkovoInstitute in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SkolkovoInstitute/gpt2-base-gedi-detoxification")

model = AutoModelForCausalLM.from_pretrained("SkolkovoInstitute/gpt2-base-gedi-detoxification")

```
</details>

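Continuing from the snippet above, a short generation call can serve as a quick smoke test (a minimal sketch; the prompt and sampling parameters are illustrative, and in the GeDi setup this checkpoint typically guides another language model rather than generating on its own):

```python
# Quick smoke test for the loaded tokenizer and model (illustrative prompt and parameters).
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 checkpoints define no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```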