Marissa committed on
Commit 5617a08
1 Parent(s): c8e24ff

Update README.md

Files changed (1)
  1. README.md +175 -137
README.md CHANGED
@@ -1,202 +1,240 @@
  ---
  language:
- language: en
- license: mit
  ---

- # model-card-testing

- <details>
- <summary>Expand policymaker version</summary>
-
- *Pretend we decide upon a set of info from the full model card that we think is relevant to policymakers, for example...*
-
- ## [MODEL NAME] Model Card for Policymakers
-
- ### Model Details
-
- This model was developed by XYZ and can be used for XYZ. It was introduced in this paper and the code is available here.
-
- ### Uses
-
- This model should be used for XYZ tasks and should not be used for XYZ tasks.
-
- ### Risks, Limitations and Bias
-
- Some of the risks associated with using this model include XYZ.
-
- </details>

- <details>
- <summary>Expand researcher version</summary>
-
- *Pretend we decide upon a set of info from the full model card that we think is relevant to researchers and put that here...*
-
- </details>

  <details>
- <summary>Expand developer version</summary>
-
- *Pretend we decide upon a set of info from the full model card that we think is relevant to developers and put that here...*
-
  </details>

- <details>
- <summary>Expand full version</summary>

- ## Table of Contents
  1. [Model Details](#model-details)
- 2. [How To Get Started With the Model](#how-to-get-started-with-the-model)
- 3. [Uses](#uses)
- 4. [Limitations](#limitations)
- 5. [Training](#training)
- 6. [Evaluation Results](#evaluation-results)
  7. [Environmental Impact](#environmental-impact)
- 8. [Citation Information](#citation-information)
 
- ## Model Details

- model-card-testing is a distilled language model that can be used for text generation. Users of this model card should also consider information about the design, training, and limitations of gpt2.

- - **Developed by:** author1, author2
- - **Model type:** testing type
- - **Language(s):** # not working right now
- - **License:** # not working right now
- - **Model Description:** testing description
- - **Related Models:**
- - **Parent Model**: gpt2
- - **Sibling Models**: TO DO (could we do this automatically somehow?)


- ## How to Get Started with the Model

- Use the code below to get started with the model. model-card-testing can be used directly with a pipeline for text generation.
- Since the generation relies on some randomness, we set a seed for reproducibility:
- ```python
- >>> from transformers import pipeline, set_seed
- >>> generator = pipeline('text-generation', model='model-card-testing')
- >>> set_seed(42)
- >>> generator("Hello, I'm a language model,", max_length=20, num_return_sequences=5)
- ```


- Here is how to use this model to get the features of a given text in PyTorch:

- NOTE: This will need customization/fixing.

- ```python
- from transformers import GPT2Tokenizer, GPT2Model
- tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
- model = GPT2Model.from_pretrained('model-card-testing')
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- ```

- and in TensorFlow:

- NOTE: This will need customization/fixing.

- ```python
- from transformers import GPT2Tokenizer, TFGPT2Model
- tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
- model = TFGPT2Model.from_pretrained('model-card-testing')
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='tf')
- output = model(encoded_input)
- ```

- ## Uses
- LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.

- #### Direct Use
- This model can be used for:
- - Text generation
- - Exploring characteristics of language generated by a language model
- - Examples: Cloze tests, counterfactuals, generations with reframings

- #### Downstream Use
- Tasks that leverage language models, including:
- - Information Extraction, Question Answering, Summarization
- - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
- - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
- - Entertainment: Creation of games, chat bots, and amusing generations.

- #### Misuse and Out-of-scope Use
- Using the model in high-stakes settings is out of scope for this model. The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.

- ## Limitations

- **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

- Significant research has explored bias and fairness issues with models for language generation (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). This model also has persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should conduct more rigorous evaluations of the model depending on their use case and context.


- The impact of model compression techniques, such as knowledge distillation, on bias and fairness issues associated with language models is an active area of research. For example:
- - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.


- NOTE: This code will need customization/fixing.

- ```python
- >>> from transformers import pipeline, set_seed
- >>> generator = pipeline('text-generation', model='model-card-testing')
- >>> set_seed(48)
- >>> generator("The White man worked as a", max_length=20, num_return_sequences=3)

- >>> set_seed(48)
- >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
- ```

- ## Training

- #### Training Data

- model-card-testing was trained using . See the data card for additional information.

- #### Training Procedure

- Preprocessing, hardware used, hyperparameters...

- ## Evaluation Results

- This model achieves the following results:

- NOTE: This will need customization.

- | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
- |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
- | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
- | | | | | | | | | | | |

- ## Environmental Impact

- You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)

- - **Hardware Type:**
- - **Hours used:**
- - **Cloud Provider:**
- - **Compute Region:**
- - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*:

- ## Citation Information

- ```bibtex
- @inproceedings{...,
- year={2020}
- }
- ```

  </details>
 
  ---
  language:
+ - en
+ - fr
+ - multilingual
+ license: mit
  ---

+ # Model Card for model-card-testing

+ <!-- Provide a quick summary of what the model is/does. [Optional] -->
+ This is a placeholder summary.


  <details>
+ <summary> Click to expand policymaker version of model card </summary>
+
+ # Table of Contents
+
+ 1. [Model Details](#model-details)
+ 2. [Uses](#uses)
+ 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ 4. [Model Examination](#model-examination)
+ 5. [Environmental Impact](#environmental-impact)
+ 6. [Citation](#citation)
+ 7. [Glossary](#glossary-optional)
+ 8. [More Information](#more-information-optional)
+ 9. [Model Card Authors](#model-card-authors-optional)
+ 10. [Model Card Contact](#model-card-contact)
+
  </details>

+ # Table of Contents

  1. [Model Details](#model-details)
+ 2. [Uses](#uses)
+ 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ 4. [Training Details](#training-details)
+ 5. [Evaluation](#evaluation)
+ 6. [Model Examination](#model-examination)
  7. [Environmental Impact](#environmental-impact)
+ 8. [Technical Specifications](#technical-specifications-optional)
+ 9. [Citation](#citation)
+ 10. [Glossary](#glossary-optional)
+ 11. [More Information](#more-information-optional)
+ 12. [Model Card Authors](#model-card-authors-optional)
+ 13. [Model Card Contact](#model-card-contact)
+ 14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
+
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is/does. -->
+
+
+ - **Developed by:** More information needed
+ - **Shared by [Optional]:** More information needed
+ - **Model type:** Language model
+ - **Language(s) (NLP):** More information needed
+ - **License:** More information needed
+ - **Related Models:** fake_model1, fake_model2
+ - **Parent Model:** More information needed
+ - **Resources for more information:** More information needed
+
+ - [Associated Paper](https://huggingface.co)
+ - [Blog Post](https://huggingface.co)
+
+ # Uses
71
+
72
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
73
+
74
+ ## Direct Use
75
+
76
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
77
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
78
+
79
+ The model can be used for text generation.
80
+
81
+
82
+ ## Downstream Use [Optional]
83
+
84
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
85
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
86
+
87
+ To learn more about this task and potential downstream uses, see the Hugging Face [text generation docs](https://huggingface.co/tasks/text-generation)
88
+
89
 
90
+ ## Out-of-Scope Use
91
 
92
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
93
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
94
 
95
+ The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
 
 
 
 
 
 
 
96
 
97
 
+ # Bias, Risks, and Limitations

+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.


+ ## Recommendations

+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

+ # Training Details

+ ## Training Data

+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ More information on training data needed

+ ## Training Procedure

+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ ### Preprocessing

+ More information needed

+ ### Speeds, Sizes, Times

+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ More information needed
+
+ # Evaluation

+ <!-- This section describes the evaluation protocols and provides the results. -->

+ ## Testing Data, Factors & Metrics

+ ### Testing Data

+ <!-- This should link to a Data Card if possible. -->

+ More information needed

+ ### Factors

+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ More information needed

+ ### Metrics

+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ More information needed

+ ## Results

+ More information needed

+ # Model Examination

+ More information needed

+ # Environmental Impact

+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
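+
+ As a purely illustrative sketch, the values above can be combined into a rough estimate using the rule of thumb noted in the earlier version of this card (power consumption x time x carbon intensity of the local power grid); every number below is a hypothetical placeholder:
+
+ ```python
+ # Illustrative back-of-the-envelope CO2eq estimate; all values are placeholders.
+ gpu_power_kw = 0.3       # hypothetical average draw of one GPU, in kW
+ training_hours = 24      # hypothetical total training time, in hours
+ grid_intensity = 0.4     # hypothetical grid carbon intensity, kg CO2eq per kWh
+
+ co2eq_kg = gpu_power_kw * training_hours * grid_intensity
+ print(f"Estimated emissions: {co2eq_kg:.1f} kg CO2eq")  # 2.9 kg for these placeholders
+ ```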
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed
+
+ # Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ More information needed
+
+ **APA:**
+
+ More information needed
+
+ # Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
+
+ More information needed
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ More information needed
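+
+ Until this section is filled in, a minimal illustrative sketch might look like the following, assuming the checkpoint is a GPT-2-style text-generation model; the repo id below is a placeholder:
+
+ ```python
+ from transformers import pipeline, set_seed
+
+ # Placeholder repo id; swap in the actual model id once it is published.
+ generator = pipeline('text-generation', model='model-card-testing')
+
+ # Generation samples tokens, so fix a seed to make the outputs reproducible.
+ set_seed(42)
+ generator("Hello, I'm a language model,", max_length=20, num_return_sequences=5)
+ ```
+
+ The call returns one dict per requested sequence, each with a `generated_text` field.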
 
 
 
 
  </details>