juletxara committed
Commit 3159fb5
1 Parent(s): 9013c52

Create README.md

Files changed (1)
  1. README.md +290 -0
README.md ADDED
---
license: cc
datasets:
- HiTZ/euscrawl
language:
- eu
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for GPT2 Eus Euscrawl

<!-- Provide a quick summary of what the model is/does. -->

GPT-2 small model (124M parameters) pretrained on the Basque language with a causal language modeling (CLM) objective. The English version of GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/). The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model.

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is. -->

GPT-2 is a transformer model pretrained on a very large corpus of Basque data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.

Inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.

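As a minimal sketch of this objective in `transformers` (not this model's training code; `'gpt2'` below is a stand-in for this model's repository id):

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')  # stand-in for this model's repo id
model = GPT2LMHeadModel.from_pretrained('gpt2')

enc = tokenizer("Kaixo, eredu bat naiz.", return_tensors='pt')
# Passing labels=input_ids makes the model shift the labels one position internally,
# so the prediction for token i is scored against token i+1; the causal attention
# mask hides all future tokens from position i.
out = model(**enc, labels=enc['input_ids'])
print(out.loss)  # next-token cross-entropy loss
```
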
This way, the model learns an inner representation of the Basque language that can then be used to extract features
useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a
prompt.

This is the **smallest** version of GPT-2, with 124M parameters.

- **Developed by:** [github.com/juletx](https://github.com/juletx)
- **Model type:** GPT2
- **Language(s) (NLP):** Basque (eu)
- **License:** cc

## Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [github.com/juletx/phd](https://github.com/juletx/phd)
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

You can use this model directly with a pipeline for text generation.

## Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

You can also fine-tune it on a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.

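As a rough, hedged sketch of what fine-tuning on your own Basque text could look like (the file name, hyperparameters, and the `'gpt2'` checkpoint id below are placeholders, not this model's actual setup):

```python
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

model_id = "gpt2"  # placeholder: use this model's repository id
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token by default
model = GPT2LMHeadModel.from_pretrained(model_id)

# Any dataset with a "text" column works; the local file below is a placeholder.
dataset = load_dataset("text", data_files={"train": "my_basque_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-eus-finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"],
    # mlm=False yields causal-LM batches: labels are the (padded) input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
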
## Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The training data used for this model consists of text scraped from websites on the internet, which is far from
neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

Here is an example, taken from the English GPT-2 model card with the original `gpt2` checkpoint, of how the model can produce biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
 {'generated_text': 'The White man worked as a maniser of the'},
 {'generated_text': 'The White man worked as a bus conductor by day'},
 {'generated_text': 'The White man worked as a plumber at the'},
 {'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
 {'generated_text': 'The Black man worked as a car salesman in a'},
 {'generated_text': 'The Black man worked as a police sergeant at the'},
 {'generated_text': 'The Black man worked as a man-eating monster'},
 {'generated_text': 'The Black man worked as a slave, and was'}]
```

This bias will also affect all fine-tuned versions of this model.

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

You can use this model directly with a pipeline for text generation. Since generation relies on some randomness, a seed
is set for reproducibility. The example below is taken from the English GPT-2 model card and uses the original `gpt2`
checkpoint; point `model=` at this model's repository to generate Basque text instead:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
 {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
 {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
 {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
 {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```

Here is how to get the features of a given text in PyTorch (again, substitute this model's repository id for `'gpt2'`):

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for Basque comprising 12.5 million documents
and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to
extract text from 33 Basque websites with high-quality content, resulting in cleaner text than
general-purpose approaches. See the [dataset card](https://huggingface.co/datasets/HiTZ/euscrawl) for details.

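As a minimal sketch, the corpus can be loaded with the `datasets` library (the split name and streaming usage below are assumptions; check the dataset card):

```python
from datasets import load_dataset

# Stream EusCrawl from the Hugging Face Hub instead of downloading it all at once.
euscrawl = load_dataset("HiTZ/euscrawl", split="train", streaming=True)
print(next(iter(euscrawl)))  # inspect one document
```
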
## Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

### Preprocessing [optional]

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) with a
vocabulary size of 50,304. The inputs are sequences of 1024 consecutive tokens.

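These values can be checked from code (a sketch; `'gpt2'` below is a stand-in for this model's repository id, so the English checkpoint will print slightly different numbers):

```python
from transformers import GPT2Config, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')  # stand-in for this model's repo id
config = GPT2Config.from_pretrained('gpt2')
print(len(tokenizer))      # vocabulary size (50,304 for this model, per the card)
print(config.n_positions)  # maximum input length in tokens (1024)
```
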
### Training Hyperparameters

- **Training regime:** bf16 mixed precision (see the sketch below) <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

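For illustration only, bf16 mixed precision is the kind of setting switched on via `TrainingArguments` in the Hugging Face `Trainer`; every other value below is a placeholder, not a documented hyperparameter of this model:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-eus-euscrawl",
    bf16=True,                      # bf16 mixed-precision training (needs Ampere-or-newer GPUs or TPUs)
    per_device_train_batch_size=8,  # placeholder
    learning_rate=6e-4,             # placeholder
)
```
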
### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

## Results

[More Information Needed]

### Summary

# Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

# Technical Specifications [optional]

## Model Architecture and Objective

[More Information Needed]

## Compute Infrastructure

[More Information Needed]

### Hardware

[More Information Needed]

### Software

[More Information Needed]

# Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

**APA:**

[More Information Needed]

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

# More Information [optional]

[More Information Needed]

# Model Card Authors [optional]

[More Information Needed]

# Model Card Contact

[More Information Needed]