joanllop committed on
Commit ddf95b7 · 1 Parent(s): 6f23278

Update README.md

Files changed (1)
  1. README.md +59 -29
README.md CHANGED
@@ -27,29 +27,35 @@ widget:
  <details>
  <summary>Click to expand</summary>

  - [Model Description](#model-description)
  - [Intended Uses and Limitations](#intended-uses-and-limitations)
- - [How to Use](#how-to-use)
- - [Limitations and bias](#limitations-and-bias)
- - [Training corpora and preprocessing](#training-corpora-and-preprocessing)
- - [Tokenization and pre-training](#tokenization-and-pre-training)
- - [Citation Information](#citing)
- - [Licensing Information](#licensing-information)
- - [Copyright](#copyright)
- - [Funding](#funding)
- - [Disclaimer](#disclaimer)

  </details>

- ## Model Description

- **GPT2-base-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

- ## Intended Uses and Limitations

- You can use the raw model for text generation or fine-tune it to a downstream task.

- ### How to Use

  Here is how to use this model:
  You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
@@ -82,7 +88,9 @@ Here is how to use this model to get the features of a given text in PyTorch:
  torch.Size([1, 14, 768])
  ```

- ### Limitations and bias

  The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
  unfiltered content from the internet, which is far from neutral. Here's an example of how the model can have biased predictions:
@@ -110,7 +118,10 @@ unfiltered content from the internet, which is far from neutral. Here's an examp

  ```

- ## Training corpora and preprocessing

  The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

  To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content. Document boundaries are kept during the process. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was applied, yielding 570GB of text.
@@ -121,10 +132,21 @@ Some of the statistics of the corpus:
  |---------|---------------------|------------------|-----------|
  | BNE | 201,080,084 | 135,733,450,668 | 570GB |

- ## Tokenization and pre-training
- The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model with a vocabulary size of 50,262 tokens. The GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM.

- ## Citing
  If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
  ```
  @article{,
@@ -145,29 +167,37 @@ Intelligence (SEDIA) within the framework of the Plan-TL.},

  ```

- ## Licensing information

- [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

- ## Copyright

- Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

- ## Funding

- This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

- ## Disclaimer

  The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

- When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.

- In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.


  Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

  Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

- En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.

  <details>
  <summary>Click to expand</summary>

+ - [Overview](#overview)
  - [Model Description](#model-description)
+ - [How to Use](#how-to-use)
  - [Intended Uses and Limitations](#intended-uses-and-limitations)
+ - [Training](#training)
+   - [Training Data](#training-data)
+   - [Training Procedure](#training-procedure)
+ - [Additional Information](#additional-information)
+   - [Authors](#authors)
+   - [Citation Information](#citation-information)
+   - [Contact Information](#contact-information)
+   - [Funding](#funding)
+   - [Licensing Information](#licensing-information)
+   - [Copyright](#copyright)
+   - [Disclaimer](#disclaimer)

  </details>

+ ## Overview

+ - **Architecture:** gpt2-base-bne
+ - **Language:** Spanish
+ - **Task:** text-generation
+ - **Data:** BNE

+ ## Model Description

+ **GPT2-base-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

+ ## How to Use

  Here is how to use this model:
  You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
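The generation snippet itself is unchanged in this diff, so it does not appear here. For reference, a minimal sketch of that usage, assuming the checkpoint is published under the `PlanTL-GOB-ES/gpt2-base-bne` repo id (the prompt and generation parameters are illustrative):

```python
# Minimal sketch; repo id, prompt and generation settings are assumptions, not part of this diff.
from transformers import pipeline, set_seed

set_seed(42)  # generation samples tokens, so fix the seed for reproducible outputs
generator = pipeline("text-generation", model="PlanTL-GOB-ES/gpt2-base-bne")
outputs = generator(
    "El libro que he leído este fin de semana",  # any Spanish prompt
    max_length=50,
    do_sample=True,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```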
 
  torch.Size([1, 14, 768])
  ```
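The PyTorch snippet that produces the tensor shape shown above lies outside this hunk. A minimal sketch of what such feature extraction typically looks like, again assuming the `PlanTL-GOB-ES/gpt2-base-bne` repo id (the example sentence is illustrative, so the middle dimension will vary with its token count):

```python
# Sketch only: extracts the last hidden states of the GPT-2 encoder for a Spanish sentence.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
model = AutoModel.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")

text = "El libro que he leído este fin de semana es una obra maestra."  # illustrative input
encoded_input = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)
print(output.last_hidden_state.shape)  # torch.Size([1, <number of tokens>, 768])
```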

+ ## Intended Uses and Limitations
+
+ You can use the raw model for text generation or fine-tune it on a downstream task.
 
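Fine-tuning is not covered by the changed lines; the following is a hedged sketch of causal-LM fine-tuning with the `Trainer` API, where the repo id, the data file `my_spanish_corpus.txt`, and all hyperparameters are placeholders:

```python
# Hypothetical fine-tuning sketch; dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "PlanTL-GOB-ES/gpt2-base-bne"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Any plain-text Spanish corpus works here; "my_spanish_corpus.txt" is a placeholder file.
dataset = load_dataset("text", data_files={"train": "my_spanish_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-base-bne-finetuned",
                           num_train_epochs=1, per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM, no masking
)
trainer.train()
```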
  The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
  unfiltered content from the internet, which is far from neutral. Here's an example of how the model can have biased predictions:

  ```
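The biased-prediction example itself sits outside the changed lines, so only its closing fence appears above. As an illustration of the kind of probe such an example relies on (the prompts and settings below are assumptions, not the ones in the card), one can compare sampled continuations for gendered prompts:

```python
# Illustrative bias probe; prompts and sampling settings are assumptions.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="PlanTL-GOB-ES/gpt2-base-bne")
for prompt in ("El hombre trabaja como", "La mujer trabaja como"):
    continuations = generator(prompt, max_length=20, do_sample=True, num_return_sequences=5)
    print(prompt)
    for c in continuations:
        print("  ", c["generated_text"][len(prompt):])
```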

+ ## Training
+
+ ### Training Data
+
  The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

  To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content. Document boundaries are kept during the process. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was applied, yielding 570GB of text.
 
  |---------|---------------------|------------------|-----------|
  | BNE | 201,080,084 | 135,733,450,668 | 570GB |

+ ### Training Procedure
+
+ The pretraining objective used for this architecture is next-token prediction.
+ The configuration of the **GPT2-base-bne** model is as follows:
+ - gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters.
+
+ The training corpus has been tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model, with a vocabulary size of 50,262 tokens (see the sketch below).
+ The GPT2-base-bne pre-training consists of autoregressive language model training following the approach of GPT-2.
+ The training lasted a total of 3 days on 16 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.
+
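As a quick sanity check, the reported configuration and vocabulary size can be read back from the published checkpoint; a sketch assuming the `PlanTL-GOB-ES/gpt2-base-bne` repo id:

```python
# Reads the architecture hyperparameters and tokenizer size from the checkpoint.
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
print(config.n_layer, config.n_embd, config.n_head)  # expected: 12, 768, 12
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
print(len(tokenizer))  # expected to be close to the reported 50,262 tokens
print(tokenizer.tokenize("El libro que he leído este fin de semana"))  # byte-level BPE pieces
```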
+ ## Additional Information

+ ### Authors
+
+ The Text Mining Unit from the Barcelona Supercomputing Center.
+
+ ### Citation Information
  If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
  ```
  @article{,
 
  ```

+ ### Contact Information

+ For further information, send an email to <plantl-gob-es@bsc.es>.

+ ### Funding

+ This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

+ ### Licensing Information

+ This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+ ### Copyright

+ Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
+
+ ### Disclaimer
+
+ <details>
+ <summary>Click to expand</summary>

  The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

+ When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

+ In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.


  Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

  Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

+ En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
+ </details>