Update README.md
README.md

<details>
<summary>Click to expand</summary>

- [Overview](#overview)
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citation information](#citation-information)
- [Disclaimer](#disclaimer)

</details>

## Overview

- **Task:** text-generation
- **Data:** BNE

## Model description

**GPT2-large-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Intended uses and limitations

You can use the raw model for text generation or fine-tune it to a downstream task.

## How to use

Here is how to use this model:

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

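A minimal sketch of such a pipeline call; the Hub model id (`PlanTL-GOB-ES/gpt2-large-bne`) and the prompt are assumptions used only for illustration:

```python
from transformers import pipeline, set_seed

# Fix the seed so the sampled generations are reproducible.
set_seed(42)

# Assumed Hub id; adjust it if the model is published under a different name.
generator = pipeline("text-generation", model="PlanTL-GOB-ES/gpt2-large-bne")

# Illustrative Spanish prompt; sampling is enabled so several continuations are returned.
for sample in generator("El libro que leí anoche trata sobre", max_length=50,
                        do_sample=True, num_return_sequences=3):
    print(sample["generated_text"])
```
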
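You can also use the model to extract contextual features (hidden states) for a given text. Again a hedged sketch with an assumed Hub id; for a 14-token input the printed shape would be something like `torch.Size([1, 14, 1280])`, matching the model's 1280-dimensional hidden size:

```python
import torch
from transformers import AutoTokenizer, GPT2Model

model_id = "PlanTL-GOB-ES/gpt2-large-bne"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPT2Model.from_pretrained(model_id)

text = "Nos ha costado mucho llegar hasta aquí."  # illustrative input
encoded_input = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded_input)

# Shape is (batch_size, sequence_length, hidden_size), e.g. torch.Size([1, 14, 1280]).
print(output.last_hidden_state.shape)
```
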
## Limitations and bias

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Here's an example of how the model can have biased predictions:

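A sketch of the kind of call behind these generations; the prompt is inferred from the outputs shown below, while the seed and sampling settings are assumptions:

```python
from transformers import pipeline, set_seed

set_seed(42)  # assumed seed
generator = pipeline("text-generation", model="PlanTL-GOB-ES/gpt2-large-bne")  # assumed Hub id

# Prompt inferred from the generations shown below.
generator("La mujer se dedica a", max_length=30, do_sample=True, num_return_sequences=3)
```
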
```
{'generated_text': 'La mujer se dedica a la venta al por mayor de perfumes, cosmética, complementos, y otros bienes de consumo. '},
{'generated_text': 'La mujer se dedica a los servicios sexuales y se aprovecha de los servicios religiosos. '},
{'generated_text': 'La mujer se dedica a la prostitución y tiene dos hijas del matrimonio y la propia familia de la víctima. '}]
```

## Training

### Training data

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of malformed sentences, and deduplication of repetitive content. Document boundaries are kept during the process. This resulted in 2TB of clean Spanish corpus, and a further global deduplication pass reduced it to 570GB of text.

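The card does not include code for this cleaning pipeline; the snippet below is only an illustrative sketch of the kinds of steps listed above (sentence splitting, language detection, filtering of short or malformed sentences, deduplication), using the `langdetect` package as a stand-in language identifier:

```python
import hashlib
import re

from langdetect import detect  # stand-in language identifier, not the tool actually used


def clean_corpus(documents):
    """Illustrative cleaning pass: split, language-filter, length-filter, deduplicate."""
    seen = set()
    for doc in documents:
        kept = []
        # Naive sentence splitting on end-of-sentence punctuation.
        for sentence in re.split(r"(?<=[.!?])\s+", doc.strip()):
            if len(sentence) < 10:  # drop fragments and malformed sentences
                continue
            try:
                if detect(sentence) != "es":  # keep Spanish only
                    continue
            except Exception:
                continue
            digest = hashlib.md5(sentence.encode("utf-8")).hexdigest()
            if digest in seen:  # deduplicate repeated content
                continue
            seen.add(digest)
            kept.append(sentence)
        if kept:  # document boundaries are preserved
            yield " ".join(kept)


print(list(clean_corpus(["Esto es una frase de prueba. Esto es una frase de prueba. This one is English."])))
```
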
Some of the statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size  |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |

### Training procedure

The pretraining objective used for this architecture is next-token prediction.
The configuration of the **GPT2-large-bne** model is as follows:

- gpt2-large: 36-layer, 1280-hidden, 20-heads, 774M parameters.

The training corpus has been tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model, with a vocabulary size of 50,262 tokens.

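A quick way to inspect that tokenizer from Python (the Hub model id is again an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")  # assumed Hub id

print(tokenizer.vocab_size)  # expected to match the 50,262-token vocabulary described above
print(tokenizer.tokenize("María escribió un libro en Barcelona."))  # byte-level BPE pieces
```
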
The GPT2-large-bne pre-training consists of autoregressive language model training that follows the approach of GPT-2.

The training lasted a total of 10 days on 32 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.

## Additional information

### Author

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information

For further information, send an email to <plantl-gob-es@bsc.es>.

### Copyright

Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Licensing information

This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Funding

This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Citation information

If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):

```
@article{,
  url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
  year = {2022},
}
```

### Disclaimer

<details>