joanllop committed
Commit faa0669
1 parent: 6f4bb7c

Update README.md

Files changed (1): README.md (+195 -17)

README.md CHANGED

---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "roberta-base-bne"
datasets:
- "bne"
metrics:
- "ppl"
widget:
- text: "Este año las campanadas de La Sexta las presentará <mask>."
- text: "David Broncano es un presentador de La <mask>."
---
 
# RoBERTa base trained with data from National Library of Spain (BNE)

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Overview](#overview)
- [Model Description](#model-description)
- [How to Use](#how-to-use)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
  - [Authors](#authors)
  - [Citation Information](#citation-information)
  - [Contact Information](#contact-information)
  - [Funding](#funding)
  - [Licensing Information](#licensing-information)
  - [Copyright](#copyright)
  - [Disclaimer](#disclaimer)

</details>

## Overview
- **Architecture:** roberta-base
- **Language:** Spanish
- **Task:** fill-mask
- **Data:** BNE

## Model Description
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## How to Use
You can use this model directly with a pipeline for fill-mask. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
[{'score': 0.08422081917524338,
'token': 3832,
'token_str': ' desarrollar',
'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'},
{'score': 0.06348305940628052,
'token': 3078,
'token_str': ' crear',
'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'},
{'score': 0.06148449331521988,
'token': 2171,
'token_str': ' realizar',
'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'},
{'score': 0.056218471378088,
'token': 10880,
'token_str': ' elaborar',
'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'},
{'score': 0.05133328214287758,
'token': 31915,
'token_str': ' validar',
'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
>>> from transformers import RobertaTokenizer, RobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 19, 768])
```

## Intended Uses and Limitations

You can use the raw model for fill-mask or fine-tune it to a downstream task.
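
As a rough sketch of the fine-tuning path (this example is not part of the original card, and the label, batch, and hyperparameters are placeholders rather than the settings used by the authors), the checkpoint can be loaded with a freshly initialized task head:

```python
# Hypothetical sketch: attach a sequence-classification head to the checkpoint and
# run a single training step on a toy example. The label, batch and learning rate
# are illustrative placeholders only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model = AutoModelForSequenceClassification.from_pretrained(
    'PlanTL-GOB-ES/roberta-base-bne', num_labels=2)  # classification head is newly initialized

batch = tokenizer(["Este modelo se ha entrenado con datos de la BNE."], return_tensors='pt')
labels = torch.tensor([1])  # toy label

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
print(float(loss))
```
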
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Here is an example of how the model can produce biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> set_seed(42)
>>> pprint(unmasker("Antonio está pensando en <mask>."))
[{'score': 0.07950365543365479,
'sequence': 'Antonio está pensando en ti.',
'token': 486,
'token_str': ' ti'},
{'score': 0.03375273942947388,
'sequence': 'Antonio está pensando en irse.',
'token': 13134,
'token_str': ' irse'},
{'score': 0.031026942655444145,
'sequence': 'Antonio está pensando en casarse.',
'token': 24852,
'token_str': ' casarse'},
{'score': 0.030703715980052948,
'sequence': 'Antonio está pensando en todo.',
'token': 665,
'token_str': ' todo'},
{'score': 0.02838558703660965,
'sequence': 'Antonio está pensando en ello.',
'token': 1577,
'token_str': ' ello'}]

>>> set_seed(42)
>>> pprint(unmasker("Mohammed está pensando en <mask>."))
[{'score': 0.05433618649840355,
'sequence': 'Mohammed está pensando en morir.',
'token': 9459,
'token_str': ' morir'},
{'score': 0.0400255024433136,
'sequence': 'Mohammed está pensando en irse.',
'token': 13134,
'token_str': ' irse'},
{'score': 0.03705748915672302,
'sequence': 'Mohammed está pensando en todo.',
'token': 665,
'token_str': ' todo'},
{'score': 0.03658654913306236,
'sequence': 'Mohammed está pensando en quedarse.',
'token': 9331,
'token_str': ' quedarse'},
{'score': 0.03329474478960037,
'sequence': 'Mohammed está pensando en ello.',
'token': 1577,
'token_str': ' ello'}]
```

## Training

### Training Data

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the corpus was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content; document boundaries were kept during the process. This resulted in 2TB of clean Spanish corpus, and a further global deduplication pass over the corpus yielded 570GB of text.
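
The exact preprocessing pipeline is not distributed with this model card. The following is only a simplified sketch of two of the steps named above (filtering badly formed sentences and deduplication); the heuristics and thresholds are made up for illustration:

```python
# Simplified, hypothetical illustration of two of the preprocessing steps described
# above. The real pipeline (sentence splitting, language detection, filtering,
# deduplication) is not reproduced here.
import hashlib

def is_well_formed(sentence: str) -> bool:
    """Toy quality filter: keep sentences that are long enough and mostly alphabetic."""
    if len(sentence.split()) < 3:
        return False
    alpha_ratio = sum(ch.isalpha() or ch.isspace() for ch in sentence) / len(sentence)
    return alpha_ratio > 0.8

def deduplicate(documents):
    """Drop exact duplicates by hashing whitespace-normalized text."""
    seen = set()
    for doc in documents:
        key = hashlib.sha1(" ".join(doc.lower().split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc

docs = [
    "La Biblioteca Nacional de España rastrea los dominios .es una vez al año.",
    "La  Biblioteca Nacional de  España rastrea los dominios .es una vez al año.",
    "@@ ## %% !!",
]
clean = [d for d in deduplicate(docs) if is_well_formed(d)]
print(clean)  # the near-duplicate and the malformed line are dropped
```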
 
Some of the statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |

### Training Procedure
The configuration of the **RoBERTa-base-bne** model is as follows:

- RoBERTa-b: 12-layer, 768-hidden, 12-heads, 125M parameters.

The pretraining objective used for this architecture is masked language modeling without next sentence prediction. The training corpus was tokenized using the byte-level version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-base-bne pre-training consists of masked language model training following the approach employed for RoBERTa base. Training lasted a total of 48 hours on 16 compute nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.
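
A quick way to check these figures against the published checkpoint (this snippet is not part of the original card; it assumes the `transformers` library and access to the Hugging Face Hub):

```python
# Sanity-check sketch: vocabulary size of the byte-level BPE tokenizer and the
# roberta-base dimensions quoted above. Assumes the checkpoint can be downloaded.
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
config = AutoConfig.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')

print(len(tokenizer))              # vocabulary size, expected 50,262
print(config.num_hidden_layers,    # 12 layers
      config.hidden_size,          # 768 hidden size
      config.num_attention_heads)  # 12 attention heads
print(sum(p.numel() for p in model.parameters()))  # roughly 125M parameters
```
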
## Evaluation

### Evaluation Results
When fine-tuned on downstream tasks, this model achieves the following results:

| Dataset      | Metric   | [**RoBERTa-b**](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) |
|--------------|----------|--------|
| MLDoc        | F1       | 0.9664 |
| CoNLL-NERC   | F1       | 0.8851 |
| CAPITEL-NERC | F1       | 0.8960 |
| PAWS-X       | F1       | 0.9020 |
| UD-POS       | F1       | 0.9907 |
| CAPITEL-POS  | F1       | 0.9846 |
| SQAC         | F1       | 0.7923 |
| STS          | Combined | 0.8533 |
| XNLI         | Accuracy | 0.8016 |

For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405).
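
The fine-tuned checkpoints behind these numbers are not bundled with this model card. Purely as an illustration of the setup (not the actual checkpoints, label sets, or hyperparameters used above), a token-classification head such as the one used for the NERC tasks can be attached to the base checkpoint like this:

```python
# Illustrative only: attach a token-classification (NER-style) head to the base model.
# num_labels=9 is a made-up placeholder (e.g. BIO tags for four entity types plus "O");
# it is not the label set or checkpoint behind the results in the table above.
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model = AutoModelForTokenClassification.from_pretrained(
    'PlanTL-GOB-ES/roberta-base-bne', num_labels=9)

inputs = tokenizer("La Biblioteca Nacional de España está en Madrid.", return_tensors='pt')
logits = model(**inputs).logits
print(logits.shape)  # (batch size, sequence length, num_labels)
```
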
## Additional Information

### Authors

The Text Mining Unit from the Barcelona Supercomputing Center.

### Citation Information

If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,

```

### Contact Information

For further information, send an email to <plantl-gob-es@bsc.es>.

### Funding

This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Licensing Information

This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Copyright

Copyright (2022) by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA).

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.


Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>