---
language:
- es
license: "cc-by-4.0"
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"  
metrics:
- "ppl"
widget:
- text: "Este año las campanadas de La Sexta las presentará <mask>." 
- text: "David Broncano es un presentador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."

---

# RoBERTa base trained with data from National Library of Spain (BNE)

## Model Description
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain from 2009 to 2019.
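
A quick way to try the model is a fill-mask query. Below is a minimal sketch using the Hugging Face `transformers` pipeline; the Hub ID `PlanTL-GOB-ES/roberta-base-bne` is an assumption and may differ from where this checkpoint is actually hosted.

```python
# Minimal fill-mask smoke test; the model ID below is an assumption.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-base-bne")

# One of the widget examples from this card.
predictions = fill_mask(
    "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
)
for p in predictions:
    print(f"{p['token_str']!r}  score={p['score']:.3f}")
```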

## Training corpora and preprocessing 
We cleaned 59TB of WARC files and deduplicated them at the computing-node level, which resulted in 2TB of clean Spanish text. We then performed a global deduplication, resulting in a final corpus of 570GB.
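
The exact cleaning pipeline is out of scope for this card, but the core idea behind document-level deduplication can be sketched as follows. This is an illustrative toy, not the pipeline used for the BNE corpus; `normalize` and the hashing choices are assumptions.

```python
# Toy document-level deduplication via content hashing (illustration only).
import hashlib

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so near-identical copies hash alike."""
    return " ".join(text.lower().split())

def deduplicate(documents):
    """Yield each document the first time its normalized content is seen."""
    seen = set()
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            yield doc

# Example: the second document is a near-duplicate and is dropped.
unique = list(deduplicate(["Hola  mundo", "hola mundo", "Adiós"]))
print(unique)  # ['Hola  mundo', 'Adiós']
```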

Some statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     |         201,080,084 |  135,733,450,668 |       570 |

## Tokenization and pre-training 
We trained a byte-level BPE (BBPE) tokenizer with a vocabulary size of 50,262 tokens. We held out 10,000 documents for validation and trained the model for 48 hours on 16 computing nodes, each with 4 NVIDIA V100 GPUs.
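
As a rough illustration, a BBPE tokenizer of this size can be trained with the Hugging Face `tokenizers` library. The corpus path and special tokens below are assumptions; only the 50,262-token vocabulary size comes from this card.

```python
# Sketch of training a byte-level BPE tokenizer; paths and special
# tokens are assumptions, vocab_size matches the figure reported above.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["bne_corpus.txt"],  # hypothetical plain-text corpus file
    vocab_size=50262,          # vocabulary size reported above
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("roberta-base-bne-tokenizer")
```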

## Evaluation and results
For evaluation details, visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).

## Citing 
Check out our paper for all the details: https://arxiv.org/abs/2107.07253

```bibtex
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models}, 
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```