metrics:
- "f1"

---

# Spanish RoBERTa-large trained on BNE fine-tuned for the CAPITEL Part-of-Speech (POS) dataset

RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained on the largest Spanish corpus known to date: a total of 570GB of clean and deduplicated text, processed for this work and compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
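
The fine-tuned model can be used for POS tagging through the `transformers` token-classification pipeline. The snippet below is a minimal sketch: the repository id and the example sentence are illustrative assumptions, not details taken from this card.

```python
from transformers import pipeline

# Hypothetical repository id for this fine-tuned checkpoint; substitute the
# id shown at the top of this model page.
model_id = "BSC-TeMU/roberta-large-bne-capitel-pos"

# The token-classification pipeline covers POS tagging; the aggregation
# strategy merges sub-word pieces back into whole words.
tagger = pipeline("token-classification", model=model_id,
                  aggregation_strategy="simple")

for tag in tagger("El bosque tiene árboles muy altos."):
    print(tag["word"], tag["entity_group"], round(float(tag["score"]), 4))
```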

## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).

## Evaluation and results
F1 score: 0.9851

For evaluation details, visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
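
For reference, a micro-averaged F1 over flat per-token POS tags could be computed as sketched below. This is a generic illustration with made-up labels and scikit-learn as an assumed tool, not the authors' evaluation script (see the repository above for that).

```python
from sklearn.metrics import f1_score

# Toy gold and predicted tag sequences, flattened over all tokens;
# the label set here is illustrative, not the CAPITEL tagset.
gold = ["DET", "NOUN", "VERB", "ADJ", "PUNCT"]
pred = ["DET", "NOUN", "VERB", "NOUN", "PUNCT"]

# Micro-averaged F1 counts every token once; for single-label-per-token
# tasks like POS tagging it coincides with accuracy.
print(f1_score(gold, pred, average="micro"))  # 0.8
```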

## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models},
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```