metrics:
- "f1"

---

# Spanish RoBERTa-base trained on BNE, fine-tuned for the CAPITEL Named Entity Recognition (NER) dataset

RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained on the largest Spanish corpus known to date: a total of 570GB of clean, deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
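
For a quick test, the fine-tuned checkpoint can be used with the `transformers` token-classification pipeline. The sketch below is illustrative only: it assumes the checkpoint is published under the model ID `BSC-TeMU/roberta-base-bne-capitel-ner` (inferred from the base model's naming above; substitute the actual repository ID of this model).

```python
# Minimal sketch: run NER with the fine-tuned model via the transformers pipeline.
# The model ID below is an assumption based on the base model's naming scheme.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="BSC-TeMU/roberta-base-bne-capitel-ner",  # assumed model ID
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

text = "La Biblioteca Nacional de España se encuentra en Madrid."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```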

## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).

## Evaluation and results
F1 Score: 0.8998

For evaluation details, visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
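
The score is presumably an entity-level F1, the standard metric for this kind of NER task; the exact evaluation script lives in the repository above. Purely as an illustration, entity-level F1 over IOB-tagged sequences can be computed with the `seqeval` package (the tag sequences here are toy data, not CAPITEL output):

```python
# Sketch: entity-level F1 with seqeval, the usual metric for IOB-tagged NER.
from seqeval.metrics import f1_score

gold = [["B-LOC", "I-LOC", "O", "B-ORG"]]  # toy gold labels
pred = [["B-LOC", "I-LOC", "O", "B-ORG"]]  # toy predictions

print(f1_score(gold, pred))  # 1.0 for this toy example
```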

## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models},
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```