asier-gutierrez committed on
Commit
08c824f
1 Parent(s): 345b205

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -21,7 +21,7 @@ widget:
 
 ---
 
-# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset.
+# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
 RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
 
 Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne