---
license: mit
language:
- it
widget:
- text: Milano è una <mask> italiana
  example_title: Example 1
- text: Leopardi è stato uno dei più grandi <mask> del classicismo italiano
  example_title: Example 2
- text: L'Italia è uno <mask> dell'Unione Europea
  example_title: Example 3
---
**Model:** RoBERTa

**Lang:** IT
## Model description

This is a RoBERTa [1] model for the Italian language, obtained using XLM-RoBERTa [2] (xlm-roberta-base) as a starting point and focusing it on Italian by modifying the embedding layer (as in [3], computing document-level token frequencies over the Wikipedia dataset).

The resulting model has 125M parameters, a vocabulary of 50,670 tokens, and a size of ~500 MB.
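The vocabulary-reduction step described above can be sketched as follows: keep the tokens whose document-level frequency in the target-language corpus passes a threshold, then slice the original embedding matrix to those rows. This is a minimal illustration with toy data — the vocabulary, frequencies, threshold, and matrix below are all hypothetical, not the actual XLM-RoBERTa weights:

```python
import numpy as np

# Hypothetical multilingual vocabulary with document-level frequencies
# computed over an Italian corpus (toy numbers, for illustration only).
full_vocab = ["<s>", "ciao", "the", "città", "und", "italiana", "</s>"]
doc_freq = np.array([1.0, 0.62, 0.01, 0.55, 0.00, 0.48, 1.0])

# Hypothetical embedding matrix: one row per token (dimension 4 for brevity).
embeddings = np.arange(len(full_vocab) * 4, dtype=float).reshape(len(full_vocab), 4)

# Keep every token whose document frequency passes a (hypothetical) threshold;
# special tokens survive because their frequency is 1.0 by construction.
keep = np.flatnonzero(doc_freq >= 0.1)
new_vocab = [full_vocab[i] for i in keep]
new_embeddings = embeddings[keep]  # rows copied in original vocabulary order

print(new_vocab)             # → ['<s>', 'ciao', 'città', 'italiana', '</s>']
print(new_embeddings.shape)  # → (5, 4)
```

In the real procedure the retained embedding rows initialize the smaller model's embedding layer, which is why the focused model keeps the multilingual pretraining while shrinking from XLM-RoBERTa's 250k-token vocabulary to ~50k tokens.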
## Quick usage

```python
from transformers import RobertaTokenizerFast, RobertaModel

tokenizer = RobertaTokenizerFast.from_pretrained("osiria/roberta-base-italian")
model = RobertaModel.from_pretrained("osiria/roberta-base-italian")
```
## References

[1] https://arxiv.org/abs/1907.11692

[2] https://arxiv.org/abs/1911.02116

[3] https://arxiv.org/abs/2010.05609
## License

The model is released under the MIT license.