

  
    Model: RoBERTa
    Lang: IT
  

Model description

This is a RoBERTa [1] model for the Italian language, obtained using XLM-RoBERTa [2] (xlm-roberta-base) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in [3], computing document-level token frequencies over the Wikipedia dataset).
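The embedding-layer modification can be sketched as follows. This is an illustrative toy example of the vocabulary-reduction idea in [3], not the authors' actual code: the embedding rows of the tokens that are frequent in the target language are kept, shrinking the multilingual vocabulary to a language-specific one while reusing the pretrained weights. All sizes and token ids below are made up for illustration.

```python
import numpy as np

# Toy stand-in for a pretrained multilingual embedding matrix
# (the real XLM-RoBERTa matrix is ~250k rows, not 1000).
full_vocab_size, hidden_size = 1000, 8
rng = np.random.default_rng(0)
full_embeddings = rng.standard_normal((full_vocab_size, hidden_size))

# Hypothetical token ids selected by document-level frequency
# over the target-language corpus (arbitrary subset here).
kept_token_ids = [0, 1, 2, 5, 42, 99, 500, 999]

# The reduced embedding matrix simply reuses the pretrained rows,
# so the focused model keeps the original representations.
reduced_embeddings = full_embeddings[kept_token_ids]
print(reduced_embeddings.shape)  # (8, 8)
```

The rest of the transformer layers are shared as-is; only the vocabulary and its embedding rows shrink, which is what brings the model down to a ~50k-token Italian vocabulary.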

The resulting model has 125M parameters, a vocabulary of 50,670 tokens, and a size of ~500 MB.

Quick usage

from transformers import RobertaTokenizerFast, RobertaModel

tokenizer = RobertaTokenizerFast.from_pretrained("osiria/roberta-base-italian")
model = RobertaModel.from_pretrained("osiria/roberta-base-italian")
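Once loaded, the model can be used like any other Hugging Face RoBERTa encoder. The sketch below shows one common pattern, mean-pooling the token vectors into a sentence embedding; to keep it self-contained it uses a tiny randomly initialized RoBERTa (the config sizes are made up), but with the real checkpoint you would call `RobertaModel.from_pretrained("osiria/roberta-base-italian")` instead.

```python
import torch
from transformers import RobertaConfig, RobertaModel

# Tiny random model so the example runs without downloading weights.
config = RobertaConfig(vocab_size=100, hidden_size=32,
                       num_hidden_layers=2, num_attention_heads=4,
                       intermediate_size=64)
model = RobertaModel(config)
model.eval()

input_ids = torch.tensor([[0, 5, 6, 7, 2]])   # toy token ids
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    hidden = model(input_ids, attention_mask=attention_mask).last_hidden_state

# Average the (non-padding) token vectors into one sentence embedding.
mask = attention_mask.unsqueeze(-1)
sentence_embedding = (hidden * mask).sum(1) / mask.sum(1)
print(sentence_embedding.shape)  # torch.Size([1, 32])
```

With the real model, `hidden_size` is 768, so each sentence embedding is a 768-dimensional vector.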

References

[1] https://arxiv.org/abs/1907.11692

[2] https://arxiv.org/abs/1911.02116

[3] https://arxiv.org/abs/2010.05609

License

The model is released under the MIT license.
