Model: BERT
Lang: IT

Model description

This is a BERT [1] model for the Italian language. It was obtained by taking mBERT (bert-base-multilingual-cased) as a starting point and specializing it for Italian by modifying the embedding layer (as in [2], computing document-level token frequencies over the Wikipedia dataset).

The resulting model has 110M parameters, a vocabulary of 30,785 tokens, and a size of ~430 MB.
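The vocabulary-reduction idea behind this adaptation can be sketched as follows: rank tokens by document-level frequency (the number of documents each token occurs in) and keep the most frequent ones. This is only an illustrative sketch with a toy corpus and whitespace tokenization; it is not the actual procedure or code from [2].

```python
from collections import Counter

def select_vocabulary(documents, vocab_size):
    """Rank tokens by document-level frequency (the number of
    documents a token appears in) and keep the top vocab_size."""
    doc_freq = Counter()
    for doc in documents:
        # Count each token at most once per document.
        doc_freq.update(set(doc.lower().split()))
    return [token for token, _ in doc_freq.most_common(vocab_size)]

# Toy corpus standing in for the real Wikipedia dump.
corpus = [
    "la lingua italiana è una lingua romanza",
    "la pizza è un piatto italiano",
    "roma è la capitale italiana",
]
print(select_vocabulary(corpus, 2))
```

In the real setting, the tokens kept this way index into mBERT's existing embedding matrix, so the specialized model starts from the pretrained multilingual weights.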

Quick usage

```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-base-italian-cased")
model = BertModel.from_pretrained("osiria/bert-base-italian-cased")
```

References

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2010.05609

License

The model is released under the Apache-2.0 license.
