---
license: apache-2.0
language:
- it
widget:
- text: "Milano è una [MASK] dell'Italia"
example_title: "Example 1"
- text: "Giacomo Leopardi è stato uno dei più grandi [MASK] del classicismo italiano"
example_title: "Example 2"
- text: "La pizza è un piatto tipico della [MASK] gastronomica italiana"
example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------
Model: BERT
Lang: IT
--------------------------------------------------------------------------------------------------
## Model description
This is a BERT [1] model for the Italian language, obtained by starting from mBERT ([bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)) and focusing it on Italian by modifying the embedding layer (as in [2], computing document-level token frequencies over the Wikipedia dataset).
The resulting model has 110M parameters, a vocabulary of 30,785 tokens, and a size of ~430 MB.
## Quick usage
```python
from transformers import BertTokenizerFast, BertModel

# load the tokenizer and the encoder from the Hugging Face Hub
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-base-italian-cased")
model = BertModel.from_pretrained("osiria/bert-base-italian-cased")
```
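For masked-token prediction, as in the widget examples above, a minimal sketch using the standard `fill-mask` pipeline from transformers might look like the following; the example sentence is taken from the widget, and the top-k handling follows the generic pipeline API rather than anything specific to this checkpoint.

```python
from transformers import pipeline

# build a fill-mask pipeline on top of the Italian BERT checkpoint
fill_mask = pipeline("fill-mask", model="osiria/bert-base-italian-cased")

# predict the most likely tokens for the [MASK] position
for prediction in fill_mask("Milano è una [MASK] dell'Italia", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```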
## References
[1] https://arxiv.org/abs/1810.04805
[2] https://arxiv.org/abs/2010.05609
## License

The model is released under the Apache-2.0 license.