How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-uncased")
```
In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State Library open-sources Italian BERT models 🎉
The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the OPUS corpora collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spaCy). Our cased and uncased models were trained with an initial sequence length of 512 subwords for ~2-3M steps.
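As an illustration, sentence splitting with NLTK's Punkt tokenizer (which ships an Italian model) looks roughly like this; this is a sketch of the preprocessing step described above, not our exact pipeline:

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # Punkt sentence tokenizer models (includes Italian)

text = "Questa è la prima frase. Questa è la seconda."
# Split raw corpus text into sentences using the Italian Punkt model
sentences = sent_tokenize(text, language="italian")
print(sentences)  # ['Questa è la prima frase.', 'Questa è la seconda.']
```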
For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the OSCAR corpus. Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
For results on downstream tasks like NER or PoS tagging, please refer to this repository.
With Transformers >= 2.3, our Italian BERT models can be loaded like this:
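For example, for the uncased base model (the same snippet as at the top of this card):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-uncased")
```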
To load the (recommended) Italian XXL BERT models, just use:
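A sketch, assuming the XXL checkpoints follow the same naming scheme on the model hub (uncased variant shown; swap in `cased` as needed):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
```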
All models are available on the Huggingface model hub.
For questions about our BERT models just open an issue here 🤗
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the Hugging Face team, it is possible to download both cased and uncased models from their S3 storage 🤗
The model configuration (`config.json`):

```json
{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 31102
}
```
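To verify these values programmatically, the configuration can be fetched without downloading the weights; a minimal sketch using the uncased base model:

```python
from transformers import AutoConfig

# Loads only the configuration file, not the model weights
config = AutoConfig.from_pretrained("dbmdz/bert-base-italian-uncased")
print(config.hidden_size)  # 768
print(config.vocab_size)   # 31102
```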