Task: fill-mask (mask token: <mask>)

rdenadai/BR_BERTo
Contributed by Rodolfo De Nadai (rdenadai)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("rdenadai/BR_BERTo")
model = AutoModelForMaskedLM.from_pretrained("rdenadai/BR_BERTo")
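Once loaded, the model can be queried through the fill-mask pipeline. A minimal sketch (the example sentence below is illustrative, not from the model card):

```python
from transformers import pipeline

# Load the model into a fill-mask pipeline; weights download on first use.
fill = pipeline("fill-mask", model="rdenadai/BR_BERTo")

# Illustrative Brazilian Portuguese sentence using the model's <mask> token.
for pred in fill("Hoje o tempo está muito <mask>."):
    print(pred["token_str"], pred["score"])
```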


Portuguese (Brazil) model for masked language modeling (fill-mask).


Trained on a corpus of 6,993,330 sentences.

  • Vocab size: 150,000
  • RobertaForMaskedLM size: 512
  • Num train epochs: 3
  • Time to train: ~10 days (on GCP with an Nvidia T4)
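The stats above pin down only the vocabulary size and sequence length; a hedged reconstruction of the corresponding RobertaConfig, with every other hyperparameter left at its RoBERTa default (an assumption, not stated in the card):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# vocab_size and the 512-token sequence length come from the stats above;
# all other hyperparameters are assumed RoBERTa defaults.
config = RobertaConfig(
    vocab_size=150_000,
    max_position_embeddings=514,  # 512 tokens + 2 special positions
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```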

I followed the great tutorial from the Hugging Face team:

How to train a new language model from scratch using Transformers and Tokenizers

More info here: