Task: fill-mask. Mask token: `<mask>`
API endpoint:

```shell
curl -X POST \
  -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '"json encoded string"' \
  https://api-inference.huggingface.co/models/rdenadai/BR_BERTo
```
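The `curl` command above leaves the request body as a placeholder. As a hedged sketch (the example sentence is an assumption, not from the model card), the same request can be built in Python; the Inference API expects a JSON object whose `inputs` field contains text with the model's mask token:

```python
import json

# Model endpoint from the curl example above.
API_URL = "https://api-inference.huggingface.co/models/rdenadai/BR_BERTo"

# Hypothetical fill-mask payload: the sentence must contain the <mask> token.
payload = {"inputs": "Eu gosto de <mask>."}
headers = {
    "Authorization": "Bearer YOUR_ORG_OR_USER_API_TOKEN",
    "Content-Type": "application/json",
}
data = json.dumps(payload)  # the JSON-encoded string sent as the request body

# To actually query the API (requires a valid token and network access):
# import requests
# response = requests.post(API_URL, headers=headers, data=data)
# print(response.json())
```

Replace `YOUR_ORG_OR_USER_API_TOKEN` with a real token before sending the request.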

Monthly model downloads (rdenadai/BR_BERTo): 69 in the last 30 days

Frameworks: pytorch, tf

Contributed by rdenadai (Rodolfo De Nadai)

How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("rdenadai/BR_BERTo")
model = AutoModelWithLMHead.from_pretrained("rdenadai/BR_BERTo")
```
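For a quick end-to-end check, the `fill-mask` pipeline wraps the tokenizer and model loading shown above. This is a sketch, not part of the model card: the helper name and example sentence are illustrative, and running it downloads the model weights.

```python
from transformers import pipeline

def top_predictions(text, model_name="rdenadai/BR_BERTo", k=5):
    """Return (token, score) pairs predicted for the <mask> position in `text`.

    Illustrative helper (an assumption, not from the model card); loading
    the pipeline downloads the model weights and requires network access.
    """
    fill = pipeline("fill-mask", model=model_name, top_k=k)
    return [(p["token_str"], p["score"]) for p in fill(text)]

# Example usage (downloads the model; requires network access):
# for token, score in top_predictions("Eu gosto de <mask>."):
#     print(token, score)
```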

BR_BERTo

Portuguese (Brazil) model for text inference.

Params

Trained on a corpus of 5_258_624 sentences containing 132_807_374 tokens (992_418 of them unique).

  • Vocab size: 220_000
  • RobertaForMaskedLM size: 32
  • Num train epochs: 2
  • Time to train: ~23 hours (on GCP with an Nvidia T4)

I followed the great tutorial from the HuggingFace team:

How to train a new language model from scratch using Transformers and Tokenizers
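The tokenizer-training step of that tutorial can be sketched as follows. This is a scaled-down illustration under stated assumptions: BR_BERTo used a vocab size of 220_000 trained over millions of sentences, while the toy corpus and reduced vocab size here only demonstrate the `tokenizers` API shape.

```python
from tokenizers import ByteLevelBPETokenizer

# Toy in-memory corpus (an assumption for illustration; the real model
# was trained on 5_258_624 Brazilian Portuguese sentences).
corpus = [
    "Eu gosto de aprender línguas.",
    "O Brasil é um país grande.",
    "Modelos de linguagem são úteis.",
]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    corpus,
    vocab_size=1000,   # BR_BERTo used 220_000 (see Params above)
    min_frequency=1,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

encoded = tokenizer.encode("Eu gosto de aprender.")
print(encoded.tokens)  # byte-level BPE subword tokens
```

The `<mask>` special token registered here is the same one used in the fill-mask examples above.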