How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-language-model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("iarfmoose/roberta-small-bulgarian")
model = AutoModelForMaskedLM.from_pretrained("iarfmoose/roberta-small-bulgarian")

RoBERTa-small-bulgarian

The RoBERTa model was originally introduced in the paper "RoBERTa: A Robustly Optimized BERT Pretraining Approach" (Liu et al., 2019). This is a smaller version of RoBERTa-base-bulgarian with only 6 hidden layers, but with similar performance.
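
To verify the reduced depth, the model configuration can be inspected directly. The following is a minimal sketch using the standard transformers AutoConfig API; the values are read from the configuration published with the model rather than assumed.

from transformers import AutoConfig

# Load the published configuration and inspect the architecture
config = AutoConfig.from_pretrained("iarfmoose/roberta-small-bulgarian")
print(config.num_hidden_layers)  # 6 hidden layers, per the description above
print(config.hidden_size)        # hidden dimension of each layer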

Intended uses

This model can be used for cloze tasks (masked language modeling), as in the example below, or fine-tuned on other tasks in Bulgarian.
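
For example, a cloze query can be run with the fill-mask pipeline. This is a minimal sketch; the Bulgarian example sentence is purely illustrative, and the mask token for this model is <mask>.

from transformers import pipeline

# Build a fill-mask pipeline around this model (mask token: <mask>)
fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-small-bulgarian")

# Predict the masked word in an illustrative Bulgarian sentence
# ("The capital of Bulgaria is <mask>.")
for prediction in fill_mask("Столицата на България е <mask>."):
    print(prediction["token_str"], prediction["score"])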

Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.

Training data

This model was trained on the following data:

Training procedure

The model was pretrained using a masked language-modeling objective with dynamic masking, as described in the RoBERTa paper.

It was trained for 160k steps. The batch size was limited to 8 due to GPU memory constraints.
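
The sketch below illustrates this kind of masked language-modeling setup with the transformers Trainer. It is not the exact script used to train this model: the corpus file, tokenization settings, and output path are assumptions, and only the batch size (8) and step count (160k) are taken from the description above. DataCollatorForLanguageModeling re-samples the masked positions each time a batch is collated, which provides the dynamic masking.

from datasets import load_dataset
from transformers import (
    AutoConfig,
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the published architecture but with freshly initialized weights,
# since this is a pretraining (not fine-tuning) setup
tokenizer = AutoTokenizer.from_pretrained("iarfmoose/roberta-small-bulgarian")
config = AutoConfig.from_pretrained("iarfmoose/roberta-small-bulgarian")
model = AutoModelForMaskedLM.from_config(config)

# Illustrative corpus: any plain-text Bulgarian dataset with one document per line
dataset = load_dataset("text", data_files={"train": "bulgarian_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are re-masked every time a batch is built
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="roberta-small-bulgarian-mlm",  # illustrative output path
    per_device_train_batch_size=8,             # batch size of 8, as mentioned above
    max_steps=160_000,                         # 160k training steps, as mentioned above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()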