
# nepbert

## Model description

RoBERTa trained from scratch on the Nepali portion of the CC-100 dataset (12 million sentences).

## Intended uses & limitations

### How to use

```python
from transformers import pipeline

# Load the fill-mask pipeline with the released model and tokenizer.
pipe = pipeline(
    "fill-mask",
    model="amitness/nepbert",
    tokenizer="amitness/nepbert",
)
# Predict the masked token in a Nepali sentence (roughly: "How do you <mask>?").
print(pipe("तिमीलाई कस्तो <mask>?"))
```
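
The pipeline returns a list of candidates for the masked position, each a dict containing the completed sentence, the predicted token, and its score.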

## Training data

The data was taken from the Nepali-language subset of the CC-100 dataset.
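
A minimal sketch of how the Nepali subset can be loaded with the `datasets` library (illustrative only; the exact download and preprocessing pipeline used for this model is not documented here):

```python
from datasets import load_dataset

# Stream the Nepali ("ne") portion of CC-100 from the Hugging Face Hub,
# avoiding a full download of the corpus.
dataset = load_dataset("cc100", lang="ne", split="train", streaming=True)

# Peek at a few raw sentences.
for example in dataset.take(3):
    print(example["text"])
```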

## Training procedure

The model was trained on Google Colab using a single Tesla V100 GPU.
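
A minimal sketch of a from-scratch RoBERTa masked-language-modeling setup with the `Trainer` API. The toy corpus, config values, and hyperparameters below are assumptions for illustration, not the exact settings used to train this model:

```python
from datasets import Dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Assumption: reuse the released tokenizer; originally a tokenizer would be
# trained on the Nepali corpus first.
tokenizer = RobertaTokenizerFast.from_pretrained("amitness/nepbert")

# A fresh, randomly initialized RoBERTa; vocab size must match the tokenizer.
config = RobertaConfig(vocab_size=tokenizer.vocab_size)
model = RobertaForMaskedLM(config)

# Tiny two-sentence corpus standing in for the CC-100 Nepali data.
dataset = Dataset.from_dict({"text": ["तिमीलाई कस्तो छ?", "आज मौसम राम्रो छ।"]})
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# Dynamically mask 15% of tokens for the masked-language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="nepbert-mlm",
    per_device_train_batch_size=8,  # illustrative hyperparameter
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
```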
