This is a slightly smaller model trained on the OSCAR Sinhala dedup dataset. Sinhala is a low-resource language, and only a handful of models have been trained for it, so this model is a useful starting point for fine-tuning on downstream tasks.
The architecture chosen for training is RoBERTa; the authoritative hyperparameters are in the model's `config.json` on the Hub.
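As a rough sketch of what such a configuration looks like in `transformers` (the values below are illustrative assumptions, not confirmed from the card; check `config.json` at huggingface.co/keshan/SinhalaBERTo for the real ones):

```python
from transformers import RobertaConfig

# Illustrative configuration for a "slightly smaller" RoBERTa;
# the exact values are assumptions -- see the model's config.json.
config = RobertaConfig(
    vocab_size=52_000,            # byte-level BPE vocabulary for the Sinhala corpus
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,          # fewer layers than RoBERTa-base (12)
    type_vocab_size=1,            # RoBERTa does not use token type embeddings
)
```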
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Load the model and its tokenizer from the Hugging Face Hub
model = AutoModelForMaskedLM.from_pretrained("keshan/SinhalaBERTo")
tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")

# Predict the masked token ("I <mask> home." in Sinhala)
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
fill_mask("මම ගෙදර <mask>.")
```
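Since the main value of the model is as a starting point for downstream tasks, here is a minimal fine-tuning sketch for sequence classification. The dataset here is a hypothetical placeholder (`train_texts` and `train_labels` are stand-ins you would replace with your own labelled Sinhala data), and the training arguments are only illustrative defaults:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")
# The MLM head is dropped and a freshly initialised classification head is added
model = AutoModelForSequenceClassification.from_pretrained(
    "keshan/SinhalaBERTo", num_labels=2
)

# Placeholder data: substitute your own labelled Sinhala examples
train_texts = ["මම ගෙදර යනවා.", "මට එය එපා වෙලා."]
train_labels = [1, 0]

class SinhalaDataset(torch.utils.data.Dataset):
    """Wraps tokenised texts and labels for the Trainer API."""

    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sinhala-classifier", num_train_epochs=3),
    train_dataset=SinhalaDataset(train_texts, train_labels),
)
trainer.train()
```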