xlm-roberta-base-finetuned-luo is a Luo RoBERTa model obtained by fine-tuning the xlm-roberta-base model on Luo language texts. It provides better performance than the base XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is an xlm-roberta-base model that was fine-tuned on a Luo corpus.
You can use this model with the Transformers pipeline for masked token prediction:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-luo')
unmasker("Obila ma Changamwe <mask> pedho achije angwen mag njore")
```
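If you prefer to work with the tokenizer and model directly instead of the pipeline, a minimal sketch (assuming PyTorch is installed) looks like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the fine-tuned checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-finetuned-luo")
model = AutoModelForMaskedLM.from_pretrained("Davlan/xlm-roberta-base-finetuned-luo")

text = "Obila ma Changamwe <mask> pedho achije angwen mag njore"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and print the five most likely tokens for it
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_tokens.tolist()))
```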
This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains.
This model was fine-tuned on the JW300 corpus.
This model was trained on a single NVIDIA V100 GPU.
| Dataset | XLM-R F1 | luo_roberta F1 |
|---------|----------|----------------|
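The F1 scores above come from fine-tuning the checkpoints on a Luo NER dataset. As an illustration only (the exact training setup is not reproduced here, and `num_labels` is a placeholder for the label count of your NER tag set), the checkpoint can be loaded for token classification like this:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# num_labels is a placeholder: set it to the number of entity tags in your NER dataset
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-finetuned-luo")
model = AutoModelForTokenClassification.from_pretrained(
    "Davlan/xlm-roberta-base-finetuned-luo", num_labels=9
)
# The model can then be fine-tuned on token-labelled Luo text, e.g. with the Trainer API.
```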
By David Adelani