
---
language: sw
datasets:
---


xlm-roberta-base-finetuned-swahili

Model description

xlm-roberta-base-finetuned-swahili is a Swahili RoBERTa model obtained by fine-tuning the xlm-roberta-base model on Swahili-language texts. It provides better performance than the base XLM-RoBERTa on text classification and named entity recognition datasets.

Specifically, this model is an xlm-roberta-base model that was fine-tuned on a Swahili corpus.

Intended uses & limitations

How to use

You can use this model with the Transformers fill-mask pipeline for masked token prediction.

>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa', 
'score': 0.31642526388168335, 
'token': 10728, 
'token_str': 'Paris'}, 
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa', 
'score': 0.15753623843193054, 
'token': 57557, 
'token_str': 'Rwanda'}, 
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa', 
'score': 0.07211585342884064, 
'token': 57824, 
'token_str': 'Burundi'}, 
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa', 
'score': 0.029844321310520172, 
'token': 10688, 
'token_str': 'France'}, 
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa', 
'score': 0.0265930388122797, 
'token': 38052, 
'token_str': 'Senegal'}]
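
If you prefer not to use the pipeline, the same prediction can be made with the model and tokenizer directly. This is a minimal sketch using standard Transformers classes (only the model ID comes from this card; the top-5 printout mirrors the pipeline output above):

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "Davlan/xlm-roberta-base-finetuned-swahili"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = ("Jumatatu, Bwana Kagame alielezea shirika la France24 "
        "huko <mask> kwamba hakuna uhalifu ulitendwa")
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the <mask> position and take the five highest-scoring tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].softmax(dim=-1).topk(5)
for score, token_id in zip(top5.values[0], top5.indices[0]):
    print(tokenizer.decode([int(token_id)]), float(score))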

Limitations and bias

This model is limited by its training corpus of web-crawled text (CC-100) from a specific span of time, so it may not generalize well to all use cases and domains.

Training data

This model was fine-tuned on the Swahili portion of CC-100.
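
The card does not show how the corpus was prepared; one way to access the Swahili portion of CC-100 is through the Hugging Face datasets library (a sketch; streaming avoids downloading the full corpus up front):

from datasets import load_dataset

# Stream the Swahili split of CC-100 from the Hub and peek at one record.
cc100_sw = load_dataset("cc100", lang="sw", split="train", streaming=True)
print(next(iter(cc100_sw))["text"])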

Training procedure

This model was trained on a single NVIDIA V100 GPU.
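
The exact training script and hyperparameters are not given in the card. Below is a minimal sketch of masked-language-model fine-tuning with the Transformers Trainer; the batch size, epoch count, and sequence length are illustrative placeholders, not the values used for this model:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Tokenize the Swahili corpus (here: CC-100, as in the Training data section).
dataset = load_dataset("cc100", lang="sw", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

# Randomly mask 15% of tokens: the standard RoBERTa-style MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="xlm-roberta-base-finetuned-swahili",
                         per_device_train_batch_size=8,
                         num_train_epochs=3)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()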

Eval results on Test set (F-score, average over 5 runs)

| Dataset    | XLM-R F1 | sw_roberta F1 |
|------------|----------|---------------|
| MasakhaNER | 87.37    | 89.74         |
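
The evaluation code is not included in the card; entity-level F1 on NER datasets such as MasakhaNER is typically computed with the seqeval library. A toy sketch (the tag sequences are invented for illustration):

from seqeval.metrics import f1_score

# Gold and predicted BIO tag sequences, one list per sentence.
y_true = [["B-LOC", "I-LOC", "O", "B-PER", "O"]]
y_pred = [["B-LOC", "I-LOC", "O", "O", "O"]]

# One of the two gold entities is matched exactly:
# precision = 1.0, recall = 0.5, F1 ≈ 0.67.
print(f1_score(y_true, y_pred))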

BibTeX entry and citation info

By David Adelani