---
language: yo
datasets:
- Bible, JW300, Menyo-20k, Yoruba Embedding corpus and CC-Aligned, Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends
---
# bert-base-multilingual-cased-finetuned-yoruba
## Model description
bert-base-multilingual-cased-finetuned-yoruba is a Yoruba BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Yorùbá language texts. It provides better performance than multilingual BERT on Yorùbá text classification and named entity recognition datasets.

Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Yorùbá corpus.
## Intended uses & limitations

### How to use
You can use this model with the Transformers pipeline for masked token prediction:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline

# Hub ID assumed from this card's title and author
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-finetuned-yoruba")
model = AutoModelForMaskedLM.from_pretrained("Davlan/bert-base-multilingual-cased-finetuned-yoruba")
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Yorùbá sentence with one token masked out
example = "Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun"
results = unmasker(example)
print(results)
```
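Each prediction returned by the fill-mask pipeline is a dictionary with `score`, `token`, `token_str`, and `sequence` fields, ranked from the most to the least probable filler for the `[MASK]` position.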
### Limitations and bias
This model is limited by its training corpus, which is drawn from particular domains (chiefly religious texts, Wikipedia, and news) and a specific span of time. It may not generalize well to all use cases in different domains.
## Training data

This model was fine-tuned on the JW300 Yorùbá corpus and the Menyo-20k dataset.
## Training procedure

This model was trained on a single NVIDIA V100 GPU.
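The card does not record the fine-tuning setup beyond the hardware, so the following is only a minimal sketch of how masked-language-model fine-tuning of this kind is commonly done with the Transformers `Trainer`; the corpus path and all hyperparameters below are illustrative assumptions, not the author's settings.

```python
# Hypothetical sketch of MLM fine-tuning with Hugging Face Transformers.
# Hyperparameters and file paths are illustrative, not the author's settings.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Plain-text Yorùbá corpus, one document per line (path is hypothetical)
raw = load_dataset("text", data_files={"train": "yoruba_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style objective: mask 15% of tokens at random
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-finetuned-yoruba",
    per_device_train_batch_size=16,  # sized for a single V100
    num_train_epochs=3,
    save_steps=10_000,
)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```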
## Eval results on Test set (F-score)

Dataset | F1-score
---|---
Yoruba GV NER | 86.26
MasakhaNER | 75.76
BBC Yoruba | 91.75
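For reference, NER F-scores like those above are conventionally entity-level F1 in the style of the `seqeval` package (used by benchmarks such as MasakhaNER); below is a small, self-contained illustration with made-up tag sequences.

```python
# Entity-level F1 as commonly reported for NER benchmarks; the label
# sequences below are made-up examples, not model outputs.
from seqeval.metrics import f1_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "I-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]

# An entity counts as correct only if its span and type both match exactly:
# 2 of 3 gold entities are recovered here, so precision = recall = F1 = 2/3.
print(f1_score(y_true, y_pred))  # ~0.667
```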
## BibTeX entry and citation info
By David Adelani