---
language: yo
datasets:
- menyo20k_mt
---

# bert-base-multilingual-cased-finetuned-yoruba

## Model description

**bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than multilingual BERT on Yorùbá text classification and named entity recognition datasets.

Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Yorùbá corpus.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for masked-token prediction.
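For example, a minimal sketch of masked-token prediction. The Hub model ID and the example sentence below are assumptions for illustration, not taken from this card; substitute the actual model ID if it differs.

```python
from transformers import pipeline

# Assumed Hub model ID -- replace with the actual ID if it differs.
unmasker = pipeline(
    "fill-mask",
    model="Davlan/bert-base-multilingual-cased-finetuned-yoruba",
)

# Illustrative Yorùbá sentence ("Lagos is a big city in [MASK].");
# [MASK] is BERT's mask token.
for prediction in unmasker("Èkó jẹ́ ìlú ńlá kan ní [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```

Each prediction is a dict containing the filled-in token (`token_str`), its probability (`score`), and the completed sentence (`sequence`).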

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, which may not generalize well across all use cases in different domains.

## Training data

This model was fine-tuned on a Yorùbá corpus, including the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

## Training procedure

This model was trained on a single NVIDIA V100 GPU.
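The card does not list the fine-tuning setup. As an illustration only, here is a minimal sketch of masked-language-model fine-tuning with the Transformers `Trainer`, assuming a plain-text Yorùbá corpus in a hypothetical file `yoruba_corpus.txt`; none of the hyperparameter values below are from the released model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the multilingual BERT checkpoint named in the model description.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Hypothetical corpus: one Yorùbá sentence per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "yoruba_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking for the masked-language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

# Placeholder hyperparameters, not the values used for the released model.
args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-finetuned-yoruba",
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```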

## Eval results on test set (F1-score)

Dataset | F1-score
---|---
Yoruba GV NER | 75.34
MasakhaNER | 80.82
BBC Yoruba | 80.66

### BibTeX entry and citation info

By David Adelani

```
```