updating yoruba readme

README.md

You can use this model with the Transformers *pipeline* for masked token prediction.
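A minimal sketch of the *pipeline* usage mentioned above. The model id and the example sentence are illustrative assumptions; substitute the checkpoint id from this model card:

```python
# Hypothetical fill-mask usage sketch; the model id below is an assumption,
# not necessarily this repository's checkpoint.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Davlan/bert-base-multilingual-cased-finetuned-yoruba",  # assumed id
)

# Example Yoruba sentence with one masked token ("Every day I go to [MASK].").
predictions = unmasker("Ojoojumọ ni mo nlọ si [MASK].")

# Each prediction is a dict with the filled token and its score.
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

The pipeline returns candidates ranked by score; pass `top_k=` to control how many are returned.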
#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. It may not generalize well to use cases in other domains.
## Training data

This model was fine-tuned on the Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), the [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3), [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure

This model was trained on a single NVIDIA V100 GPU.
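The masked-LM objective behind this fine-tuning can be sketched with the Transformers data collator. The base tokenizer, sample sentences, and masking probability below are assumptions for illustration, not the authors' exact recipe:

```python
# Sketch of masked-LM batch preparation (assumed setup: mBERT tokenizer,
# 15% masking probability, toy Yoruba sentences instead of the real corpora).
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Tokenize a small sample; in practice this would be the corpora listed above.
encodings = tokenizer(["Mo wa ni ile", "Bawo ni o se wa"], truncation=True)
batch = collator([{"input_ids": ids} for ids in encodings["input_ids"]])

# batch["labels"] keeps the original token ids at masked positions and -100
# elsewhere, so the loss is computed only on the masked tokens.
print(batch["input_ids"].shape, batch["labels"].shape)
```

A `Trainer` with this collator over the tokenized corpora is the usual way to run such a fine-tune on a single GPU.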