Davlan committed
Commit
a99ef20
1 Parent(s): 8c6d20a

updating Readme

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -2,8 +2,7 @@ Hugging Face's logo
 ---
 language: yo
 datasets:
-- [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
-- [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3)
+- [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
 ---
 # bert-base-multilingual-cased-finetuned-yoruba
 ## Model description
@@ -22,7 +21,7 @@ from transformers import pipeline
 #### Limitations and bias
 This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
 ## Training data
-This model was fine-tuned on on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
+This model was fine-tuned on Yorùbá corpus
 
 ## Training procedure
 This model was trained on a single NVIDIA V100 GPU
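
After this commit, the README's YAML frontmatter would presumably read as sketched below (reconstructed from the first hunk; note that Hugging Face model-card metadata normally expects plain dataset ids such as `menyo20k_mt` in the `datasets` field rather than markdown links, so the linked form here mirrors the diff as committed, not the recommended schema):

```yaml
---
language: yo
datasets:
- [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
```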