Davlan committed on
Commit
6d9e518
1 Parent(s): 20ac98d

updating readme

Files changed (1)
  1. README.md +26 -29
README.md CHANGED
@@ -1,57 +1,54 @@
  Hugging Face's logo
  ---
- language: ha
  datasets:

  ---
- # xlm-roberta-base-finetuned-swahili
  ## Model description
- **xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than XLM-RoBERTa on text classification and named entity recognition datasets.

- Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on a Swahili corpus.
  ## Intended uses & limitations
  #### How to use
  You can use this model with the Transformers *pipeline* for masked token prediction.
  ```python
  >>> from transformers import pipeline
- >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
- >>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")

- [{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
- 'score': 0.5077782273292542,
- 'token': 190096,
- 'token_str': 'Ufaransa'},
- {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
- 'score': 0.3657738268375397,
- 'token': 7270,
- 'token_str': 'Paris'},
- {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
- 'score': 0.01592041552066803,
- 'token': 176392,
- 'token_str': 'Gabon'},
- {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
- 'score': 0.010881908237934113,
- 'token': 9942,
- 'token_str': 'France'},
- {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
- 'score': 0.009554869495332241,
- 'token': 185918,
- 'token_str': 'Marseille'}]

  ```
  #### Limitations and bias
  This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
  ## Training data
- This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/).

  ## Training procedure
  This model was trained on a single NVIDIA V100 GPU.

  ## Eval results on Test set (F-score, average over 5 runs)
- Dataset | XLM-R F1 | sw_roberta F1
  -|-|-
- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.37 | 89.74

  ### BibTeX entry and citation info
  By David Adelani

  Hugging Face's logo
  ---
+ language: yo
  datasets:

  ---
+ # xlm-roberta-base-finetuned-yoruba
  ## Model description
+ **xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than XLM-RoBERTa on text classification and named entity recognition datasets.

+ Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on a Yorùbá corpus.
  ## Intended uses & limitations
  #### How to use
  You can use this model with the Transformers *pipeline* for masked token prediction.
  ```python
  >>> from transformers import pipeline
+ >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba')
+ >>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")

+ [{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
+ 'score': 0.24844281375408173,
+ 'token': 44109,
+ 'token_str': '▁Queen'},
+ {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
+ 'score': 0.1665010154247284,
+ 'token': 1350,
+ 'token_str': '▁ile'},
+ {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
+ 'score': 0.07604238390922546,
+ 'token': 1053,
+ 'token_str': '▁ti'},
+ {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
+ 'score': 0.06353845447301865,
+ 'token': 12878,
+ 'token_str': '▁baba'},
+ {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>',
+ 'score': 0.03836742788553238,
+ 'token': 82879,
+ 'token_str': '▁Oba'}]

  ```
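Beyond the *pipeline*, you can also load the checkpoint directly with the Transformers auto classes, e.g. to inspect mask predictions yourself or to fine-tune it on a downstream task. The snippet below is only an illustrative sketch (it assumes `transformers` and `torch` are installed); it is not part of the published card.

```python
# Illustrative only: load the checkpoint without the pipeline helper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-finetuned-yoruba")
model = AutoModelForMaskedLM.from_pretrained("Davlan/xlm-roberta-base-finetuned-yoruba")

text = f"Arẹmọ Phillip to jẹ ọkọ {tokenizer.mask_token} Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Position of the <mask> token and its five highest-scoring replacements.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```
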
  #### Limitations and bias
  This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
  ## Training data
+ This model was fine-tuned on the Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), the [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3), [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.

  ## Training procedure
  This model was trained on a single NVIDIA V100 GPU.
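
The exact fine-tuning script and hyperparameters are not published on this card, so the sketch below only illustrates what continued masked-language-model fine-tuning of *xlm-roberta-base* with the Transformers `Trainer` could look like; the corpus file, batch size, epoch count, and masking probability are assumptions, not the settings used for this checkpoint.

```python
# Illustrative sketch of continued MLM fine-tuning; hyperparameters are assumed.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the multilingual base checkpoint, as described above.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical line-per-example Yorùbá text file standing in for the corpora listed above.
raw = load_dataset("text", data_files={"train": "yoruba_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking with the usual 15% probability (assumed, not reported on the card).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-yoruba",
    per_device_train_batch_size=8,  # assumed to fit a single V100
    num_train_epochs=3,             # assumed
    fp16=True,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```
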

  ## Eval results on Test set (F-score, average over 5 runs)
+ Dataset | XLM-R F1 | yo_roberta F1
  -|-|-
+ [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66
+ [BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | |

  ### BibTeX entry and citation info
  By David Adelani