Davlan committed on
Commit c68a564
Parent: 125c9c0

updating README

Files changed (1)
  1. README.md +17 -1
README.md CHANGED
@@ -13,10 +13,26 @@ Specifically, this model is a *bert-base-multilingual-cased* model that was fine
  #### How to use
  You can use this model with the Transformers *pipeline* for masked token prediction.
  ```python
- from transformers import pipeline
  >>> from transformers import pipeline
  >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba')
  >>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
+
+ [{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746,
+ 'token': 12176,
+ 'token_str': 'Mary'},
+ {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092,
+ 'token': 13704,
+ 'token_str': 'Queen'},
+ {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615,
+ 'token': 14382,
+ 'token_str': 'ti'},
+ {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525,
+ 'token': 11515,
+ 'token_str': 'King'},
+ {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962,
+ 'token': 14005,
+ 'token_str': 'Lady'}]
+
  ```
  #### Limitations and bias
  This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
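
For readers who want to reproduce the added example output without the *pipeline* helper, the following is a minimal sketch using the generic `AutoTokenizer`/`AutoModelForMaskedLM` classes from `transformers`. It is not part of this commit; the top-5 selection below is an assumption chosen to mirror the pipeline's default `top_k=5` output shown in the diff.

```python
# Minimal sketch (not part of this commit): the same fill-mask query done
# manually with the generic transformers Auto* classes instead of pipeline().
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = 'Davlan/bert-base-multilingual-cased-finetuned-yoruba'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = ("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan "
        "ti dagbere faye lẹni ọdun mọkandilọgọrun")
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the five highest-probability tokens,
# mirroring the pipeline's default top_k=5 behavior (an assumption here).
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
probs = logits[0, mask_pos].softmax(dim=-1)
top = probs.topk(5)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(tokenizer.convert_ids_to_tokens(int(token_id)), float(score))
```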