updating readme
README.md
datasets:

---
# bert-base-multilingual-cased-finetuned-swahili
## Model description
**bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Swahili-language texts. It provides **better performance** than multilingual BERT on text classification and named entity recognition datasets.

Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba hakuna uhalifu ulitendwa")

[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
  'score': 0.31642526388168335,
  'token': 10728,
  'token_str': 'Paris'},
 {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa',
  'score': 0.15753623843193054,
  'token': 57557,
  'token_str': 'Rwanda'},
 {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa',
  'score': 0.07211585342884064,
  'token': 57824,
  'token_str': 'Burundi'},
 {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
  'score': 0.029844321310520172,
  'token': 10688,
  'token_str': 'France'},
 {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa',
  'score': 0.0265930388122797,
  'token': 38052,
  'token_str': 'Senegal'}]

```
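The pipeline above covers masked-token prediction only. Since the description also points to gains on NER and text classification, here is a minimal sketch of reusing the checkpoint as a backbone for token classification; the label set below is a hypothetical example, not something this card defines:
```python
# Hypothetical sketch: reuse the checkpoint for token classification (e.g. NER).
# The label list is an illustrative assumption; the classification head is newly
# initialized here and still needs fine-tuning on labeled data.
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # assumed example tag set
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-finetuned-swahili")
model = AutoModelForTokenClassification.from_pretrained(
    "Davlan/bert-base-multilingual-cased-finetuned-swahili",
    num_labels=len(labels),
)
```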
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time, and may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the [Swahili CC-100](http://data.statmt.org/cc-100/) corpus.

## Training procedure
This model was trained on a single NVIDIA V100 GPU.
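The card does not include the training script or hyperparameters. A minimal sketch of the kind of continued masked-language-model fine-tuning described above, using the Hugging Face Trainer, could look like the following; the corpus file name, sequence length, batch size, and epoch count are illustrative assumptions, not the settings actually used for this model:
```python
# Illustrative sketch of continued MLM fine-tuning; hyperparameters and the
# swahili_corpus.txt path are assumptions, not this model's actual settings.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Plain-text Swahili corpus, one passage per line (hypothetical local file).
dataset = load_dataset("text", data_files={"train": "swahili_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-base-multilingual-cased-finetuned-swahili",
        per_device_train_batch_size=8,  # assumed
        num_train_epochs=3,             # assumed
    ),
    train_dataset=tokenized["train"],
    # Standard BERT-style masking: 15% of tokens are masked for prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```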
## Eval results on Test set (F-score, average over 5 runs)
Dataset | mBERT F1 | sw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 |
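The card does not state how the F-score is computed; MasakhaNER results are conventionally reported as entity-level F1 over IOB2 tag sequences, e.g. with seqeval. The tag sequences below are made up for illustration:
```python
# Sketch: entity-level F1 as conventionally reported for NER benchmarks.
# Gold and predicted IOB2 tag sequences here are invented for illustration.
from seqeval.metrics import f1_score

gold = [["B-LOC", "O", "O", "B-PER", "I-PER"]]
pred = [["B-LOC", "O", "O", "B-PER", "O"]]
print(f1_score(gold, pred))  # F1 over whole entity spans, not single tokens
```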
### BibTeX entry and citation info
By David Adelani