Michael Beukman committed
Commit 912b00b
1 Parent(s): c33eaaf

Slightly improved model card

Files changed (1)
  1. README.md +17 -1
README.md CHANGED
@@ -68,8 +68,24 @@ In general, this model performed worse on the 'date' category compared to others
  Here are some performance details on this specific model, compared to others we trained.
  All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.


- | Model Name | Starting point | Evaluation Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
+ These models can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
+
+
+ Abbreviation|Description
+ -|-
+ O|Outside of a named entity
+ B-DATE |Beginning of a DATE entity right after another DATE entity
+ I-DATE |DATE entity
+ B-PER |Beginning of a person’s name right after another person’s name
+ I-PER |Person’s name
+ B-ORG |Beginning of an organisation right after another organisation
+ I-ORG |Organisation
+ B-LOC |Beginning of a location right after another location
+ I-LOC |Location
+
+
+
+ | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
  | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
  | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) (This model) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
  | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
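
For context on the entries above: the checkpoints in the table are standard `transformers` token-classification models, so the labels from the abbreviation table can be obtained with the usual pipeline API. Below is a minimal, illustrative sketch (not part of this commit); the model name comes from the first table row and the Swahili example sentence is arbitrary.

```python
# Minimal usage sketch: load the checkpoint named in the table above and run Swahili NER.
# Assumes the standard Hugging Face transformers token-classification pipeline.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# aggregation_strategy="simple" merges B-*/I-* sub-token predictions into whole entity spans;
# without it, the pipeline returns the raw per-token labels listed in the abbreviation table.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

sentence = "Rais wa Tanzania amewasili mjini Dodoma Jumatatu."  # arbitrary example sentence
for entity in ner(sentence):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```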