Michael Beukman committed on
Commit 29898c5
1 Parent(s): 539a1dc

Slight change to table

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -42,11 +42,11 @@ We do find large variation in transfer results when starting from different seed
  ## Model Structure
  Here are some performance details on this specific model, compared to others we trained.
  All of these metrics were calculated on the test set, and the seed that gave the best overall F1 score was chosen. The first three result columns are averaged over all categories, and the last four give performance broken down by category.
- | Model Name | Starting point | Evaluation Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
- | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
- | [xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa) (This model) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | hau | 92.27 | 90.46 | 94.16 | 85.00 | 95.00 | 80.00 | 97.00 |
- | [xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | hau | 89.14 | 87.18 | 91.20 | 82.00 | 93.00 | 76.00 | 93.00 |
- | [xlm-roberta-base-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-hausa) | [base](https://huggingface.co/xlm-roberta-base) | hau | 89.94 | 87.74 | 92.25 | 84.00 | 94.00 | 74.00 | 93.00 |
+ Model Name | Starting point | Evaluation Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER)
+ -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | --------------
+ [xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa) (This model) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | hau | 92.27 | 90.46 | 94.16 | 85.00 | 95.00 | 80.00 | 97.00
+ [xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | hau | 89.14 | 87.18 | 91.20 | 82.00 | 93.00 | 76.00 | 93.00
+ [xlm-roberta-base-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-hausa) | [base](https://huggingface.co/xlm-roberta-base) | hau | 89.94 | 87.74 | 92.25 | 84.00 | 94.00 | 74.00 | 93.00
  ## Usage
  To use these models, you can do the following, changing only the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
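
The code block that sentence introduces lies outside this hunk. As a minimal sketch, the usage pattern from the linked source is the standard `transformers` token-classification pipeline, shown here with this model's name substituted in; the Hausa example sentence is invented for illustration and not taken from the model card:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa"

# Load the fine-tuned NER model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Build a token-classification (NER) pipeline.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

# Invented Hausa example sentence; replace with your own text.
example = "Shugaban Najeriya ya ziyarci Kano a ranar Litinin ."

for entity in nlp(example):
    print(entity)
```

Swapping `model_name` for any of the other models in the table above runs the same pipeline from a different starting point.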
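
As a quick consistency check on the table, the reported overall precision, recall, and F1 for this model agree with the standard harmonic-mean relation F1 = 2·P·R / (P + R); a small sketch using the values copied from the first row (rounding to two decimals is an assumption):

```python
# Consistency check: harmonic mean of the reported precision and recall
# for the first table row (values copied from the table above).
precision, recall = 90.46, 94.16

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 92.27 -- matches the reported F1
```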