mireiaplalis committed
Commit
c0a8f05
1 Parent(s): f97081e

Training complete

Files changed (1)
1. README.md +62 -93
README.md CHANGED
@@ -3,8 +3,6 @@ license: apache-2.0
  base_model: bert-base-cased
  tags:
  - generated_from_trainer
- datasets:
- - wnut_17
  metrics:
  - precision
  - recall
@@ -12,29 +10,7 @@ metrics:
  - accuracy
  model-index:
  - name: bert-finetuned-ner
-   results:
-   - task:
-       name: Token Classification
-       type: token-classification
-     dataset:
-       name: wnut_17
-       type: wnut_17
-       config: wnut_17
-       split: test
-       args: wnut_17
-     metrics:
-     - name: Precision
-       type: precision
-       value: 0.5180180180180181
-     - name: Recall
-       type: recall
-       value: 0.31974050046339203
-     - name: F1
-       type: f1
-       value: 0.39541547277936967
-     - name: Accuracy
-       type: accuracy
-       value: 0.9357035175879397
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -42,67 +18,60 @@ should probably proofread and complete it, then remove this comment. -->

  # bert-finetuned-ner

- This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wnut_17 dataset.
+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4235
- - Precision: 0.5180
- - Recall: 0.3197
- - F1: 0.3954
- - Accuracy: 0.9357
- - Corporation Precision: 0.2222
- - Corporation Recall: 0.2121
- - Corporation F1: 0.2171
- - Creative-work Precision: 0.4462
- - Creative-work Recall: 0.2042
- - Creative-work F1: 0.2802
- - Group Precision: 0.4030
- - Group Recall: 0.1636
- - Group F1: 0.2328
- - Location Precision: 0.5161
- - Location Recall: 0.4267
- - Location F1: 0.4672
- - Person Precision: 0.7747
- - Person Recall: 0.4569
- - Person F1: 0.5748
- - Product Precision: 0.1596
- - Product Recall: 0.1181
- - Product F1: 0.1357
- - B-corporation Precision: 0.3696
- - B-corporation Recall: 0.2576
- - B-corporation F1: 0.3036
- - B-creative-work Precision: 0.75
- - B-creative-work Recall: 0.2535
- - B-creative-work F1: 0.3789
- - B-group Precision: 0.5
- - B-group Recall: 0.1636
- - B-group F1: 0.2466
- - B-location Precision: 0.6293
- - B-location Recall: 0.4867
- - B-location F1: 0.5489
- - B-person Precision: 0.8608
- - B-person Recall: 0.4755
- - B-person F1: 0.6126
- - B-product Precision: 0.4545
- - B-product Recall: 0.1969
- - B-product F1: 0.2747
- - I-corporation Precision: 0.3333
- - I-corporation Recall: 0.2727
- - I-corporation F1: 0.3
- - I-creative-work Precision: 0.4262
- - I-creative-work Recall: 0.2016
- - I-creative-work F1: 0.2737
- - I-group Precision: 0.3478
- - I-group Recall: 0.1416
- - I-group F1: 0.2013
- - I-location Precision: 0.5932
- - I-location Recall: 0.3684
- - I-location F1: 0.4545
- - I-person Precision: 0.7625
- - I-person Recall: 0.3631
- - I-person F1: 0.4919
- - I-product Precision: 0.2222
- - I-product Recall: 0.1488
- - I-product F1: 0.1782
+ - Loss: 0.2315
+ - Precision: 0.5909
+ - Recall: 0.6789
+ - F1: 0.6318
+ - Accuracy: 0.9259
+ - Adr Precision: 0.5587
+ - Adr Recall: 0.6872
+ - Adr F1: 0.6163
+ - Disease Precision: 0.05
+ - Disease Recall: 0.0312
+ - Disease F1: 0.0385
+ - Drug Precision: 0.8364
+ - Drug Recall: 0.9020
+ - Drug F1: 0.8679
+ - Finding Precision: 0.1389
+ - Finding Recall: 0.1724
+ - Finding F1: 0.1538
+ - Symptom Precision: 0.0
+ - Symptom Recall: 0.0
+ - Symptom F1: 0.0
+ - B-adr Precision: 0.7568
+ - B-adr Recall: 0.8279
+ - B-adr F1: 0.7907
+ - B-disease Precision: 0.5
+ - B-disease Recall: 0.0312
+ - B-disease F1: 0.0588
+ - B-drug Precision: 0.9194
+ - B-drug Recall: 0.9557
+ - B-drug F1: 0.9372
+ - B-finding Precision: 0.5417
+ - B-finding Recall: 0.4483
+ - B-finding F1: 0.4906
+ - B-symptom Precision: 0.0
+ - B-symptom Recall: 0.0
+ - B-symptom F1: 0.0
+ - I-adr Precision: 0.5747
+ - I-adr Recall: 0.6892
+ - I-adr F1: 0.6268
+ - I-disease Precision: 0.3684
+ - I-disease Recall: 0.2414
+ - I-disease F1: 0.2917
+ - I-drug Precision: 0.8732
+ - I-drug Recall: 0.9118
+ - I-drug F1: 0.8921
+ - I-finding Precision: 0.3043
+ - I-finding Recall: 0.2593
+ - I-finding F1: 0.2800
+ - I-symptom Precision: 0.0
+ - I-symptom Recall: 0.0
+ - I-symptom F1: 0.0
+ - Macro Avg F1: 0.4368
+ - Weighted Avg F1: 0.7182

  ## Model description

@@ -131,16 +100,16 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Corporation Precision | Corporation Recall | Corporation F1 | Creative-work Precision | Creative-work Recall | Creative-work F1 | Group Precision | Group Recall | Group F1 | Location Precision | Location Recall | Location F1 | Person Precision | Person Recall | Person F1 | Product Precision | Product Recall | Product F1 | B-corporation Precision | B-corporation Recall | B-corporation F1 | B-creative-work Precision | B-creative-work Recall | B-creative-work F1 | B-group Precision | B-group Recall | B-group F1 | B-location Precision | B-location Recall | B-location F1 | B-person Precision | B-person Recall | B-person F1 | B-product Precision | B-product Recall | B-product F1 | I-corporation Precision | I-corporation Recall | I-corporation F1 | I-creative-work Precision | I-creative-work Recall | I-creative-work F1 | I-group Precision | I-group Recall | I-group F1 | I-location Precision | I-location Recall | I-location F1 | I-person Precision | I-person Recall | I-person F1 | I-product Precision | I-product Recall | I-product F1 |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:---------------------:|:------------------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|:----------------:|:-------------:|:---------:|:-----------------:|:--------------:|:----------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------:|:--------------:|:----------:|:--------------------:|:-----------------:|:-------------:|:------------------:|:---------------:|:-----------:|:-------------------:|:----------------:|:------------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------:|:--------------:|:----------:|:--------------------:|:-----------------:|:-------------:|:------------------:|:---------------:|:-----------:|:-------------------:|:----------------:|:------------:|
- | No log | 1.0 | 425 | 0.3858 | 0.4406 | 0.2576 | 0.3251 | 0.9303 | 0.0741 | 0.0606 | 0.0667 | 0.0667 | 0.0141 | 0.0233 | 0.1458 | 0.0848 | 0.1073 | 0.3829 | 0.4467 | 0.4123 | 0.7235 | 0.4452 | 0.5512 | 0.0 | 0.0 | 0.0 | 0.2391 | 0.1667 | 0.1964 | 0.0 | 0.0 | 0.0 | 0.375 | 0.0909 | 0.1463 | 0.5137 | 0.5 | 0.5068 | 0.8675 | 0.4732 | 0.6124 | 0.0 | 0.0 | 0.0 | 0.1923 | 0.0909 | 0.1235 | 0.3 | 0.0698 | 0.1132 | 0.1447 | 0.0973 | 0.1164 | 0.3636 | 0.3789 | 0.3711 | 0.7184 | 0.3720 | 0.4902 | 0.0 | 0.0 | 0.0 |
- | 0.199 | 2.0 | 850 | 0.4265 | 0.5295 | 0.2743 | 0.3614 | 0.9326 | 0.1444 | 0.1970 | 0.1667 | 0.4583 | 0.1549 | 0.2316 | 0.4483 | 0.0788 | 0.1340 | 0.5263 | 0.4 | 0.4545 | 0.7839 | 0.4312 | 0.5564 | 0.0714 | 0.0236 | 0.0355 | 0.2969 | 0.2879 | 0.2923 | 0.7297 | 0.1901 | 0.3017 | 0.7368 | 0.0848 | 0.1522 | 0.6635 | 0.46 | 0.5433 | 0.8981 | 0.4522 | 0.6016 | 0.5 | 0.0630 | 0.1119 | 0.2090 | 0.2545 | 0.2295 | 0.5581 | 0.1860 | 0.2791 | 0.3 | 0.0531 | 0.0902 | 0.5536 | 0.3263 | 0.4106 | 0.7619 | 0.3333 | 0.4638 | 0.1538 | 0.0496 | 0.075 |
- | 0.0799 | 3.0 | 1275 | 0.4235 | 0.5180 | 0.3197 | 0.3954 | 0.9357 | 0.2222 | 0.2121 | 0.2171 | 0.4462 | 0.2042 | 0.2802 | 0.4030 | 0.1636 | 0.2328 | 0.5161 | 0.4267 | 0.4672 | 0.7747 | 0.4569 | 0.5748 | 0.1596 | 0.1181 | 0.1357 | 0.3696 | 0.2576 | 0.3036 | 0.75 | 0.2535 | 0.3789 | 0.5 | 0.1636 | 0.2466 | 0.6293 | 0.4867 | 0.5489 | 0.8608 | 0.4755 | 0.6126 | 0.4545 | 0.1969 | 0.2747 | 0.3333 | 0.2727 | 0.3 | 0.4262 | 0.2016 | 0.2737 | 0.3478 | 0.1416 | 0.2013 | 0.5932 | 0.3684 | 0.4545 | 0.7625 | 0.3631 | 0.4919 | 0.2222 | 0.1488 | 0.1782 |
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
+ | No log | 1.0 | 127 | 0.2637 | 0.5378 | 0.6338 | 0.5819 | 0.9129 | 0.4869 | 0.6451 | 0.5550 | 0.0 | 0.0 | 0.0 | 0.7828 | 0.8480 | 0.8141 | 0.125 | 0.0690 | 0.0889 | 0.0 | 0.0 | 0.0 | 0.7377 | 0.7746 | 0.7557 | 0.0 | 0.0 | 0.0 | 0.8927 | 0.9015 | 0.8971 | 1.0 | 0.0690 | 0.1290 | 0.0 | 0.0 | 0.0 | 0.4813 | 0.6362 | 0.5480 | 0.0 | 0.0 | 0.0 | 0.8719 | 0.8676 | 0.8698 | 0.1875 | 0.1111 | 0.1395 | 0.0 | 0.0 | 0.0 | 0.3339 | 0.6592 |
+ | No log | 2.0 | 254 | 0.2329 | 0.5826 | 0.6621 | 0.6198 | 0.9242 | 0.5455 | 0.6677 | 0.6004 | 0.0455 | 0.0312 | 0.0370 | 0.8326 | 0.9020 | 0.8659 | 0.0769 | 0.0690 | 0.0727 | 0.0 | 0.0 | 0.0 | 0.7555 | 0.8075 | 0.7806 | 1.0 | 0.0312 | 0.0606 | 0.9159 | 0.9655 | 0.9400 | 0.6 | 0.3103 | 0.4091 | 0.0 | 0.0 | 0.0 | 0.5677 | 0.6819 | 0.6196 | 0.2727 | 0.2069 | 0.2353 | 0.8846 | 0.9020 | 0.8932 | 0.2667 | 0.1481 | 0.1905 | 0.0 | 0.0 | 0.0 | 0.4129 | 0.7090 |
+ | No log | 3.0 | 381 | 0.2315 | 0.5909 | 0.6789 | 0.6318 | 0.9259 | 0.5587 | 0.6872 | 0.6163 | 0.05 | 0.0312 | 0.0385 | 0.8364 | 0.9020 | 0.8679 | 0.1389 | 0.1724 | 0.1538 | 0.0 | 0.0 | 0.0 | 0.7568 | 0.8279 | 0.7907 | 0.5 | 0.0312 | 0.0588 | 0.9194 | 0.9557 | 0.9372 | 0.5417 | 0.4483 | 0.4906 | 0.0 | 0.0 | 0.0 | 0.5747 | 0.6892 | 0.6268 | 0.3684 | 0.2414 | 0.2917 | 0.8732 | 0.9118 | 0.8921 | 0.3043 | 0.2593 | 0.2800 | 0.0 | 0.0 | 0.0 | 0.4368 | 0.7182 |


  ### Framework versions

- - Transformers 4.34.1
+ - Transformers 4.35.2
  - Pytorch 2.1.0+cu118
- - Datasets 2.14.6
+ - Datasets 2.15.0
- - Tokenizers 0.14.1
+ - Tokenizers 0.15.0
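
The committed card leaves its usage sections as autogenerated stubs, so a minimal inference sketch may help. It assumes the checkpoint is published as `mireiaplalis/bert-finetuned-ner` (a guess from the committer and file names; substitute the actual model id or a local path):

```python
from transformers import pipeline

# "mireiaplalis/bert-finetuned-ner" is a guess from the repo name; swap in the
# actual model id or a local checkpoint directory if it differs.
ner = pipeline(
    "token-classification",
    model="mireiaplalis/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

print(ner("The ibuprofen cleared my headache but left me nauseous."))
# With this card's label set, expect entity_group values such as "drug" and "adr".
```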
 
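The new card reports two families of scores: entity-level rows (Adr, Drug, ...), where a prediction counts only if both the span and the type match exactly, and per-tag rows (B-adr, I-adr, ...), scored token by token, which also yield the Macro Avg F1 and Weighted Avg F1 columns. A minimal sketch of how such numbers are typically computed, assuming IOB2-tagged sequences, with `seqeval` for the entity level (the usual metric for `generated_from_trainer` token-classification cards) and scikit-learn for the tag level:

```python
from seqeval.metrics import classification_report as entity_report
from sklearn.metrics import classification_report as tag_report

# Toy IOB2 sequences standing in for real predictions and references.
y_true = [["B-drug", "O", "B-adr", "I-adr", "O"]]
y_pred = [["B-drug", "O", "B-adr", "O", "O"]]

# Entity level (the Adr/Drug/... rows): the clipped "adr" span above scores
# zero because the predicted span does not match the reference span exactly.
print(entity_report(y_true, y_pred))

# Tag level (the B-adr/I-adr/... rows): each token's tag is scored on its own;
# this report also carries the macro and weighted average F1 shown in the card.
print(tag_report(
    [t for seq in y_true for t in seq],
    [t for seq in y_pred for t in seq],
))
```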
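For completeness, a sketch of the `Trainer` setup that a `generated_from_trainer` card like this one reflects. The diff's hunks skip the card's actual hyperparameter list, so the values below are placeholders apart from the three epochs and per-epoch evaluation visible in the results table; `train_ds`, `eval_ds`, and `compute_metrics` are assumed to exist (a tokenized, label-aligned dataset and a metric function such as the seqeval recipe above):

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Label set taken from the metrics reported in this card.
labels = ["O", "B-adr", "I-adr", "B-disease", "I-disease", "B-drug", "I-drug",
          "B-finding", "I-finding", "B-symptom", "I-symptom"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    evaluation_strategy="epoch",  # matches the one-row-per-epoch table above
    num_train_epochs=3,           # the table stops at epoch 3.0
    learning_rate=2e-5,           # placeholder; the real value is outside this diff
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed: tokenized dataset with aligned "labels"
    eval_dataset=eval_ds,    # assumed, as above
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,  # e.g. the seqeval recipe sketched earlier
)
trainer.train()
```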