judithrosell committed c3a27a2 (parent: 059fbe2): "End of training"

Files changed (1): README.md added (+109, -0)
---
base_model: medicalai/ClinicalBERT
tags:
- generated_from_trainer
model-index:
- name: CRAFT_ClinicalBERT_NER
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# CRAFT_ClinicalBERT_NER

This model is a fine-tuned version of [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT) for named-entity recognition. The Trainer did not record the dataset; the model name and label set suggest the CRAFT corpus (see "Training and evaluation data" below).
It achieves the following results on the evaluation set:
- Loss: 0.1733
- Seqeval classification report:

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.68      | 0.66   | 0.67     | 1365    |
| CL           | 0.55      | 0.50   | 0.52     | 284     |
| GGP          | 0.87      | 0.81   | 0.84     | 4632    |
| GO           | 0.66      | 0.65   | 0.65     | 8852    |
| SO           | 0.68      | 0.50   | 0.58     | 616     |
| Taxon        | 0.81      | 0.73   | 0.77     | 986     |
| micro avg    | 0.72      | 0.69   | 0.71     | 16735   |
| macro avg    | 0.71      | 0.64   | 0.67     | 16735   |
| weighted avg | 0.73      | 0.69   | 0.71     | 16735   |

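A minimal inference sketch, assuming the checkpoint is published under this repository's Hub id (`judithrosell/CRAFT_ClinicalBERT_NER`) and that its config carries the label map:

```python
from transformers import pipeline

# Hub id assumed from this repository's name.
ner = pipeline(
    "token-classification",
    model="judithrosell/CRAFT_ClinicalBERT_NER",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("The Wnt7a protein is expressed in Mus musculus embryos."))
```
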
## Model description

CRAFT_ClinicalBERT_NER is a token-classification model fine-tuned from [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT). The label set in the evaluation report (CHEBI, CL, GGP, GO, SO, Taxon) matches the concept annotations of the CRAFT corpus: chemicals, cell types, genes and gene products, Gene Ontology terms, Sequence Ontology terms, and organisms.

## Intended uses & limitations

The model is intended for named-entity recognition over the six entity types listed above in biomedical text. No evaluation beyond the reported split is documented, so performance on other domains or annotation schemes is unknown.

## Training and evaluation data

The Trainer did not record the dataset. Based on the model name and label set, the training and evaluation data appear to come from the CRAFT corpus, but this should be confirmed by the model author.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the matching `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

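A minimal sketch reproducing these values with `TrainingArguments`; `output_dir` and the evaluation strategy are assumptions, as the card does not include the training script:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="CRAFT_ClinicalBERT_NER",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = effective train batch size 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumed; the card logs one evaluation per epoch
)
```
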
### Training results

The table below summarizes loss per epoch; the full seqeval report for each epoch, and a sketch of the metrics hook that produces such reports, follow.

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 347  | 0.1894          |
| 0.2231        | 2.0   | 695  | 0.1740          |
| 0.0813        | 3.0   | 1041 | 0.1733          |

Seqeval classification report, epoch 1 (step 347):

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.64      | 0.56   | 0.60     | 1365    |
| CL           | 0.53      | 0.35   | 0.42     | 284     |
| GGP          | 0.84      | 0.77   | 0.81     | 4632    |
| GO           | 0.60      | 0.61   | 0.60     | 8852    |
| SO           | 0.53      | 0.46   | 0.49     | 616     |
| Taxon        | 0.78      | 0.66   | 0.71     | 986     |
| micro avg    | 0.68      | 0.64   | 0.66     | 16735   |
| macro avg    | 0.65      | 0.57   | 0.61     | 16735   |
| weighted avg | 0.68      | 0.64   | 0.66     | 16735   |

Seqeval classification report, epoch 2 (step 695):

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.69      | 0.63   | 0.66     | 1365    |
| CL           | 0.56      | 0.44   | 0.49     | 284     |
| GGP          | 0.83      | 0.79   | 0.81     | 4632    |
| GO           | 0.65      | 0.65   | 0.65     | 8852    |
| SO           | 0.68      | 0.47   | 0.55     | 616     |
| Taxon        | 0.81      | 0.72   | 0.76     | 986     |
| micro avg    | 0.71      | 0.68   | 0.69     | 16735   |
| macro avg    | 0.70      | 0.62   | 0.65     | 16735   |
| weighted avg | 0.71      | 0.68   | 0.69     | 16735   |

Seqeval classification report, epoch 3 (step 1041):

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.68      | 0.66   | 0.67     | 1365    |
| CL           | 0.55      | 0.50   | 0.52     | 284     |
| GGP          | 0.87      | 0.81   | 0.84     | 4632    |
| GO           | 0.66      | 0.65   | 0.65     | 8852    |
| SO           | 0.68      | 0.50   | 0.58     | 616     |
| Taxon        | 0.81      | 0.73   | 0.77     | 986     |
| micro avg    | 0.72      | 0.69   | 0.71     | 16735   |
| macro avg    | 0.71      | 0.64   | 0.67     | 16735   |
| weighted avg | 0.73      | 0.69   | 0.71     | 16735   |

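Reports in this format are what `seqeval`'s `classification_report` produces. A minimal sketch of a `compute_metrics` hook that would log them; the function, the BIO label scheme, and the metric key are assumptions, since the card does not include the training script:

```python
import numpy as np
from seqeval.metrics import classification_report

# Assumed BIO scheme over the six entity types in the reports above.
TYPES = ["CHEBI", "CL", "GGP", "GO", "SO", "Taxon"]
label_list = ["O"] + [f"{p}-{t}" for t in TYPES for p in ("B", "I")]

def compute_metrics(eval_preds):
    """Turn Trainer logits/labels into a seqeval classification report."""
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    # Positions labeled -100 (sub-word pieces, special tokens) are excluded.
    true_labels = [[label_list[l] for l in row if l != -100] for row in labels]
    true_preds = [
        [label_list[p] for p, l in zip(p_row, l_row) if l != -100]
        for p_row, l_row in zip(predictions, labels)
    ]
    return {"Seqeval classification report": classification_report(true_labels, true_preds)}
```
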
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0