Brizape committed on
Commit
deac22d
1 Parent(s): 401b2ec

update model card README.md

Files changed (1)
  1. README.md +88 -0
README.md ADDED
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_2e-05_0404_ES6_strict_tok
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Variome_2e-05_0404_ES6_strict_tok

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0707
- Precision: 0.5783
- Recall: 0.4750
- F1: 0.5216
- Accuracy: 0.9852
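
The snippet below is a minimal, hedged usage sketch for running this checkpoint as a token-classification (NER-style) model. The repository id `Brizape/Variome_2e-05_0404_ES6_strict_tok` and the example sentence are assumptions based on the committer and model name shown on this card, not something the card itself states.

```python
# Minimal usage sketch (assumed repo id; adjust to the actual repository path).
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "Brizape/Variome_2e-05_0404_ES6_strict_tok"  # assumption: committer/model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges subword pieces back into whole-word entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

# Hypothetical biomedical input; the card does not document the label set.
print(ner("The patient carries a heterozygous BRCA1 c.68_69delAG variant."))
```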

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
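
As a hedged reproduction aid, the sketch below shows how these settings would typically map onto `transformers.TrainingArguments` for token-classification fine-tuning. The label set is a hypothetical placeholder, and the 25-step evaluation interval is inferred from the results table that follows; neither is stated explicitly on the card.

```python
# Sketch of the listed hyperparameters expressed as TrainingArguments
# (the Adam betas/epsilon above match the library defaults, so they are not set here).
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

base = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
label_list = ["O", "B-ENTITY", "I-ENTITY"]  # hypothetical label scheme

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(label_list))

args = TrainingArguments(
    output_dir="Variome_2e-05_0404_ES6_strict_tok",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    max_steps=2000,                  # training_steps: 2000
    evaluation_strategy="steps",     # metrics below are logged every 25 steps
    eval_steps=25,
)

# A Trainer would then be constructed with tokenized train/eval datasets and a
# DataCollatorForTokenClassification before calling trainer.train().
```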

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.1357 | 0.13 | 25 | 0.1875 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1839 | 0.26 | 50 | 0.1827 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1925 | 0.39 | 75 | 0.1841 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1804 | 0.52 | 100 | 0.1797 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1677 | 0.65 | 125 | 0.1727 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1486 | 0.79 | 150 | 0.1293 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1231 | 0.92 | 175 | 0.1203 | 0.1706 | 0.0758 | 0.1050 | 0.9766 |
| 0.1011 | 1.05 | 200 | 0.1162 | 0.1591 | 0.0403 | 0.0643 | 0.9766 |
| 0.1206 | 1.18 | 225 | 0.1142 | 0.2467 | 0.1420 | 0.1803 | 0.9770 |
| 0.1189 | 1.31 | 250 | 0.1085 | 0.2264 | 0.0921 | 0.1310 | 0.9778 |
| 0.1086 | 1.44 | 275 | 0.1015 | 0.25 | 0.1958 | 0.2196 | 0.9790 |
| 0.0977 | 1.57 | 300 | 0.0948 | 0.2849 | 0.2505 | 0.2666 | 0.9800 |
| 0.0901 | 1.7 | 325 | 0.0944 | 0.2966 | 0.2534 | 0.2733 | 0.9796 |
| 0.0888 | 1.83 | 350 | 0.0891 | 0.3162 | 0.2543 | 0.2819 | 0.9811 |
| 0.0724 | 1.96 | 375 | 0.0920 | 0.4200 | 0.2495 | 0.3131 | 0.9812 |
| 0.0773 | 2.09 | 400 | 0.0850 | 0.4561 | 0.3090 | 0.3684 | 0.9826 |
| 0.0679 | 2.23 | 425 | 0.0803 | 0.4373 | 0.3378 | 0.3812 | 0.9825 |
| 0.0809 | 2.36 | 450 | 0.0871 | 0.4580 | 0.2562 | 0.3286 | 0.9814 |
| 0.0667 | 2.49 | 475 | 0.0769 | 0.4281 | 0.3656 | 0.3944 | 0.9835 |
| 0.0731 | 2.62 | 500 | 0.0742 | 0.5111 | 0.3752 | 0.4328 | 0.9841 |
| 0.0713 | 2.75 | 525 | 0.0724 | 0.5571 | 0.4165 | 0.4767 | 0.9848 |
| 0.063 | 2.88 | 550 | 0.0706 | 0.5687 | 0.4367 | 0.4940 | 0.9849 |
| 0.0714 | 3.01 | 575 | 0.0733 | 0.5448 | 0.4319 | 0.4818 | 0.9848 |
| 0.0572 | 3.14 | 600 | 0.0707 | 0.5783 | 0.4750 | 0.5216 | 0.9852 |
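
The entity-level precision, recall, and F1 above are the kind usually computed with the `seqeval` package in Trainer-based token-classification runs; the card does not name the metric implementation, so the sketch below is an assumption illustrating the general computation on hypothetical IOB2 tag sequences.

```python
# Hedged example of entity-level metric computation with seqeval
# (hypothetical gold/predicted IOB2 tags; not taken from this model's evaluation set).
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

references = [["O", "B-GENE", "I-GENE", "O"], ["B-VARIANT", "O", "O"]]
predictions = [["O", "B-GENE", "O", "O"], ["B-VARIANT", "O", "O"]]

print("precision:", precision_score(references, predictions))
print("recall:   ", recall_score(references, predictions))
print("f1:       ", f1_score(references, predictions))
print("accuracy: ", accuracy_score(references, predictions))
```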

### Framework versions

- Transformers 4.27.4
- PyTorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3