blizrys committed on
Commit f64d601
1 Parent(s): 8c25847

BATCH_SIZE=8
LEARNING_RATE=1e-05
MAX_LENGTH=512
FOLD=0
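The commit message records the run configuration as environment-style variables. As a rough illustration only (a hypothetical helper, not a script from this repository), such values could be pulled into a training script like this:

```python
# Hypothetical helper, not part of this commit: read the run settings recorded in
# the commit message (BATCH_SIZE, LEARNING_RATE, MAX_LENGTH, FOLD) from the environment.
import os

config = {
    "batch_size": int(os.environ.get("BATCH_SIZE", "8")),
    "learning_rate": float(os.environ.get("LEARNING_RATE", "1e-05")),
    "max_length": int(os.environ.get("MAX_LENGTH", "512")),
    "fold": int(os.environ.get("FOLD", "0")),
}
print(config)
```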
README.md CHANGED
@@ -25,7 +25,7 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.5110
+ - Loss: 0.9821
  - Accuracy: 0.7
  
  ## Model description
@@ -45,7 +45,7 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 5e-05
+ - learning_rate: 1e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
@@ -57,21 +57,21 @@ The following hyperparameters were used during training:
  
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 57 | 0.8402 | 0.58 |
- | No log | 2.0 | 114 | 0.7937 | 0.6 |
- | No log | 3.0 | 171 | 0.8682 | 0.62 |
- | No log | 4.0 | 228 | 0.8930 | 0.64 |
- | No log | 5.0 | 285 | 1.4703 | 0.68 |
- | No log | 6.0 | 342 | 1.4524 | 0.66 |
- | No log | 7.0 | 399 | 1.7603 | 0.7 |
- | No log | 8.0 | 456 | 1.6109 | 0.68 |
- | 0.4346 | 9.0 | 513 | 1.4578 | 0.68 |
- | 0.4346 | 10.0 | 570 | 1.5110 | 0.7 |
+ | No log | 1.0 | 57 | 0.9446 | 0.56 |
+ | No log | 2.0 | 114 | 0.9137 | 0.62 |
+ | No log | 3.0 | 171 | 0.8600 | 0.64 |
+ | No log | 4.0 | 228 | 0.9188 | 0.64 |
+ | No log | 5.0 | 285 | 0.9344 | 0.66 |
+ | No log | 6.0 | 342 | 0.9054 | 0.68 |
+ | No log | 7.0 | 399 | 0.9405 | 0.66 |
+ | No log | 8.0 | 456 | 0.9729 | 0.68 |
+ | 0.5861 | 9.0 | 513 | 0.9837 | 0.7 |
+ | 0.5861 | 10.0 | 570 | 0.9821 | 0.7 |
  
  
  ### Framework versions
  
  - Transformers 4.10.2
  - Pytorch 1.9.0+cu102
- - Datasets 1.11.0
+ - Datasets 1.12.0
  - Tokenizers 0.10.3
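For reference, a minimal sketch of a `TrainingArguments` object matching the hyperparameters in the updated README (learning rate 1e-05, train/eval batch size 8, seed 42, and 10 epochs as implied by the results table); the output directory and evaluation strategy are assumptions, and the dataset wiring is not part of this commit. The "No log" entries in the training-loss column are likely just the Trainer's default logging interval of 500 steps, which at 57 steps per epoch is first reached during epoch 9.

```python
# A sketch, not the author's script: TrainingArguments mirroring the README diff above
# (API as of Transformers 4.10.2, the version listed in the README).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pubmedbert-finetuned",  # assumed; not recorded in the commit
    learning_rate=1e-05,                # updated value in this commit
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=10,                # implied by the 10-row results table
    seed=42,
    evaluation_strategy="epoch",        # one validation pass per epoch, as in the table
)
print(args.learning_rate, args.seed)
```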
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c3d21402e40ce3cebc507fc99c95853fdda09461cf31e9f1be17ef3985495888
+ oid sha256:ff041b7fd35af2be36489e5aaca6125f178486b64a0ae5df247a5c56c5d99f1b
  size 438022317
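The pointer above only stores metadata: the `oid sha256:...` field is the SHA-256 digest of the actual weights file, so a locally downloaded copy can be checked against it. A small sketch, with the local path assumed:

```python
# Verify a downloaded pytorch_model.bin against the sha256 oid in the LFS pointer.
import hashlib

EXPECTED_OID = "ff041b7fd35af2be36489e5aaca6125f178486b64a0ae5df247a5c56c5d99f1b"  # new oid from the diff

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("pytorch_model.bin") == EXPECTED_OID)  # expected: True for this commit
```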
runs/Sep13_22-34-33_7c61ed7a38f2/1631572669.982941/events.out.tfevents.1631572669.7c61ed7a38f2.89.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa03dcb11e2414761df149e7e6825318caee4ac13c7e44b6a41f1bfcd98ace29
+ size 4427
runs/Sep13_22-34-33_7c61ed7a38f2/events.out.tfevents.1631572669.7c61ed7a38f2.89.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7db7986bec1e31d957e74dce06402e433b81c739fb26ec424cd21de66b6323f4
+ size 7058
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:53a277a414524b89ce446fa00944cc01dc7af195746c8a894136bca013181f92
- size 2735
+ oid sha256:80babb417950a1a6372616759df17b6bb8
+ size 2799
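`training_args.bin` is the `TrainingArguments` object serialized by the Trainer with `torch.save`, so the updated settings can be inspected locally. A minimal sketch, assuming the file has been downloaded into the working directory and the pinned versions from the README (Transformers 4.10.2, PyTorch 1.9.0) are installed:

```python
# Inspect the serialized TrainingArguments; unpickling requires transformers to be importable.
import torch

args = torch.load("training_args.bin")
print(args.learning_rate)                # expected: 1e-05 after this commit
print(args.per_device_train_batch_size)  # expected: 8
```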