pabRomero committed on
Commit a38233f · verified · 1 Parent(s): ddad95f

Training complete

Files changed (1): README.md (+14 -14)
README.md CHANGED
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1039
-- Precision: 0.7571
-- Recall: 0.7522
-- F1: 0.7546
-- Accuracy: 0.9725
+- Loss: 0.1117
+- Precision: 0.8051
+- Recall: 0.7944
+- F1: 0.7997
+- Accuracy: 0.9702
 
 ## Model description
 
@@ -43,24 +43,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 2
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.1
+- lr_scheduler_warmup_ratio: 0.05
 - num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
-|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.1134 | 0.9999 | 4956 | 0.1136 | 0.7459 | 0.7336 | 0.7397 | 0.9697 |
-| 0.1001 | 2.0 | 9913 | 0.1060 | 0.7442 | 0.7533 | 0.7487 | 0.9717 |
-| 0.1024 | 2.9997 | 14868 | 0.1039 | 0.7571 | 0.7522 | 0.7546 | 0.9725 |
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| 0.135 | 0.9998 | 2351 | 0.1292 | 0.7596 | 0.7329 | 0.7460 | 0.9649 |
+| 0.0863 | 2.0 | 4703 | 0.1222 | 0.8064 | 0.7631 | 0.7841 | 0.9690 |
+| 0.0554 | 2.9994 | 7053 | 0.1117 | 0.8051 | 0.7944 | 0.7997 | 0.9702 |
 
 
 ### Framework versions
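
The updated hyperparameters fit together numerically: the per-device batch of 4 with 4 gradient-accumulation steps yields the listed total_train_batch_size of 16, and the linear scheduler warms the learning rate up over the first 5% of optimizer steps before decaying it to zero. A minimal plain-Python sketch of that arithmetic (a hand-rolled illustration, not the Trainer's own scheduler code; the 7053-step total is taken from the training-results table):

```python
def effective_batch_size(train_batch_size: int, grad_accum_steps: int) -> int:
    # 4 per-device x 4 accumulation steps = total_train_batch_size of 16.
    return train_batch_size * grad_accum_steps


def linear_lr(step: int, total_steps: int, base_lr: float = 2e-4,
              warmup_ratio: float = 0.05) -> float:
    """Linear warmup over the first `warmup_ratio` of training,
    then linear decay to zero (lr_scheduler_type: linear)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)


total_steps = 7053  # final Step value in the results table (3 epochs)
print(effective_batch_size(4, 4))                  # 16
print(linear_lr(0, total_steps))                   # 0.0 at the start of warmup
print(linear_lr(int(total_steps * 0.05), total_steps))  # 0.0002 at peak
print(linear_lr(total_steps, total_steps))         # 0.0 at the end of training
```

With the previous configuration (batch 8, accumulation 2) the effective batch was likewise 16, so this commit trades per-device memory for more accumulation steps without changing the effective batch size.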