Makabaka committed on
Commit 8137fa1
1 Parent(s): 7eef981

update model card README.md

Files changed (1): README.md (+23, -22)
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6174
+- Loss: 1.2512
 
 ## Model description
 
@@ -40,32 +40,33 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 16
+- mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:-----:|:---------------:|
-| 2.5225 | 1.0 | 670 | 2.4071 |
-| 2.2459 | 2.0 | 1340 | 2.0490 |
-| 2.1137 | 3.0 | 2010 | 2.1236 |
-| 2.0192 | 4.0 | 2680 | 2.0374 |
-| 1.9307 | 5.0 | 3350 | 1.9619 |
-| 1.8619 | 6.0 | 4020 | 1.9072 |
-| 1.823 | 7.0 | 4690 | 1.8499 |
-| 1.7415 | 8.0 | 5360 | 1.7408 |
-| 1.6994 | 9.0 | 6030 | 1.7243 |
-| 1.6576 | 10.0 | 6700 | 1.7139 |
-| 1.6109 | 11.0 | 7370 | 1.8658 |
-| 1.593 | 12.0 | 8040 | 1.9678 |
-| 1.5501 | 13.0 | 8710 | 1.7578 |
-| 1.5288 | 14.0 | 9380 | 1.7830 |
-| 1.5135 | 15.0 | 10050 | 1.8932 |
-| 1.4906 | 16.0 | 10720 | 1.6174 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 2.0978 | 1.0 | 291 | 1.7077 |
+| 1.6399 | 2.0 | 582 | 1.4279 |
+| 1.4845 | 3.0 | 873 | 1.3755 |
+| 1.3998 | 4.0 | 1164 | 1.3708 |
+| 1.3393 | 5.0 | 1455 | 1.1966 |
+| 1.2848 | 6.0 | 1746 | 1.2787 |
+| 1.238 | 7.0 | 2037 | 1.2609 |
+| 1.1969 | 8.0 | 2328 | 1.2159 |
+| 1.1648 | 9.0 | 2619 | 1.1781 |
+| 1.1419 | 10.0 | 2910 | 1.2104 |
+| 1.1296 | 11.0 | 3201 | 1.2160 |
+| 1.1048 | 12.0 | 3492 | 1.1685 |
+| 1.0803 | 13.0 | 3783 | 1.2540 |
+| 1.0778 | 14.0 | 4074 | 1.1658 |
+| 1.0641 | 15.0 | 4365 | 1.1269 |
+| 1.0616 | 16.0 | 4656 | 1.2512 |
 
 
 ### Framework versions
 
-- Transformers 4.19.4
-- Pytorch 1.11.0+cu113
-- Datasets 2.3.1
+- Transformers 4.21.0
+- Pytorch 1.12.0+cu113
+- Datasets 2.4.0
 - Tokenizers 0.12.1
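As a quick consistency check on the training-results tables: the epoch-1 row of the updated run reports 291 optimizer steps, so over 16 epochs the final row's step count should be 291 × 16, and the earlier run's 670 steps per epoch should likewise end at 670 × 16.

```python
# Sanity-check the step counts in the training-results tables.
# Updated run: 291 steps per epoch, 16 epochs -> final step in the table.
steps_per_epoch_new = 291
num_epochs = 16
print(steps_per_epoch_new * num_epochs)  # 4656, matches the last row

# Previous run: 670 steps per epoch over the same 16 epochs.
steps_per_epoch_old = 670
print(steps_per_epoch_old * num_epochs)  # 10720, matches the removed table
```

The roughly 2.3× drop in steps per epoch (670 → 291) indicates the training data or batch size changed between the two runs, not just the framework versions.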