temporary0-0name committed on
Commit 06b9939
Parent: 2d34fa2

End of training

Files changed (1)
  1. README.md +22 -22
README.md CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wikitext dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0165
+- Loss: 0.0107
 
 ## Model description
 
@@ -37,11 +37,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 64
-- eval_batch_size: 64
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 512
+- total_train_batch_size: 256
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 100
@@ -51,24 +51,24 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 8.562 | 0.55 | 50 | 6.9697 |
-| 6.63 | 1.1 | 100 | 6.3436 |
-| 5.938 | 1.65 | 150 | 5.1110 |
-| 3.0597 | 2.19 | 200 | 1.4150 |
-| 0.7989 | 2.74 | 250 | 0.3477 |
-| 0.2227 | 3.29 | 300 | 0.1284 |
-| 0.0925 | 3.84 | 350 | 0.0640 |
-| 0.0475 | 4.39 | 400 | 0.0412 |
-| 0.0314 | 4.94 | 450 | 0.0304 |
-| 0.0217 | 5.49 | 500 | 0.0246 |
-| 0.0181 | 6.04 | 550 | 0.0215 |
-| 0.0146 | 6.58 | 600 | 0.0194 |
-| 0.0132 | 7.13 | 650 | 0.0182 |
-| 0.012 | 7.68 | 700 | 0.0174 |
-| 0.0114 | 8.23 | 750 | 0.0169 |
-| 0.011 | 8.78 | 800 | 0.0167 |
-| 0.0108 | 9.33 | 850 | 0.0166 |
-| 0.0106 | 9.88 | 900 | 0.0165 |
+| 7.6252 | 0.55 | 100 | 6.4113 |
+| 4.839 | 1.1 | 200 | 2.0385 |
+| 0.9137 | 1.65 | 300 | 0.3108 |
+| 0.171 | 2.2 | 400 | 0.0877 |
+| 0.0542 | 2.75 | 500 | 0.0396 |
+| 0.025 | 3.29 | 600 | 0.0242 |
+| 0.0148 | 3.84 | 700 | 0.0180 |
+| 0.0098 | 4.39 | 800 | 0.0148 |
+| 0.0077 | 4.94 | 900 | 0.0130 |
+| 0.006 | 5.49 | 1000 | 0.0121 |
+| 0.0053 | 6.04 | 1100 | 0.0115 |
+| 0.0045 | 6.59 | 1200 | 0.0112 |
+| 0.0042 | 7.14 | 1300 | 0.0110 |
+| 0.0039 | 7.69 | 1400 | 0.0109 |
+| 0.0038 | 8.24 | 1500 | 0.0108 |
+| 0.0037 | 8.79 | 1600 | 0.0107 |
+| 0.0037 | 9.33 | 1700 | 0.0107 |
+| 0.0036 | 9.88 | 1800 | 0.0107 |
 
 
 ### Framework versions
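
For readers checking the arithmetic in the updated hyperparameters: total_train_batch_size is the per-device train batch size times gradient_accumulation_steps, i.e. 32 × 8 = 256 (previously 64 × 8 = 512). Below is a minimal, hypothetical sketch of how these values map onto `transformers.TrainingArguments`; it is not the script used for this run, and `output_dir` plus `num_train_epochs` (inferred from the final ~9.88 epoch in the table) are assumptions.

```python
# Hypothetical sketch only: maps the hyperparameters listed in this commit onto
# transformers.TrainingArguments. output_dir and num_train_epochs are assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-uncased-wikitext",  # assumed output directory name
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=10,  # assumed from the ~9.88 final epoch in the results table
)

# Effective (total) train batch size on a single device:
# per-device batch size * gradient accumulation steps = 32 * 8 = 256
print(args.per_device_train_batch_size * args.gradient_accumulation_steps)  # 256
```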