Kielak2 committed
Commit 80a732a
1 Parent(s): 834acd0

End of training
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ base_model: Kielak2/calculator_model_test
  tags:
  - generated_from_trainer
  model-index:
@@ -11,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->

  # calculator_model_test

- This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
+ This model is a fine-tuned version of [Kielak2/calculator_model_test](https://huggingface.co/Kielak2/calculator_model_test) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6445
+ - Loss: 0.1439

  ## Model description

@@ -33,57 +34,58 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 0.001
- - train_batch_size: 512
- - eval_batch_size: 512
+ - train_batch_size: 64
+ - eval_batch_size: 64
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 40
+ - mixed_precision_training: Native AMP

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 3.3594 | 1.0 | 6 | 2.6961 |
- | 2.338 | 2.0 | 12 | 1.9172 |
- | 1.7807 | 3.0 | 18 | 1.6504 |
- | 1.6413 | 4.0 | 24 | 1.5772 |
- | 1.5569 | 5.0 | 30 | 1.5075 |
- | 1.5204 | 6.0 | 36 | 1.5027 |
- | 1.4613 | 7.0 | 42 | 1.4240 |
- | 1.4149 | 8.0 | 48 | 1.3841 |
- | 1.4121 | 9.0 | 54 | 1.3384 |
- | 1.3282 | 10.0 | 60 | 1.2658 |
- | 1.3428 | 11.0 | 66 | 1.3187 |
- | 1.2754 | 12.0 | 72 | 1.2000 |
- | 1.2004 | 13.0 | 78 | 1.1383 |
- | 1.1374 | 14.0 | 84 | 1.1283 |
- | 1.1239 | 15.0 | 90 | 1.1534 |
- | 1.1362 | 16.0 | 96 | 1.0378 |
- | 1.0319 | 17.0 | 102 | 1.0088 |
- | 0.9973 | 18.0 | 108 | 0.9690 |
- | 0.9907 | 19.0 | 114 | 0.9688 |
- | 0.9369 | 20.0 | 120 | 0.8948 |
- | 0.9286 | 21.0 | 126 | 0.9302 |
- | 0.9444 | 22.0 | 132 | 1.0039 |
- | 0.9423 | 23.0 | 138 | 0.9451 |
- | 0.8952 | 24.0 | 144 | 0.8408 |
- | 0.8529 | 25.0 | 150 | 0.8326 |
- | 0.8326 | 26.0 | 156 | 0.8112 |
- | 0.8228 | 27.0 | 162 | 0.7828 |
- | 0.7914 | 28.0 | 168 | 0.7701 |
- | 0.7917 | 29.0 | 174 | 0.7489 |
- | 0.7663 | 30.0 | 180 | 0.7327 |
- | 0.7588 | 31.0 | 186 | 0.7069 |
- | 0.7347 | 32.0 | 192 | 0.7117 |
- | 0.7311 | 33.0 | 198 | 0.6902 |
- | 0.7303 | 34.0 | 204 | 0.6899 |
- | 0.7098 | 35.0 | 210 | 0.6822 |
- | 0.7147 | 36.0 | 216 | 0.6766 |
- | 0.7189 | 37.0 | 222 | 0.6559 |
- | 0.6973 | 38.0 | 228 | 0.6488 |
- | 0.6922 | 39.0 | 234 | 0.6454 |
- | 0.6808 | 40.0 | 240 | 0.6445 |
+ | 1.3404 | 1.0 | 41 | 0.9112 |
+ | 1.0195 | 2.0 | 82 | 1.0749 |
+ | 0.9524 | 3.0 | 123 | 0.9697 |
+ | 0.8765 | 4.0 | 164 | 0.7983 |
+ | 0.8274 | 5.0 | 205 | 0.9082 |
+ | 0.7727 | 6.0 | 246 | 0.7641 |
+ | 1.3801 | 7.0 | 287 | 0.7807 |
+ | 0.7733 | 8.0 | 328 | 0.8173 |
+ | 0.7062 | 9.0 | 369 | 0.6003 |
+ | 0.6671 | 10.0 | 410 | 0.7683 |
+ | 0.6935 | 11.0 | 451 | 0.6048 |
+ | 0.6598 | 12.0 | 492 | 0.6386 |
+ | 0.6553 | 13.0 | 533 | 0.5399 |
+ | 0.6033 | 14.0 | 574 | 0.5085 |
+ | 0.5972 | 15.0 | 615 | 0.5428 |
+ | 0.5928 | 16.0 | 656 | 0.5449 |
+ | 0.6432 | 17.0 | 697 | 0.5153 |
+ | 0.5887 | 18.0 | 738 | 0.4591 |
+ | 0.5011 | 19.0 | 779 | 0.4463 |
+ | 0.5117 | 20.0 | 820 | 0.4133 |
+ | 0.4846 | 21.0 | 861 | 0.5346 |
+ | 0.4815 | 22.0 | 902 | 0.3905 |
+ | 0.4375 | 23.0 | 943 | 0.3758 |
+ | 0.4313 | 24.0 | 984 | 0.3518 |
+ | 0.4049 | 25.0 | 1025 | 0.3904 |
+ | 0.4028 | 26.0 | 1066 | 0.2871 |
+ | 0.3749 | 27.0 | 1107 | 0.3456 |
+ | 0.3682 | 28.0 | 1148 | 0.3105 |
+ | 0.3442 | 29.0 | 1189 | 0.2684 |
+ | 0.3515 | 30.0 | 1230 | 0.2455 |
+ | 0.3199 | 31.0 | 1271 | 0.2793 |
+ | 0.3196 | 32.0 | 1312 | 0.2236 |
+ | 0.3139 | 33.0 | 1353 | 0.2613 |
+ | 0.2875 | 34.0 | 1394 | 0.2020 |
+ | 0.2639 | 35.0 | 1435 | 0.1783 |
+ | 0.261 | 36.0 | 1476 | 0.1987 |
+ | 0.2455 | 37.0 | 1517 | 0.1795 |
+ | 0.2355 | 38.0 | 1558 | 0.1632 |
+ | 0.228 | 39.0 | 1599 | 0.1480 |
+ | 0.2177 | 40.0 | 1640 | 0.1439 |


  ### Framework versions
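The per-epoch step counts in the two results tables are mutually consistent: the old run logs 6 steps/epoch at batch size 512, the new one 41 steps/epoch at batch size 64, which both hold for any dataset of 2,561 to 2,624 training examples. A quick sanity check, assuming no gradient accumulation and `drop_last=False` batching (neither is stated in the card):

```python
import math

# Steps per epoch = ceil(num_examples / train_batch_size), assuming
# no gradient accumulation and drop_last=False (both are assumptions;
# the model card does not state either).
def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    return math.ceil(num_examples / batch_size)

# Old run: 6 steps/epoch at batch 512; new run: 41 steps/epoch at batch 64.
# If both runs used the same dataset, any size in (2560, 2624] fits, e.g. 2600:
print(steps_per_epoch(2600, 512))  # 6
print(steps_per_epoch(2600, 64))   # 41
```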
config.json CHANGED
@@ -1,4 +1,5 @@
  {
+   "_name_or_path": "Kielak2/calculator_model_test",
    "architectures": [
      "EncoderDecoderModel"
    ],
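The added `_name_or_path` field is written by transformers when a model is saved after being loaded from a Hub id, recording the checkpoint it came from. A minimal reconstruction of the updated fragment shown in the diff:

```python
import json

# Reconstruct the updated config.json fragment from the diff above and
# read the fields back. The fragment is truncated to the keys the diff shows.
updated_fragment = """
{
  "_name_or_path": "Kielak2/calculator_model_test",
  "architectures": ["EncoderDecoderModel"]
}
"""
config = json.loads(updated_fragment)
print(config["_name_or_path"])  # Kielak2/calculator_model_test
print(config["architectures"])  # ['EncoderDecoderModel']
```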
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b70bdd722fb38c548b1388468609ae3f6dddd9cae750784d0fed65835698361a
+ oid sha256:b97567041561ef6d2029ea413b9bac3f6431f442eba5b41017ba49e30a7c08a5
  size 31205552
runs/Mar04_10-02-54_c60a5c456cbd/events.out.tfevents.1709546575.c60a5c456cbd.796.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd3c206efc21d86b7b4093cf462042afa1c67615892930f8d95be1e4f72a1631
+ size 28288
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:adf36858584937fd52f1150bf063f6ba24aa070c1274143aa89f2ac320f728ef
+ oid sha256:4e59a4b8d41f27a7dcf7361beba8089142b2dac8dc20d9a658a018a6d96e2820
  size 5112
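The binary files in this commit (`model.safetensors`, `training_args.bin`, the TensorBoard event file) are stored as Git LFS pointers: three lines giving a spec version, a sha256 oid, and a byte size. A small sketch that parses a pointer and checks a blob against it, using only the three-line format visible in the diffs above:

```python
import hashlib
import re

# A git-lfs pointer file has three lines:
#   version <spec-url>
#   oid sha256:<64 hex chars>
#   size <bytes>
POINTER_RE = re.compile(
    r"version (?P<version>\S+)\n"
    r"oid sha256:(?P<oid>[0-9a-f]{64})\n"
    r"size (?P<size>\d+)\n?"
)

def parse_pointer(text: str) -> dict:
    m = POINTER_RE.fullmatch(text)
    if m is None:
        raise ValueError("not a git-lfs pointer")
    return {"version": m["version"], "oid": m["oid"], "size": int(m["size"])}

def matches_blob(pointer: dict, blob: bytes) -> bool:
    # True when the blob's sha256 digest and length match the pointer.
    return (hashlib.sha256(blob).hexdigest() == pointer["oid"]
            and len(blob) == pointer["size"])

# The updated training_args.bin pointer from this commit:
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:4e59a4b8d41f27a7dcf7361beba8089142b2dac8dc20d9a658a018a6d96e2820\n"
    "size 5112\n"
)
print(parse_pointer(pointer_text)["size"])  # 5112
```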