---
tags:
  - generated_from_trainer
model-index:
  - name: calculator_model_test
    results: []
---

# calculator_model_test

This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):

- Loss: 1.0657
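
For quick inspection, the checkpoint can be loaded generically as sketched below. This is an illustrative addition, not part of the original card: the repo id `MRaczuk/calculator_model_test` is inferred from the card name, and a generic `AutoModel` is used because the base architecture is not documented.

```python
# Minimal sketch (assumptions noted): load the checkpoint and inspect its architecture.
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo_id = "MRaczuk/calculator_model_test"  # assumed repo id, inferred from the card

config = AutoConfig.from_pretrained(repo_id)        # reveals the underlying architecture
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)          # generic load; swap in the task-specific Auto class once known

print(config.architectures)
```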

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
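
The listed values map roughly onto Hugging Face `TrainingArguments` as sketched below. This is an illustration, not the original training script; `output_dir` and `evaluation_strategy` are assumptions, and the card's `train_batch_size` is treated as a single-device batch size.

```python
# Illustrative sketch: expressing the hyperparameters above as TrainingArguments.
# output_dir and evaluation_strategy are assumptions not stated on the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="calculator_model_test",   # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=512,      # card reports train_batch_size: 512
    per_device_eval_batch_size=512,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",          # assumed: validation loss is logged once per epoch
)
```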

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3733        | 1.0   | 6    | 2.7350          |
| 2.3614        | 2.0   | 12   | 1.9851          |
| 1.8438        | 3.0   | 18   | 1.7312          |
| 1.6535        | 4.0   | 24   | 1.6088          |
| 1.556         | 5.0   | 30   | 1.5621          |
| 1.5454        | 6.0   | 36   | 1.5615          |
| 1.5344        | 7.0   | 42   | 1.5509          |
| 1.5163        | 8.0   | 48   | 1.5509          |
| 1.5795        | 9.0   | 54   | 1.5492          |
| 1.519         | 10.0  | 60   | 1.5516          |
| 1.5289        | 11.0  | 66   | 1.5473          |
| 1.5264        | 12.0  | 72   | 1.5323          |
| 1.5076        | 13.0  | 78   | 1.5388          |
| 1.5205        | 14.0  | 84   | 1.5447          |
| 1.5105        | 15.0  | 90   | 1.5423          |
| 1.498         | 16.0  | 96   | 1.4954          |
| 1.4573        | 17.0  | 102  | 1.5017          |
| 1.4452        | 18.0  | 108  | 1.5393          |
| 1.4651        | 19.0  | 114  | 1.5054          |
| 1.4528        | 20.0  | 120  | 1.4949          |
| 1.45          | 21.0  | 126  | 1.6039          |
| 1.4467        | 22.0  | 132  | 1.4438          |
| 1.4036        | 23.0  | 138  | 1.4237          |
| 1.3715        | 24.0  | 144  | 1.3951          |
| 1.3506        | 25.0  | 150  | 1.3801          |
| 1.3416        | 26.0  | 156  | 1.3267          |
| 1.3117        | 27.0  | 162  | 1.3218          |
| 1.276         | 28.0  | 168  | 1.2940          |
| 1.2426        | 29.0  | 174  | 1.2387          |
| 1.2274        | 30.0  | 180  | 1.2127          |
| 1.2014        | 31.0  | 186  | 1.1931          |
| 1.1727        | 32.0  | 192  | 1.1761          |
| 1.1874        | 33.0  | 198  | 1.1635          |
| 1.1474        | 34.0  | 204  | 1.1329          |
| 1.1365        | 35.0  | 210  | 1.1083          |
| 1.138         | 36.0  | 216  | 1.0891          |
| 1.1015        | 37.0  | 222  | 1.0833          |
| 1.0828        | 38.0  | 228  | 1.0779          |
| 1.0941        | 39.0  | 234  | 1.0689          |
| 1.0738        | 40.0  | 240  | 1.0657          |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
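
To compare a local environment against these versions, the short check below can be used; it is a convenience addition, not part of the original card.

```python
# Convenience sketch: print installed versions to compare against the card.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card: 4.38.1
print("PyTorch:", torch.__version__)              # card: 2.1.0+cu121
print("Datasets:", datasets.__version__)          # card: 2.18.0
print("Tokenizers:", tokenizers.__version__)      # card: 0.15.2
```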