
calculator_model_test

This model is a fine-tuned version of an unspecified base model, trained on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7830

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 512
  • eval_batch_size: 512
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
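
As a rough guide to reproducing this configuration, the sketch below maps the listed values onto transformers.TrainingArguments. The output directory, the single-device interpretation of the batch sizes, and the per-epoch evaluation strategy are assumptions, not details taken from the original training script.

```python
# Hedged sketch: expressing the hyperparameters above with the Hugging Face
# Trainer API (Transformers 4.38.x). output_dir and the per-device reading of
# the batch sizes are assumptions, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="calculator_model_test",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=512,     # reported train_batch_size
    per_device_eval_batch_size=512,      # reported eval_batch_size
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",         # assumption: the results table shows one eval per epoch
)
```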

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:--------------|:------|:-----|:----------------|
| 3.427         | 1.0   | 6    | 2.7939          |
| 2.4109        | 2.0   | 12   | 2.0016          |
| 1.8545        | 3.0   | 18   | 1.7404          |
| 1.6739        | 4.0   | 24   | 1.6295          |
| 1.6169        | 5.0   | 30   | 1.5796          |
| 1.5292        | 6.0   | 36   | 1.5329          |
| 1.5561        | 7.0   | 42   | 1.5452          |
| 1.5104        | 8.0   | 48   | 1.4851          |
| 1.449         | 9.0   | 54   | 1.4911          |
| 1.4033        | 10.0  | 60   | 1.5385          |
| 1.596         | 11.0  | 66   | 1.4041          |
| 1.4016        | 12.0  | 72   | 1.4326          |
| 1.3652        | 13.0  | 78   | 1.4192          |
| 1.3674        | 14.0  | 84   | 1.3686          |
| 1.3247        | 15.0  | 90   | 1.3393          |
| 1.2677        | 16.0  | 96   | 1.2822          |
| 1.2501        | 17.0  | 102  | 1.4161          |
| 1.2504        | 18.0  | 108  | 1.2305          |
| 1.2036        | 19.0  | 114  | 1.2746          |
| 1.19          | 20.0  | 120  | 1.1341          |
| 1.1239        | 21.0  | 126  | 1.0830          |
| 1.0561        | 22.0  | 132  | 1.1285          |
| 1.1156        | 23.0  | 138  | 1.1040          |
| 1.0666        | 24.0  | 144  | 1.0544          |
| 1.0425        | 25.0  | 150  | 0.9969          |
| 1.0004        | 26.0  | 156  | 0.9607          |
| 0.9737        | 27.0  | 162  | 0.9564          |
| 0.9604        | 28.0  | 168  | 0.9418          |
| 0.9737        | 29.0  | 174  | 0.8994          |
| 0.9149        | 30.0  | 180  | 0.8790          |
| 0.9073        | 31.0  | 186  | 0.8659          |
| 0.896         | 32.0  | 192  | 0.8466          |
| 0.8752        | 33.0  | 198  | 0.8496          |
| 0.9161        | 34.0  | 204  | 0.8349          |
| 0.8831        | 35.0  | 210  | 0.8165          |
| 0.8352        | 36.0  | 216  | 0.8027          |
| 0.8305        | 37.0  | 222  | 0.7951          |
| 0.8242        | 38.0  | 228  | 0.7902          |
| 0.8416        | 39.0  | 234  | 0.7846          |
| 0.832         | 40.0  | 240  | 0.7830          |
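
Validation loss falls from 2.79 after the first epoch to 0.78 after epoch 40, with only minor bumps along the way. A minimal sketch for visualizing the curve, assuming matplotlib is installed (values are copied from the table above):

```python
# Minimal sketch: plot the validation-loss curve from the "Training results" table.
import matplotlib.pyplot as plt

epochs = list(range(1, 41))
val_loss = [2.7939, 2.0016, 1.7404, 1.6295, 1.5796, 1.5329, 1.5452, 1.4851,
            1.4911, 1.5385, 1.4041, 1.4326, 1.4192, 1.3686, 1.3393, 1.2822,
            1.4161, 1.2305, 1.2746, 1.1341, 1.0830, 1.1285, 1.1040, 1.0544,
            0.9969, 0.9607, 0.9564, 0.9418, 0.8994, 0.8790, 0.8659, 0.8466,
            0.8496, 0.8349, 0.8165, 0.8027, 0.7951, 0.7902, 0.7846, 0.7830]

plt.plot(epochs, val_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.title("calculator_model_test: validation loss per epoch")
plt.show()
```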

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model details

  • Format: Safetensors
  • Model size: 7.8M params
  • Tensor type: F32
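
Inference example

The checkpoint is small (7.8M F32 parameters stored as safetensors), so it should run comfortably on CPU. Below is a minimal loading sketch, assuming the model follows the standard transformers sequence-to-sequence API; the repository id, the architecture class, and the input format are placeholders and guesses, since the card does not state them.

```python
# Hedged sketch: loading the checkpoint with the pinned library versions above.
# The repository id and the seq2seq architecture are assumptions; adjust both
# to match the actual model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "your-username/calculator_model_test"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)  # weights load from safetensors (F32)

# Toy usage: the name suggests an arithmetic ("calculator") task, so the input
# format below is a guess.
inputs = tokenizer("21+35", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```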