v2b_mistral_lora

This model is a fine-tuned version of peiyi9979/math-shepherd-mistral-7b-prm on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3066
  • Accuracy: 0.8603
  • Precision: 0.8713
  • Recall: 0.5889
  • F1: 0.7028
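
The card does not include usage code; below is a minimal loading sketch, assuming this repository is a PEFT LoRA adapter applied on top of the base PRM loaded as a causal LM. The adapter id mtzig/v2b_mistral_lora comes from the model tree; the causal-LM head and dtype are assumptions, since the training setup is not documented here.

```python
# Hypothetical loading sketch: attach the LoRA adapter to the base PRM.
# The task head used for the reported classification metrics is not documented,
# so AutoModelForCausalLM is an assumption; swap in the matching Auto class if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "peiyi9979/math-shepherd-mistral-7b-prm"
adapter_id = "mtzig/v2b_mistral_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the adapter weights
model.eval()
```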

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch mirroring them follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 765837
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
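
A sketch of a transformers TrainingArguments configuration that mirrors the values above; the actual training script is not published, and output_dir and the bf16 setting are assumptions.

```python
# Hedged sketch of a TrainingArguments setup matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v2b_mistral_lora",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 8 per device * 4 GPUs * 2 steps = 64 effective train batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=765837,
    bf16=True,                       # assumed mixed-precision setting; not stated in the card
)
```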

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0 | 0 | 0.5994 | 0.7384 | 0.6545 | 0.1423 | 0.2338 |
| 0.572 | 0.0186 | 20 | 0.5946 | 0.7395 | 0.6731 | 0.1383 | 0.2295 |
| 0.5829 | 0.0371 | 40 | 0.5678 | 0.7494 | 0.7143 | 0.1779 | 0.2848 |
| 0.4808 | 0.0557 | 60 | 0.5129 | 0.7694 | 0.6744 | 0.3439 | 0.4555 |
| 0.498 | 0.0742 | 80 | 0.4658 | 0.7805 | 0.6708 | 0.4269 | 0.5217 |
| 0.2531 | 0.0928 | 100 | 0.4835 | 0.8016 | 0.7312 | 0.4625 | 0.5666 |
| 0.2925 | 0.1113 | 120 | 0.5003 | 0.8016 | 0.8304 | 0.3676 | 0.5096 |
| 0.1912 | 0.1299 | 140 | 0.4575 | 0.8004 | 0.8411 | 0.3557 | 0.5 |
| 0.1991 | 0.1484 | 160 | 0.4109 | 0.8115 | 0.8374 | 0.4071 | 0.5479 |
| 0.2153 | 0.1670 | 180 | 0.3718 | 0.8337 | 0.8456 | 0.4980 | 0.6269 |
| 0.1638 | 0.1855 | 200 | 0.3657 | 0.8237 | 0.8672 | 0.4387 | 0.5827 |
| 0.2033 | 0.2041 | 220 | 0.3455 | 0.8370 | 0.8354 | 0.5217 | 0.6423 |
| 0.2448 | 0.2226 | 240 | 0.3438 | 0.8381 | 0.8497 | 0.5138 | 0.6404 |
| 0.2337 | 0.2412 | 260 | 0.3705 | 0.8282 | 0.8828 | 0.4466 | 0.5932 |
| 0.1698 | 0.2597 | 280 | 0.3724 | 0.8215 | 0.8710 | 0.4269 | 0.5729 |
| 0.1607 | 0.2783 | 300 | 0.3455 | 0.8293 | 0.8722 | 0.4585 | 0.6010 |
| 0.1671 | 0.2968 | 320 | 0.3371 | 0.8337 | 0.8503 | 0.4941 | 0.625 |
| 0.1809 | 0.3154 | 340 | 0.3406 | 0.8514 | 0.8287 | 0.5929 | 0.6912 |
| 0.1672 | 0.3340 | 360 | 0.3520 | 0.8392 | 0.8699 | 0.5020 | 0.6366 |
| 0.153 | 0.3525 | 380 | 0.3273 | 0.8459 | 0.8562 | 0.5415 | 0.6634 |
| 0.2 | 0.3711 | 400 | 0.3307 | 0.8448 | 0.8599 | 0.5336 | 0.6585 |
| 0.2082 | 0.3896 | 420 | 0.3143 | 0.8603 | 0.8396 | 0.6206 | 0.7136 |
| 0.2051 | 0.4082 | 440 | 0.3139 | 0.8570 | 0.8563 | 0.5889 | 0.6979 |
| 0.0959 | 0.4267 | 460 | 0.3130 | 0.8570 | 0.8523 | 0.5929 | 0.6993 |
| 0.1955 | 0.4453 | 480 | 0.3044 | 0.8592 | 0.8462 | 0.6087 | 0.7080 |
| 0.1904 | 0.4638 | 500 | 0.3389 | 0.8404 | 0.8759 | 0.5020 | 0.6382 |
| 0.1809 | 0.4824 | 520 | 0.3319 | 0.8459 | 0.8701 | 0.5296 | 0.6585 |
| 0.1605 | 0.5009 | 540 | 0.3016 | 0.8614 | 0.8678 | 0.5968 | 0.7073 |
| 0.2123 | 0.5195 | 560 | 0.2983 | 0.8603 | 0.8396 | 0.6206 | 0.7136 |
| 0.2279 | 0.5380 | 580 | 0.3046 | 0.8559 | 0.8361 | 0.6047 | 0.7018 |
| 0.2224 | 0.5566 | 600 | 0.3395 | 0.8381 | 0.8741 | 0.4941 | 0.6313 |
| 0.1655 | 0.5751 | 620 | 0.3388 | 0.8359 | 0.8777 | 0.4822 | 0.6224 |
| 0.1468 | 0.5937 | 640 | 0.3022 | 0.8592 | 0.8424 | 0.6126 | 0.7094 |
| 0.1421 | 0.6122 | 660 | 0.3297 | 0.8437 | 0.8784 | 0.5138 | 0.6484 |
| 0.2483 | 0.6308 | 680 | 0.3060 | 0.8525 | 0.8529 | 0.5731 | 0.6856 |
| 0.1411 | 0.6494 | 700 | 0.3171 | 0.8481 | 0.8537 | 0.5534 | 0.6715 |
| 0.2015 | 0.6679 | 720 | 0.3120 | 0.8525 | 0.8614 | 0.5652 | 0.6826 |
| 0.2216 | 0.6865 | 740 | 0.3030 | 0.8503 | 0.8598 | 0.5573 | 0.6763 |
| 0.1936 | 0.7050 | 760 | 0.3091 | 0.8503 | 0.8598 | 0.5573 | 0.6763 |
| 0.135 | 0.7236 | 780 | 0.3023 | 0.8525 | 0.8529 | 0.5731 | 0.6856 |
| 0.1332 | 0.7421 | 800 | 0.3207 | 0.8437 | 0.8836 | 0.5099 | 0.6466 |
| 0.249 | 0.7607 | 820 | 0.3031 | 0.8592 | 0.8706 | 0.5850 | 0.6998 |
| 0.2033 | 0.7792 | 840 | 0.3076 | 0.8592 | 0.875 | 0.5810 | 0.6983 |
| 0.1418 | 0.7978 | 860 | 0.2998 | 0.8614 | 0.8678 | 0.5968 | 0.7073 |
| 0.1826 | 0.8163 | 880 | 0.3014 | 0.8625 | 0.8728 | 0.5968 | 0.7089 |
| 0.1538 | 0.8349 | 900 | 0.3092 | 0.8614 | 0.8855 | 0.5810 | 0.7017 |
| 0.1762 | 0.8534 | 920 | 0.3011 | 0.8603 | 0.8671 | 0.5929 | 0.7042 |
| 0.1561 | 0.8720 | 940 | 0.2998 | 0.8603 | 0.8671 | 0.5929 | 0.7042 |
| 0.1633 | 0.8905 | 960 | 0.3064 | 0.8570 | 0.8690 | 0.5771 | 0.6936 |
| 0.1452 | 0.9091 | 980 | 0.3034 | 0.8603 | 0.8713 | 0.5889 | 0.7028 |
| 0.086 | 0.9276 | 1000 | 0.3051 | 0.8581 | 0.8698 | 0.5810 | 0.6967 |
| 0.1909 | 0.9462 | 1020 | 0.3055 | 0.8581 | 0.8698 | 0.5810 | 0.6967 |
| 0.2017 | 0.9647 | 1040 | 0.3058 | 0.8581 | 0.8743 | 0.5771 | 0.6952 |
| 0.1828 | 0.9833 | 1060 | 0.3066 | 0.8603 | 0.8713 | 0.5889 | 0.7028 |
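
The reported F1 is the harmonic mean of precision and recall, which is why it stays well below precision while recall lags. A quick check against the final row (step 1060):

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
precision, recall = 0.8713, 0.5889
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ~0.7028, matching the reported F1
```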

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
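
To confirm a local environment matches the versions above, a short check (the names below are the PyPI distribution names):

```python
# Print installed versions of the packages listed above.
from importlib.metadata import version

for pkg in ("peft", "transformers", "torch", "datasets", "tokenizers"):
    print(f"{pkg}=={version(pkg)}")
```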