Mistral-7B-v0.3_pct_ortho_r16

This model is a PEFT adapter fine-tuned from unsloth/mistral-7b-v0.3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0091
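
For reference, an evaluation loss of 2.0091 corresponds to a perplexity of exp(2.0091) ≈ 7.46. The card ships no usage snippet, so the following is a minimal sketch of loading the adapter with peft and transformers, assuming the repository id imdatta0/Mistral-7B-v0.3_pct_ortho_r16 (taken from the model tree at the end of this card); the prompt and generation settings are purely illustrative.

```python
# Minimal loading sketch. Assumes the repo id below and that accelerate is
# installed for device_map="auto"; untested against this specific checkpoint.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "imdatta0/Mistral-7B-v0.3_pct_ortho_r16",  # adapter repo (from model tree)
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3")  # base model

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```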

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 1
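
These settings map directly onto transformers.TrainingArguments. The sketch below is a reconstruction under assumptions (the card does not say which training script produced the run); note that total_train_batch_size is derived rather than set: 2 per-device × 32 accumulation steps = 64.

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters. output_dir is a
# hypothetical placeholder; everything else mirrors the list above.
args = TrainingArguments(
    output_dir="mistral-7b-v0.3-pct-ortho-r16",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=32,  # total train batch size: 2 * 32 = 64
    adam_beta1=0.9,                  # Adam betas = (0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
)
```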

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9566 | 0.0206 | 8   | 2.0118 |
| 2.0191 | 0.0413 | 16  | 1.9983 |
| 2.0779 | 0.0619 | 24  | 2.0212 |
| 2.0339 | 0.0825 | 32  | 2.0205 |
| 2.0429 | 0.1032 | 40  | 2.0132 |
| 2.0601 | 0.1238 | 48  | 2.0219 |
| 2.0410 | 0.1445 | 56  | 2.0171 |
| 2.0602 | 0.1651 | 64  | 2.0230 |
| 2.0341 | 0.1857 | 72  | 2.0311 |
| 2.0378 | 0.2064 | 80  | 2.0319 |
| 2.0961 | 0.2270 | 88  | 2.0402 |
| 2.1060 | 0.2476 | 96  | 2.0208 |
| 2.1219 | 0.2683 | 104 | 2.0328 |
| 2.0569 | 0.2889 | 112 | 2.0528 |
| 2.1062 | 0.3096 | 120 | 2.0355 |
| 2.0522 | 0.3302 | 128 | 2.0365 |
| 2.0631 | 0.3508 | 136 | 2.0300 |
| 2.1052 | 0.3715 | 144 | 2.0409 |
| 2.0875 | 0.3921 | 152 | 2.0454 |
| 2.0854 | 0.4127 | 160 | 2.0273 |
| 2.0533 | 0.4334 | 168 | 2.0529 |
| 2.1096 | 0.4540 | 176 | 2.0373 |
| 2.0288 | 0.4746 | 184 | 2.0289 |
| 2.1344 | 0.4953 | 192 | 2.0375 |
| 2.0952 | 0.5159 | 200 | 2.0445 |
| 2.0613 | 0.5366 | 208 | 2.0374 |
| 2.0441 | 0.5572 | 216 | 2.0225 |
| 2.0493 | 0.5778 | 224 | 2.0380 |
| 2.0568 | 0.5985 | 232 | 2.0219 |
| 2.0477 | 0.6191 | 240 | 2.0261 |
| 2.1065 | 0.6397 | 248 | 2.0310 |
| 2.0245 | 0.6604 | 256 | 2.0208 |
| 2.1013 | 0.6810 | 264 | 2.0270 |
| 2.0356 | 0.7017 | 272 | 2.0205 |
| 2.0815 | 0.7223 | 280 | 2.0117 |
| 2.0898 | 0.7429 | 288 | 2.0175 |
| 2.0529 | 0.7636 | 296 | 2.0171 |
| 2.0281 | 0.7842 | 304 | 2.0134 |
| 2.0473 | 0.8048 | 312 | 2.0150 |
| 2.0315 | 0.8255 | 320 | 2.0088 |
| 2.0215 | 0.8461 | 328 | 2.0071 |
| 2.0003 | 0.8667 | 336 | 2.0093 |
| 2.0561 | 0.8874 | 344 | 2.0136 |
| 2.0407 | 0.9080 | 352 | 2.0132 |
| 2.0257 | 0.9287 | 360 | 2.0105 |
| 2.0294 | 0.9493 | 368 | 2.0090 |
| 2.0321 | 0.9699 | 376 | 2.0089 |
| 2.0516 | 0.9906 | 384 | 2.0091 |
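
Validation loss stays in a narrow band (roughly 1.998–2.053) throughout the single epoch and drifts back down over the final third of training, ending at 2.0091, the figure reported above.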

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
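
To check that a local environment matches these pins, a quick version printout like the following can help:

```python
# Print installed versions to compare against the pins listed above.
import datasets, peft, tokenizers, torch, transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}: {mod.__version__}")
```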

Model tree for imdatta0/Mistral-7B-v0.3_pct_ortho_r16

  • Adapter of unsloth/mistral-7b-v0.3 (this model)