phi-3-medium-LoRA

This model is a LoRA adapter (trained with PEFT) for microsoft/Phi-3-medium-4k-instruct, fine-tuned on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7006
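
Because this repository holds a PEFT adapter rather than full model weights, it is loaded on top of the base model. The sketch below is a minimal, hedged example: the adapter id `Hmehdi515/phi-3-medium-LoRA` is taken from this card's page, while the dtype and device settings are illustrative assumptions, not recorded settings.

```python
# Minimal sketch: load the base model, then apply this LoRA adapter with PEFT.
# torch_dtype and device_map below are assumptions, not values from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-medium-4k-instruct"
adapter_id = "Hmehdi515/phi-3-medium-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Phi-3 instruct models expect the chat template.
messages = [{"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```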

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
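
The card does not record the training script or the LoRA configuration (rank, alpha, dropout, target modules), but the hyperparameters above map directly onto `transformers.TrainingArguments`. A hedged sketch, with every LoRA value marked as a placeholder:

```python
# Sketch of a PEFT/LoRA training setup matching the hyperparameters above.
# All LoraConfig values are hypothetical placeholders: this card does not
# record the actual rank, alpha, dropout, or target modules.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                    # hypothetical rank
    lora_alpha=32,                           # hypothetical scaling
    lora_dropout=0.05,                       # hypothetical dropout
    target_modules=["qkv_proj", "o_proj"],   # hypothetical targets
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="phi-3-medium-LoRA",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,   # total train batch size: 2 * 2 = 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    seed=42,
    adam_beta1=0.9,                  # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",           # the results table logs eval every 2500 steps
    eval_steps=2500,
)
```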

Training results

| Training Loss | Epoch  | Step   | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 1.2073        | 0.1118 | 2500   | 0.7942          |
| 1.1026        | 0.2237 | 5000   | 0.7623          |
| 1.0828        | 0.3355 | 7500   | 0.7447          |
| 1.0777        | 0.4473 | 10000  | 0.7363          |
| 1.0761        | 0.5592 | 12500  | 0.7304          |
| 1.0603        | 0.6710 | 15000  | 0.7243          |
| 1.0689        | 0.7828 | 17500  | 0.7208          |
| 1.0685        | 0.8947 | 20000  | 0.7183          |
| 1.0543        | 1.0065 | 22500  | 0.7158          |
| 1.0412        | 1.1183 | 25000  | 0.7126          |
| 1.0488        | 1.2301 | 27500  | 0.7109          |
| 1.0496        | 1.3420 | 30000  | 0.7101          |
| 1.0442        | 1.4538 | 32500  | 0.7088          |
| 1.0533        | 1.5656 | 35000  | 0.7079          |
| 1.0461        | 1.6775 | 37500  | 0.7060          |
| 1.0387        | 1.7893 | 40000  | 0.7059          |
| 1.0214        | 1.9011 | 42500  | 0.7045          |
| 1.0396        | 2.0130 | 45000  | 0.7053          |
| 1.0423        | 2.1248 | 47500  | 0.7044          |
| 1.0384        | 2.2366 | 50000  | 0.7039          |
| 1.0091        | 2.3485 | 52500  | 0.7041          |
| 1.0277        | 2.4603 | 55000  | 0.7040          |
| 1.0194        | 2.5721 | 57500  | 0.7039          |
| 1.0365        | 2.6840 | 60000  | 0.7034          |
| 1.0378        | 2.7958 | 62500  | 0.7021          |
| 1.0315        | 2.9076 | 65000  | 0.7025          |
| 1.0308        | 3.0195 | 67500  | 0.7021          |
| 1.0054        | 3.1313 | 70000  | 0.7022          |
| 1.0275        | 3.2431 | 72500  | 0.7027          |
| 1.024         | 3.3550 | 75000  | 0.7030          |
| 1.0199        | 3.4668 | 77500  | 0.7018          |
| 1.028         | 3.5786 | 80000  | 0.7021          |
| 1.0292        | 3.6904 | 82500  | 0.7017          |
| 1.017         | 3.8023 | 85000  | 0.7014          |
| 1.0202        | 3.9141 | 87500  | 0.7012          |
| 1.01          | 4.0259 | 90000  | 0.7021          |
| 1.0117        | 4.1378 | 92500  | 0.7015          |
| 1.0078        | 4.2496 | 95000  | 0.7010          |
| 1.0181        | 4.3614 | 97500  | 0.7013          |
| 1.0158        | 4.4733 | 100000 | 0.7015          |
| 1.0185        | 4.5851 | 102500 | 0.7015          |
| 1.0145        | 4.6969 | 105000 | 0.7010          |
| 1.006         | 4.8088 | 107500 | 0.7010          |
| 1.0099        | 4.9206 | 110000 | 0.7008          |
| 1.0273        | 5.0324 | 112500 | 0.7010          |
| 1.0081        | 5.1443 | 115000 | 0.7012          |
| 1.0084        | 5.2561 | 117500 | 0.7011          |
| 1.0088        | 5.3679 | 120000 | 0.7012          |
| 1.0021        | 5.4798 | 122500 | 0.7009          |
| 1.0211        | 5.5916 | 125000 | 0.7009          |
| 1.023         | 5.7034 | 127500 | 0.7006          |
| 1.0143        | 5.8153 | 130000 | 0.7008          |
| 1.0082        | 5.9271 | 132500 | 0.7007          |
| 1.0142        | 6.0389 | 135000 | 0.7009          |
| 1.0221        | 6.1507 | 137500 | 0.7009          |
| 1.0245        | 6.2626 | 140000 | 0.7009          |
| 1.0134        | 6.3744 | 142500 | 0.7008          |
| 1.0098        | 6.4862 | 145000 | 0.7007          |
| 1.0123        | 6.5981 | 147500 | 0.7005          |
| 1.0016        | 6.7099 | 150000 | 0.7005          |
| 1.0123        | 6.8217 | 152500 | 0.7006          |
| 1.0085        | 6.9336 | 155000 | 0.7005          |
| 1.0138        | 7.0454 | 157500 | 0.7003          |
| 1.006         | 7.1572 | 160000 | 0.7006          |
| 1.0087        | 7.2691 | 162500 | 0.7005          |
| 1.0152        | 7.3809 | 165000 | 0.7008          |
| 1.0129        | 7.4927 | 167500 | 0.7008          |
| 0.992         | 7.6046 | 170000 | 0.7001          |
| 0.9972        | 7.7164 | 172500 | 0.7004          |
| 1.0168        | 7.8282 | 175000 | 0.7007          |
| 1.0053        | 7.9401 | 177500 | 0.7005          |
| 0.9945        | 8.0519 | 180000 | 0.7004          |
| 1.0186        | 8.1637 | 182500 | 0.7006          |
| 1.0209        | 8.2756 | 185000 | 0.7006          |
| 1.013         | 8.3874 | 187500 | 0.7006          |
| 1.0068        | 8.4992 | 190000 | 0.7006          |
| 0.9985        | 8.6110 | 192500 | 0.7005          |
| 1.0044        | 8.7229 | 195000 | 0.7005          |
| 1.0292        | 8.8347 | 197500 | 0.7005          |
| 1.0153        | 8.9465 | 200000 | 0.7004          |
| 1.0058        | 9.0584 | 202500 | 0.7005          |
| 0.9943        | 9.1702 | 205000 | 0.7005          |
| 1.015         | 9.2820 | 207500 | 0.7005          |
| 1.023         | 9.3939 | 210000 | 0.7006          |
| 1.0192        | 9.5057 | 212500 | 0.7005          |
| 1.0067        | 9.6175 | 215000 | 0.7005          |
| 1.0198        | 9.7294 | 217500 | 0.7006          |
| 0.988         | 9.8412 | 220000 | 0.7006          |
| 1.0098        | 9.9530 | 222500 | 0.7006          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
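
To reproduce the training environment, the pinned versions above can be verified at runtime. A small sketch (assumes the listed packages are installed):

```python
# Check the installed package versions against those listed in this card.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.11.1",
    "transformers": "4.42.4",
    "torch": "2.3.1+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have == want else f"MISMATCH (have {have})"
    print(f"{name:>12}: expected {want} -> {status}")
```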