openhermes-mistral-dpo-gptq

This model is a fine-tuned version of TheBloke/OpenHermes-2-Mistral-7B-GPTQ; the training dataset is not specified. It achieves the following results on the evaluation set:

  • Loss: 0.6599
  • Rewards/chosen: 0.0397
  • Rewards/rejected: -0.0752
  • Rewards/accuracies: 0.9375
  • Rewards/margins: 0.1149
  • Logps/rejected: -164.5962
  • Logps/chosen: -292.6904
  • Logits/rejected: -2.6901
  • Logits/chosen: -2.3670
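The reward metrics above are related in a simple way. A minimal sketch, assuming the standard trl/DPO definitions (the card itself does not restate them): Rewards/margins is the difference between the chosen and rejected rewards, and Rewards/accuracies is the fraction of evaluation pairs whose chosen response earned the higher implicit reward.

```python
# Sketch of how the DPO reward metrics above relate (standard trl
# definitions, assumed here since the card does not restate them).
chosen_reward = 0.0397     # Rewards/chosen on the evaluation set
rejected_reward = -0.0752  # Rewards/rejected

# Rewards/margins is simply chosen minus rejected:
margin = chosen_reward - rejected_reward
print(round(margin, 4))  # 0.1149, matching Rewards/margins above

# Rewards/accuracies (0.9375) is the fraction of evaluation pairs where
# the chosen response's implicit reward strictly exceeds the rejected
# one, i.e. mean(chosen_i > rejected_i) over the eval set.
```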

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • training_steps: 50
  • mixed_precision_training: Native AMP
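A hedged sketch of how these hyperparameters could be wired into a trl `DPOTrainer` run on top of the GPTQ base model. This is not the author's script: the dataset, the DPO `beta`, and any LoRA/adapter settings are assumptions, since the card does not specify them.

```python
# Hedged reconstruction of the training setup; dataset and beta are
# assumptions not stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,            # learning_rate: 0.0002
    per_device_train_batch_size=1, # train_batch_size: 1
    per_device_eval_batch_size=8,  # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,                  # training_steps: 50
    fp16=True,                     # Native AMP mixed precision
)

train_dataset = ...  # preference pairs (prompt/chosen/rejected); not specified in the card

trainer = DPOTrainer(
    model,
    ref_model=None,   # trl builds a frozen reference copy when None
    args=args,
    beta=0.1,         # assumption: trl's default DPO beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```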

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6819 | 0.01 | 10 | 0.6600 | 0.0491 | -0.0050 | 1.0 | 0.0540 | -163.8940 | -292.5971 | -2.6930 | -2.3675 |
| 0.7106 | 0.01 | 20 | 0.6787 | 0.0460 | 0.0162 | 0.5625 | 0.0298 | -163.6827 | -292.6277 | -2.6971 | -2.3713 |
| 0.6487 | 0.01 | 30 | 0.6889 | 0.0454 | -0.0002 | 0.8125 | 0.0456 | -163.8460 | -292.6334 | -2.6960 | -2.3700 |
| 0.5981 | 0.02 | 40 | 0.6718 | 0.0307 | -0.0583 | 0.9375 | 0.0890 | -164.4272 | -292.7806 | -2.6928 | -2.3685 |
| 0.6573 | 0.03 | 50 | 0.6599 | 0.0397 | -0.0752 | 0.9375 | 0.1149 | -164.5962 | -292.6904 | -2.6901 | -2.3670 |
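The validation loss column follows from the reward margin. A minimal sketch, assuming the standard DPO objective: the per-pair loss is the negative log-sigmoid of the reward margin, where each reward is the beta-scaled policy-vs-reference log-probability ratio. Note the reported 0.6599 is averaged over evaluation batches, so it need not equal the loss computed from the mean rewards.

```python
import math

def dpo_loss(chosen_reward, rejected_reward):
    """Standard DPO loss for one preference pair.

    Each reward is beta * (log pi(y|x) - log pi_ref(y|x)); the loss is
    -log(sigmoid(chosen_reward - rejected_reward)).
    """
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Using the final evaluation rewards from the table above: the margin is
# 0.1149, giving a per-pair loss of about 0.637 (close to, but not
# exactly, the batch-averaged 0.6599 that the trainer reports).
```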

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.17.0
  • Tokenizers 0.15.1

Model tree for Saahil1801/openhermes-mistral-dpo-gptq