openhermes-mistral-dpo-gptq

This model is a DPO fine-tuned version of TheBloke/OpenHermes-2-Mistral-7B-GPTQ (the training dataset is not specified). It achieves the following results on the evaluation set:

  • Loss: 1.5203
  • Rewards/chosen: 4.8953
  • Rewards/rejected: -6.8710
  • Rewards/accuracies: 0.875
  • Rewards/margins: 11.7663
  • Logps/rejected: -512.2221
  • Logps/chosen: -446.3599
  • Logits/rejected: -2.4853
  • Logits/chosen: -2.3998
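In DPO evaluation, the reward margin is simply the gap between the chosen and rejected rewards, and the accuracy is the fraction of preference pairs where the chosen completion's reward exceeds the rejected one's. A minimal sketch of how these two metrics relate (the function names are illustrative, not from the training code):

```python
def reward_margin(chosen: float, rejected: float) -> float:
    """Margin between the chosen and rejected rewards for one pair."""
    return chosen - rejected

def pairwise_accuracy(pairs: list[tuple[float, float]]) -> float:
    """Fraction of pairs where the chosen reward beats the rejected one."""
    return sum(c > r for c, r in pairs) / len(pairs)

# The final eval numbers above are consistent with this definition:
# 4.8953 - (-6.8710) = 11.7663 (Rewards/margins)
margin = reward_margin(4.8953, -6.8710)
```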

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • training_steps: 50
  • mixed_precision_training: Native AMP
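With a linear scheduler, 2 warmup steps, and 50 total steps, the learning rate ramps up to 2e-4 over the first two steps and then decays linearly to zero. A small sketch of this schedule, mirroring the behavior of `get_linear_schedule_with_warmup` in transformers (written as plain Python for illustration):

```python
def linear_lr(step: int, base_lr: float = 2e-4,
              warmup_steps: int = 2, total_steps: int = 50) -> float:
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate peaks at step 2 (`linear_lr(2)` returns 2e-4) and reaches zero at step 50.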

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6384 | 0.025 | 10 | 0.9029 | 0.3196 | -0.5976 | 0.8125 | 0.9171 | -449.4874 | -492.1170 | -2.5249 | -2.3642 |
| 0.0586 | 0.05 | 20 | 1.6495 | 2.8410 | -4.2864 | 0.8125 | 7.1274 | -486.3763 | -466.9029 | -2.5061 | -2.3959 |
| 0.0116 | 0.075 | 30 | 1.5584 | 4.2892 | -6.0162 | 0.8125 | 10.3053 | -503.6734 | -452.4211 | -2.4984 | -2.4059 |
| 1.7733 | 0.1 | 40 | 1.5091 | 4.7449 | -6.6237 | 0.8125 | 11.3687 | -509.7491 | -447.8633 | -2.4894 | -2.4020 |
| 0.0 | 0.125 | 50 | 1.5203 | 4.8953 | -6.8710 | 0.875 | 11.7663 | -512.2221 | -446.3599 | -2.4853 | -2.3998 |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.19.1
  • Tokenizers 0.19.1