
openhermes-mistral-dpo-gptq

This model is a DPO fine-tuned version of TheBloke/OpenHermes-2-Mistral-7B-GPTQ; the training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.8471
  • Rewards/chosen: -0.2589
  • Rewards/rejected: -0.1510
  • Rewards/accuracies: 0.375
  • Rewards/margins: -0.1079
  • Logps/rejected: -116.0277
  • Logps/chosen: -111.7328
  • Logits/rejected: -2.2331
  • Logits/chosen: -2.3546
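
These metric names match the evaluation logging of trl's DPOTrainer, which is an assumption here (the card does not name the trainer). Under that assumption, Rewards/chosen is the DPO implicit reward of the preferred completion:

$$
r_\text{chosen} \;=\; \beta\,\bigl(\log \pi_\theta(y_w \mid x) \;-\; \log \pi_\text{ref}(y_w \mid x)\bigr)
$$

Rewards/rejected is defined analogously for the dispreferred completion, Rewards/margins is chosen minus rejected, and Rewards/accuracies is the fraction of evaluation pairs where the chosen reward exceeds the rejected one. Under this reading, the negative final margin (-0.1079) and 0.375 accuracy suggest the policy did not learn to separate chosen from rejected completions on the evaluation set over this short run.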

Model description

More information needed

Intended uses & limitations

More information needed
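
Pending guidance from the author, the sketch below shows one plausible way to load the model for inference. It assumes this repository hosts a checkpoint loadable directly with transformers (GPTQ weights require the auto-gptq or optimum backend to be installed); if the repo instead ships only a PEFT adapter, it would need to be applied on top of TheBloke/OpenHermes-2-Mistral-7B-GPTQ instead.

```python
# Hedged usage sketch: assumes a directly loadable GPTQ checkpoint in this
# repo (auto-gptq / optimum must be installed). If the repo only contains a
# PEFT adapter, load the base model first and attach the adapter instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nolo99/openhermes-mistral-dpo-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Briefly explain what DPO fine-tuning does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```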

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • training_steps: 50
  • mixed_precision_training: Native AMP
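
For reference, the hyperparameters above map onto a trl DPOTrainer run roughly as follows. This is a reconstruction, not the author's script: trl is assumed because its DPOTrainer logs exactly the rewards/logps metrics reported in this card, and `model`, `tokenizer`, `train_dataset`, and `eval_dataset` are placeholders, since the card specifies neither the dataset nor the training code.

```python
# Hedged reconstruction of the training setup; trl's DPOTrainer is assumed.
# `model`, `tokenizer`, `train_dataset`, and `eval_dataset` are placeholders.
from transformers import TrainingArguments
from trl import DPOTrainer

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,            # learning_rate: 0.0002
    per_device_train_batch_size=1, # train_batch_size: 1
    per_device_eval_batch_size=8,  # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,                  # training_steps: 50
    fp16=True,                     # mixed_precision_training: Native AMP
    # the default AdamW optimizer uses betas=(0.9, 0.999), eps=1e-8,
    # matching the optimizer settings listed above
)

trainer = DPOTrainer(
    model=model,                   # placeholder: PEFT-wrapped GPTQ base assumed
    args=training_args,
    train_dataset=train_dataset,   # placeholder: prompt/chosen/rejected pairs
    eval_dataset=eval_dataset,     # placeholder
    tokenizer=tokenizer,           # placeholder
)
trainer.train()
```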

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6755 | 0.1 | 10 | 0.7298 | -0.0301 | -0.0035 | 0.375 | -0.0266 | -114.5520 | -109.4439 | -2.2395 | -2.3722 |
| 0.6379 | 0.2 | 20 | 0.7804 | -0.1600 | -0.1132 | 0.375 | -0.0468 | -115.6494 | -110.7433 | -2.2341 | -2.3621 |
| 0.7061 | 0.3 | 30 | 0.8180 | -0.2242 | -0.1463 | 0.375 | -0.0779 | -115.9803 | -111.3849 | -2.2357 | -2.3577 |
| 0.6503 | 0.4 | 40 | 0.8460 | -0.2548 | -0.1442 | 0.375 | -0.1106 | -115.9595 | -111.6915 | -2.2330 | -2.3554 |
| 0.9618 | 0.5 | 50 | 0.8471 | -0.2589 | -0.1510 | 0.375 | -0.1079 | -116.0277 | -111.7328 | -2.2331 | -2.3546 |

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.17.0
  • Tokenizers 0.15.2