---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full
    results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta), trained with DPO on a preference dataset that is not specified in this card. It achieves the following results on the evaluation set:

- Loss: 0.4920
- Rewards/chosen: -2.3074
- Rewards/rejected: -3.5196
- Rewards/accuracies: 0.7734
- Rewards/margins: 1.2122
- Logps/rejected: -609.3139
- Logps/chosen: -487.7755
- Logits/rejected: -0.7242
- Logits/chosen: -0.9597
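
For context, these reward metrics follow the standard DPO formulation: the implicit reward of a response is the β-scaled log-probability ratio between the trained policy and the frozen reference (SFT) model. A sketch of the loss, annotated with the terms the metrics above report (β itself is a training hyperparameter not stated in this card):

```latex
% y_w = chosen completion, y_l = rejected completion for prompt x.
% Implicit DPO reward: r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
\mathcal{L}_{\mathrm{DPO}}
  = -\log \sigma\Big(
      \underbrace{\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}}_{\text{Rewards/chosen}}
      \;-\;
      \underbrace{\beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}}_{\text{Rewards/rejected}}
    \Big)
```

Rewards/margins is the difference between the two bracketed terms, and Rewards/accuracies is the fraction of evaluation pairs for which the chosen response's reward exceeds the rejected one's.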

## Model description

More information needed

## Intended uses & limitations

More information needed
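
The card gives no usage guidance; as a minimal, hypothetical inference sketch, assuming the checkpoint is published as `wzhouad/zephyr-7b-dpo-full` (inferred from the card's owner) and that the tokenizer inherits the base model's chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id inferred from the card's owner; adjust if the
# checkpoint is hosted under a different name.
model_id = "wzhouad/zephyr-7b-dpo-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Assumes the tokenizer ships a chat template (as the base SFT model does).
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```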

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
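
As a rough illustration (not the author's script), these settings might map onto trl's `DPOTrainer` as follows, using the trl ~0.7 API contemporary with Transformers 4.35.2. The DPO beta and the preference dataset are not reported above, so both appear as clearly labeled placeholders:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder preference pairs; the actual training data is not stated in the card.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["I am not sure."],
})

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 GPUs x 2 accumulation = 128 effective
    per_device_eval_batch_size=8,   # x 8 GPUs = 64 effective
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=2,
    # Default AdamW already uses betas=(0.9, 0.999) and epsilon=1e-08.
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # assumption: the card does not report the DPO beta
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

The multi-GPU figures above imply a distributed launch (e.g. via `accelerate launch` across 8 devices); run as a single process, the effective batch size would be correspondingly smaller.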

Training results

Training Loss Epoch Step Validation Loss Rewards/chosen Rewards/rejected Rewards/accuracies Rewards/margins Logps/rejected Logps/chosen Logits/rejected Logits/chosen
0.5392 0.11 100 0.6286 -0.6554 -0.9418 0.6523 0.2865 -351.5352 -322.5750 -2.5756 -2.5908
0.4524 0.23 200 0.5475 -1.4831 -2.1698 0.7227 0.6867 -474.3327 -405.3454 -1.9678 -1.9878
0.3976 0.34 300 0.5194 -1.8541 -2.8790 0.7617 1.0249 -545.2501 -442.4474 -0.9783 -1.1841
0.3892 0.45 400 0.5160 -2.0795 -3.1766 0.7773 1.0971 -575.0087 -464.9888 -0.6002 -0.8579
0.3964 0.57 500 0.4992 -2.1896 -3.3081 0.7656 1.1185 -588.1666 -476.0038 -0.8012 -1.0189
0.4149 0.68 600 0.4948 -2.2061 -3.3241 0.7461 1.1179 -589.7601 -477.6525 -1.0527 -1.2398
0.4004 0.79 700 0.4905 -2.1723 -3.3652 0.7695 1.1929 -593.8731 -474.2662 -0.8519 -1.0643
0.3887 0.91 800 0.4920 -2.3074 -3.5196 0.7734 1.2122 -609.3139 -487.7755 -0.7242 -0.9597

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1