---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full
    results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta), trained with DPO on an unspecified preference dataset. It achieves the following results on the evaluation set (a short usage sketch follows the metrics):

  • Loss: 0.5440
  • Rewards/chosen: -2.2940
  • Rewards/rejected: -3.0054
  • Rewards/accuracies: 0.7090
  • Rewards/margins: 0.7114
  • Logps/rejected: -451.6765
  • Logps/chosen: -373.9785
  • Logits/rejected: 0.3244
  • Logits/chosen: 0.0742
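
For a quick smoke test, the model loads like any Transformers causal LM. This is a minimal sketch, not part of the original card: the repository id `wzhouad/zephyr-7b-dpo-full` is an assumption based on where this card is hosted, and the chat template is inherited from the Zephyr-style SFT base model.

```python
# Minimal inference sketch. The repo id below is an assumption based on
# this card's location, not something the card itself states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wzhouad/zephyr-7b-dpo-full"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```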

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a TRL configuration sketch follows the list):

  • learning_rate: 5e-07
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 128
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
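
These settings map directly onto `transformers.TrainingArguments` plus TRL's `DPOTrainer`. The sketch below is a minimal reconstruction under that assumption; the preference dataset and the DPO `beta` are not reported in this card, so both are labeled placeholders.

```python
# Minimal sketch of the training setup implied by the hyperparameters above,
# assuming TRL's DPOTrainer (~0.7.x API, contemporary with Transformers 4.35).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not name the preference dataset.
train_dataset = load_dataset("your/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=2,   # 8 GPUs x 8 x 2 = 128 total train batch
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.1,  # assumption: beta is not reported in the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Launching this with `accelerate launch` (or `torchrun`) across 8 GPUs corresponds to the `distributed_type: multi-GPU` / `num_devices: 8` entries above.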

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6789        | 0.08  | 100  | 0.6770          | -0.1062        | -0.1422          | 0.5914             | 0.0360          | -165.3552      | -155.1927    | -2.7255         | -2.7337       |
| 0.6062        | 0.16  | 200  | 0.6079          | -1.0212        | -1.3873          | 0.6670             | 0.3660          | -289.8622      | -246.6971    | -2.3696         | -2.3856       |
| 0.5965        | 0.24  | 300  | 0.5907          | -1.3779        | -1.8008          | 0.6623             | 0.4229          | -331.2100      | -282.3621    | -2.2450         | -2.2656       |
| 0.5729        | 0.32  | 400  | 0.5711          | -1.6763        | -2.2404          | 0.6828             | 0.5640          | -375.1720      | -312.2064    | -1.2920         | -1.3760       |
| 0.5645        | 0.4   | 500  | 0.5639          | -2.0721        | -2.6869          | 0.6987             | 0.6147          | -419.8194      | -351.7883    | -0.6091         | -0.7860       |
| 0.5513        | 0.48  | 600  | 0.5582          | -2.9237        | -3.5389          | 0.7108             | 0.6152          | -505.0223      | -436.9386    | 0.1224          | -0.1054       |
| 0.5571        | 0.56  | 700  | 0.5559          | -2.7971        | -3.5456          | 0.7043             | 0.7485          | -505.6961      | -424.2823    | 0.2980          | 0.0356        |
| 0.5609        | 0.64  | 800  | 0.5469          | -2.4314        | -3.0831          | 0.7108             | 0.6517          | -459.4439      | -387.7092    | 0.1922          | -0.0312       |
| 0.5514        | 0.72  | 900  | 0.5474          | -2.4774        | -3.2082          | 0.6996             | 0.7308          | -471.9533      | -392.3096    | 0.5382          | 0.2860        |
| 0.527         | 0.8   | 1000 | 0.5454          | -2.5040        | -3.2071          | 0.7080             | 0.7031          | -471.8454      | -394.9711    | 0.6372          | 0.3871        |
| 0.5487        | 0.88  | 1100 | 0.5444          | -2.2851        | -2.9963          | 0.7090             | 0.7112          | -450.7599      | -373.0831    | 0.4336          | 0.1858        |
| 0.5483        | 0.96  | 1200 | 0.5440          | -2.2940        | -3.0054          | 0.7090             | 0.7114          | -451.6765      | -373.9785    | 0.3244          | 0.0742        |
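
For context (this reflects TRL's standard DPO logging, not anything stated in the card itself): the `Rewards/*` columns are implicit rewards derived from the policy-to-reference log-probability ratio, and `Rewards/margins` is the mean chosen-minus-rejected gap. The underlying objective is the standard DPO loss:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\right],
\qquad
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
$$

where $y_w$ and $y_l$ are the chosen and rejected responses. Read this way, the growing `Rewards/margins` alongside a `Rewards/accuracies` of roughly 0.71 means the policy ranks the chosen response above the rejected one on about 71% of evaluation pairs.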

### Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.14.1