---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full
    results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta); the training dataset is not recorded in this card. It achieves the following results on the evaluation set (the reward metrics follow the DPO formulation sketched after the list):

- Loss: 0.4965
- Rewards/chosen: -2.9708
- Rewards/rejected: -4.3017
- Rewards/accuracies: 0.7695
- Rewards/margins: 1.3309
- Logps/rejected: -687.5271
- Logps/chosen: -554.1226
- Logits/rejected: -0.1928
- Logits/chosen: -0.6531
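
The reward metrics above follow the standard DPO definitions: each response's implicit reward is the policy-to-reference log-probability ratio scaled by the DPO temperature β (whose value is not recorded in this card), and the loss is the negative log-sigmoid of the chosen-minus-rejected reward gap:

```latex
% Implicit DPO reward for a response y to a prompt x
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

% DPO loss over preference pairs (y_w = chosen, y_l = rejected)
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x, y_w, y_l)}
  \left[ \log \sigma\!\left( r_\theta(x, y_w) - r_\theta(x, y_l) \right) \right]
```

In `trl`'s DPO implementation, `Rewards/chosen` and `Rewards/rejected` are the mean implicit rewards over the evaluation pairs, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one.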

## Model description

More information needed

## Intended uses & limitations

More information needed
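
In the meantime, a minimal inference sketch. This assumes the model is published as `wzhouad/zephyr-7b-dpo-full` (inferred from this card's location on the Hub, so verify before use) and that the tokenizer ships the Zephyr chat template inherited from the SFT base:

```python
# Hypothetical usage sketch; the repo id below is an assumption inferred
# from this card's location on the Hub, not something the card states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "wzhouad/zephyr-7b-dpo-full"  # assumption: verify the repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
# Assumes the tokenizer carries a chat template (the Zephyr SFT base does).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```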

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they might map onto a `trl` DPO run follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
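
A minimal sketch of how these hyperparameters might map onto a `trl` DPO run, using the `DPOTrainer` API of the trl 0.7 era that pairs with Transformers 4.35. The dataset, `beta`, sequence lengths, and precision flag below are illustrative assumptions; only the `TrainingArguments` values come from the list above:

```python
# Hypothetical reconstruction of the training setup -- beta, max lengths,
# bf16, and the dataset are assumptions, not values from this card.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Values from the hyperparameter list; Adam betas (0.9, 0.999) and
# epsilon 1e-08 match the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=3,
    evaluation_strategy="steps",
    eval_steps=100,  # matches the 100-step eval cadence in the results table
    bf16=True,       # assumption: precision is not recorded in this card
)

# Placeholder: the card does not record the preference dataset used.
# DPOTrainer expects "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your/preference-dataset")

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,               # assumption: the DPO temperature is not recorded
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    max_length=1024,        # assumption
    max_prompt_length=512,  # assumption
)
trainer.train()
```

The `num_devices: 8` and `distributed_type: multi-GPU` entries come from the launcher rather than `TrainingArguments`: running the script under `accelerate launch` on 8 GPUs reproduces the total train batch size of 8 × 8 × 2 = 128 and the total eval batch size of 8 × 8 = 64 reported above.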

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5326        | 0.11  | 100  | 0.6180          | -0.4024        | -0.6993          | 0.6797             | 0.2969          | -327.2873      | -297.2842    | -2.5800         | -2.5958       |
| 0.4709        | 0.23  | 200  | 0.5608          | -1.1383        | -1.7616          | 0.7109             | 0.6233          | -433.5121      | -370.8716    | -2.1515         | -2.1720       |
| 0.4289        | 0.34  | 300  | 0.5293          | -1.5404        | -2.3958          | 0.7539             | 0.8554          | -496.9380      | -411.0811    | -2.0882         | -2.1204       |
| 0.4195        | 0.45  | 400  | 0.5096          | -1.7916        | -2.8995          | 0.7812             | 1.1079          | -547.3041      | -436.1970    | -1.0571         | -1.2976       |
| 0.3891        | 0.57  | 500  | 0.5086          | -2.6047        | -3.9255          | 0.7812             | 1.3208          | -649.9016      | -517.5072    | -0.8608         | -1.1314       |
| 0.4182        | 0.68  | 600  | 0.4976          | -2.4968        | -3.7962          | 0.7695             | 1.2994          | -636.9742      | -506.7195    | -0.4354         | -0.8384       |
| 0.3845        | 0.79  | 700  | 0.4967          | -2.6976        | -4.0084          | 0.7695             | 1.3108          | -658.1885      | -526.7999    | -0.2826         | -0.7200       |
| 0.3896        | 0.91  | 800  | 0.4965          | -2.9708        | -4.3017          | 0.7695             | 1.3309          | -687.5271      | -554.1226    | -0.1928         | -0.6531       |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1