---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full
    results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unspecified preference dataset. It achieves the following results on the evaluation set:

- Loss: 0.0465
- Rewards/chosen: -2.6400
- Rewards/rejected: -3.4900
- Rewards/accuracies: 0.7227
- Rewards/margins: 0.8499
- Logps/rejected: -606.3505
- Logps/chosen: -521.0439
- Logits/rejected: -1.9091
- Logits/chosen: -1.9501
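
The card does not include a usage snippet, so here is a minimal inference sketch. Two assumptions are not confirmed by the card: the repository id `wzhouad/zephyr-7b-dpo-full` is inferred from the card's name, and the chat template is assumed to be inherited from the mistral-7b-sft-beta base.

```python
# Minimal inference sketch. Assumptions not confirmed by the card: the repo id
# is inferred from the card's name, and the chat template is inherited from
# the mistral-7b-sft-beta base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wzhouad/zephyr-7b-dpo-full"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain direct preference optimization."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```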

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they map onto a TRL run follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 5
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
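
The `trl` and `dpo` tags suggest these values fed TRL's `DPOTrainer`, but the card does not confirm the exact script, the preference dataset, or the DPO `beta`. The sketch below shows one plausible mapping of the listed hyperparameters onto a TRL run; the dataset and `beta` are placeholders.

```python
# Hedged reconstruction of the training setup from the hyperparameters above;
# the "trl" and "dpo" tags suggest TRL's DPOTrainer, but the card does not
# confirm the exact script, the preference dataset, or the DPO beta.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen DPO reference
tokenizer = AutoTokenizer.from_pretrained(base)

# A DPO dataset needs "prompt", "chosen", and "rejected" text columns; the
# card does not say which dataset was used, so these stay as placeholders.
train_dataset = ...
eval_dataset = ...

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 GPUs x 2 accumulation steps = 128
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=5,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # placeholder: the card does not state the beta used
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```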

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1485        | 0.11  | 100  | 0.1803          | -0.5621        | -0.7737          | 0.6406             | 0.2117          | -334.7263      | -313.2471    | -2.4998         | -2.5133       |
| 0.0592        | 0.23  | 200  | 0.0662          | -1.7402        | -2.3280          | 0.6797             | 0.5878          | -490.1518      | -431.0574    | -2.2396         | -2.2729       |
| 0.0394        | 0.34  | 300  | 0.0494          | -2.3707        | -2.9767          | 0.6953             | 0.6061          | -555.0248      | -494.1047    | -2.1101         | -2.1389       |
| 0.0401        | 0.45  | 400  | 0.0523          | -2.4275        | -3.1076          | 0.7031             | 0.6801          | -568.1116      | -499.7916    | -2.0429         | -2.0799       |
| 0.0335        | 0.57  | 500  | 0.0461          | -2.4063        | -3.2276          | 0.7148             | 0.8213          | -580.1129      | -497.6727    | -2.0057         | -2.0456       |
| 0.0273        | 0.68  | 600  | 0.0409          | -2.8465        | -3.7152          | 0.7070             | 0.8687          | -628.8741      | -541.6862    | -1.9162         | -1.9558       |
| 0.0377        | 0.79  | 700  | 0.0496          | -2.5317        | -3.3682          | 0.7227             | 0.8365          | -594.1712      | -510.2102    | -1.9274         | -1.9673       |
| 0.0352        | 0.91  | 800  | 0.0465          | -2.6400        | -3.4900          | 0.7227             | 0.8499          | -606.3505      | -521.0439    | -1.9091         | -1.9501       |
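
To help read the reward columns: if training used TRL's standard sigmoid DPO objective (plausible given the `dpo` tag, but not confirmed by the card), the loss is

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
$$

where `Rewards/chosen` and `Rewards/rejected` are the two $\beta$-scaled log-ratio terms, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of evaluation pairs whose chosen reward exceeds the rejected one.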

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1