---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
  - alignment-handbook
  - trl
  - dpo
  - generated_from_trainer
datasets:
  - HuggingFaceH4/ultrafeedback_binarized
model-index:
  - name: zephyr-7b-dpo-full
    results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set (a note after the list explains how to read the reward metrics):

- Loss: 0.2141
- Rewards/chosen: 4.2427
- Rewards/rejected: -5.5362
- Rewards/accuracies: 0.9141
- Rewards/margins: 9.7789
- Logps/rejected: -191.0266
- Logps/chosen: -248.9478
- Logits/rejected: -2.4882
- Logits/chosen: -2.5045
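
To read the reward metrics: in DPO (as implemented in TRL, which this run uses per the tags), the implicit reward for a completion is the β-scaled log-probability ratio between the trained policy and the frozen SFT reference, and the loss is the negative log-sigmoid of the chosen-minus-rejected margin. Rewards/margins is simply Rewards/chosen − Rewards/rejected, which checks out above: 4.2427 − (−5.5362) = 9.7789. In symbols, with $y_w$ the chosen and $y_l$ the rejected completion:

$$
r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\bigl(r(x, y_w) - r(x, y_l)\bigr)
$$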

## Model description

More information needed

## Intended uses & limitations

More information needed
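
As a starting point until this section is filled in, here is a minimal chat-inference sketch. The repo id `RikkiXu/zephyr-7b-dpo-full` is an assumption inferred from where this card lives, and the snippet assumes the tokenizer ships the Zephyr chat template; substitute your actual checkpoint path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id -- replace with the actual checkpoint path if it differs.
model_id = "RikkiXu/zephyr-7b-dpo-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]

# Zephyr tokenizers carry a chat template, so this builds the full prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```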

## Training and evaluation data

More information needed
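
The dataset itself is public, so a quick way to inspect what the model was trained on (`train_prefs`/`test_prefs` are the dataset's published preference splits):

```python
from datasets import load_dataset

# Preference splits of the UltraFeedback binarized dataset used for DPO.
train = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
test = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="test_prefs")

# Each row pairs a prompt with a preferred ("chosen") and a dispreferred
# ("rejected") completion, which is exactly what DPO consumes.
print(train.column_names)  # expect 'prompt', 'chosen', 'rejected', among others
print(train[0]["prompt"][:200])
```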

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
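
These settings multiply out as listed (8 per-device train batch × 8 GPUs × 2 accumulation steps = 128 effective; 8 × 8 = 64 for eval). As a rough sketch, not the exact launch config (the run was presumably launched via the alignment-handbook recipes with TRL's `DPOTrainer` on top, per the tags), they map onto `transformers` `TrainingArguments` like this:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; the real run was distributed
# across 8 GPUs and driven by trl's DPOTrainer.
training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 devices x 2 accumulation = 128 total
    per_device_eval_batch_size=8,   # x 8 devices = 64 total
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    optim="adamw_torch",            # betas=(0.9, 0.999), eps=1e-08 are the defaults
)
```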

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.3315        | 0.21  | 100  | 0.2975          | 3.4373         | -3.8279          | 0.9023             | 7.2653          | -187.6101      | -250.5587    | -2.5158         | -2.5301       |
| 0.2909        | 0.42  | 200  | 0.2754          | 4.8618         | -4.1200          | 0.9180             | 8.9818          | -188.1942      | -247.7097    | -2.5310         | -2.5459       |
| 0.6445        | 0.63  | 300  | 0.2245          | 4.2165         | -5.3713          | 0.9102             | 9.5878          | -190.6968      | -249.0004    | -2.4915         | -2.5059       |
| 0.2653        | 0.84  | 400  | 0.2103          | 4.5125         | -5.2808          | 0.9258             | 9.7933          | -190.5157      | -248.4084    | -2.4965         | -2.5123       |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2