
zephyr-7b-gpo-iter2

This model is a fine-tuned version of DUAL-GPO/zephyr-7b-gpo-iter1 on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0114
  • Rewards/chosen: -0.0874
  • Rewards/rejected: -0.0645
  • Rewards/accuracies: 0.3940
  • Rewards/margins: -0.0229
  • Logps/rejected: -264.6114
  • Logps/chosen: -288.2511
  • Logits/rejected: -2.1907
  • Logits/chosen: -2.3882
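
The "Rewards/*" metrics above follow the usual convention for preference-optimization trainers: an implicit reward is derived from the gap between policy and reference log-probabilities on the chosen and rejected completions. The exact DUAL-GPO loss and its beta coefficient are not stated on this card, so the sketch below is only illustrative of how such metrics are typically computed, with assumed beta and toy log-probabilities.

```python
import torch

beta = 0.1  # assumed; the beta used by DUAL-GPO is not stated on this card

def implicit_reward(policy_logps, ref_logps):
    # DPO-style implicit reward: beta * (log pi_theta(y|x) - log pi_ref(y|x))
    return beta * (policy_logps - ref_logps)

# Toy sequence log-probabilities for a batch of 4 preference pairs (illustrative only).
policy_chosen   = torch.tensor([-280.1, -290.4, -275.0, -301.2])
ref_chosen      = torch.tensor([-279.0, -289.9, -276.1, -300.0])
policy_rejected = torch.tensor([-260.3, -270.8, -255.6, -281.4])
ref_rejected    = torch.tensor([-261.0, -270.1, -254.9, -282.0])

rewards_chosen   = implicit_reward(policy_chosen, ref_chosen)          # -> Rewards/chosen
rewards_rejected = implicit_reward(policy_rejected, ref_rejected)      # -> Rewards/rejected
rewards_margins  = rewards_chosen - rewards_rejected                   # -> Rewards/margins
rewards_accuracy = (rewards_chosen > rewards_rejected).float().mean()  # -> Rewards/accuracies

print(rewards_chosen.mean(), rewards_rejected.mean(), rewards_margins.mean(), rewards_accuracy)
```

On this reading, the negative Rewards/margins on the evaluation set indicate that, on average, the rejected completions moved less (relative to the reference model) than the chosen ones did.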

Model description

More information needed

Intended uses & limitations

More information needed
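
The card does not yet document usage, so the following is a minimal loading sketch. It assumes this repository holds a PEFT adapter; the base checkpoint is not stated on the card, and the Mistral-7B model ID below is an assumption based on the zephyr-7b naming and should be replaced with the actual base model.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"        # assumed base model, not confirmed by the card
adapter_id = "DUAL-GPO/zephyr-7b-gpo-iter2"  # this repository (PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```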

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2
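
For reference, the listed hyperparameters map onto Hugging Face TrainingArguments as in the sketch below. The actual DUAL-GPO training script is not included in this card, so the output_dir and anything not listed above (e.g. precision flags) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-gpo-iter2",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,     # total train batch size = 1 x 2 = 2
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```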

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0012 | 0.3 | 100 | 0.0016 | -0.0164 | -0.0160 | 0.5035 | -0.0005 | -259.7555 | -281.1500 | -2.1644 | -2.3583 |
| 0.0011 | 0.61 | 200 | 0.0018 | -0.0088 | -0.0077 | 0.4815 | -0.0011 | -258.9317 | -280.3858 | -2.1837 | -2.3781 |
| 0.0015 | 0.91 | 300 | 0.0019 | -0.0167 | -0.0149 | 0.4805 | -0.0017 | -259.6521 | -281.1740 | -2.1796 | -2.3740 |
| 0.0397 | 1.22 | 400 | 0.0074 | -0.0779 | -0.0627 | 0.4160 | -0.0151 | -264.4323 | -287.2935 | -2.1632 | -2.3568 |
| 0.0305 | 1.52 | 500 | 0.0117 | -0.0898 | -0.0668 | 0.3945 | -0.0230 | -264.8388 | -288.4842 | -2.1902 | -2.3875 |
| 0.0366 | 1.82 | 600 | 0.0115 | -0.0876 | -0.0647 | 0.4000 | -0.0230 | -264.6301 | -288.2723 | -2.1900 | -2.3873 |

Framework versions

  • PEFT 0.7.1
  • Transformers 4.36.2
  • Pytorch 2.1.2+cu118
  • Datasets 2.14.6
  • Tokenizers 0.15.2
