---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
  results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full).
It achieves the following results on the evaluation set (the `Rewards/*` metrics are explained in the notes at the end of this card):
- Loss: 0.6824
- Rewards/chosen: -4.2277
- Rewards/rejected: -7.2864
- Rewards/accuracies: 0.7773
- Rewards/margins: 3.0587
- Logps/rejected: -408.3961
- Logps/chosen: -347.1476
- Logits/rejected: -0.8310
- Logits/chosen: -1.2135

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent trainer configuration appears at the end of this card):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:-------------:|:-----:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
| 0.582         | 0.21  | 100  | -2.5812       | -2.5431         | -254.2386    | -263.2876      | 0.5878          | 0.7188             | 0.4177         | 0.4488          | -0.0310          |
| 0.558         | 0.42  | 200  | -2.3893       | -2.3398         | -261.4734    | -280.4191      | 0.5196          | 0.7773             | 0.0560         | 0.9436          | -0.8876          |
| 0.4914        | 0.63  | 300  | -2.3653       | -2.3039         | -264.2936    | -286.6201      | 0.5110          | 0.7656             | -0.0850        | 1.1126          | -1.1976          |
| 0.4922        | 0.84  | 400  | -2.3145       | -2.2570         | -263.2854    | -285.4248      | 0.5095          | 0.7852             | -0.0346        | 1.1033          | -1.1379          |
| 0.1908        | 1.05  | 500  | -2.2442       | -2.1660         | -269.8426    | -300.6474      | 0.5179          | 0.7852             | -0.3625        | 1.5366          | -1.8990          |
| 0.1675        | 1.26  | 600  | -2.2220       | -2.1249         | -287.2300    | -324.0812      | 0.5377          | 0.8008             | -1.2318        | 1.8389          | -3.0707          |
| 0.1567        | 1.46  | 700  | -2.0453       | -1.9285         | -298.7820    | -333.3354      | 0.5348          | 0.7891             | -1.8094        | 1.7240          | -3.5334          |
| 0.1475        | 1.67  | 800  | -2.2409       | -2.1202         | -296.3533    | -332.4951      | 0.5382          | 0.8008             | -1.6880        | 1.8034          | -3.4914          |
| 0.1422        | 1.88  | 900  | -2.1980       | -2.0630         | -296.0324    | -335.6016      | 0.5518          | 0.7852             | -1.6719        | 1.9748          | -3.6467          |
| 0.044         | 2.09  | 1000 | -1.7406       | -1.4629         | -316.4520    | -365.4959      | 0.6058          | 0.7891             | -2.6929        | 2.4485          | -5.1414          |
| 0.0307        | 2.3   | 1100 | -1.3310       | -0.9162         | -337.0383    | -397.1617      | 0.6700          | 0.7695             | -3.7222        | 3.0025          | -6.7247          |
| 0.0317        | 2.51  | 1200 | -1.2927       | -0.9227         | -341.8261    | -401.9448      | 0.6711          | 0.7773             | -3.9616        | 3.0023          | -6.9639          |
| 0.0264        | 2.72  | 1300 | -1.2190       | -0.8370         | -347.2216    | -407.8352      | 0.6778          | 0.7773             | -4.2314        | 3.0270          | -7.2584          |
| 0.0343        | 2.93  | 1400 | -1.2135       | -0.8310         | -347.1476    | -408.3961      | 0.6824          | 0.7773             | -4.2277        | 3.0587          | -7.2864          |

### Framework versions

- Transformers 4.38.2
- PyTorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2
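
### Notes on the reported metrics

For reference, the `Rewards/*` metrics reported above follow the standard DPO implicit-reward convention used by TRL: each completion's reward is the scaled log-probability ratio between the trained policy and the frozen SFT reference model. A sketch of the standard definitions (the exact temperature $\beta$ used for this run is not reported in this card):

```latex
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)

\mathcal{L}_{\mathrm{DPO}} = -\log \sigma \bigl( r_\theta(x, y_w) - r_\theta(x, y_l) \bigr)
```

Under these definitions, `Rewards/chosen` and `Rewards/rejected` are the mean implicit rewards of the chosen ($y_w$) and rejected ($y_l$) completions, `Rewards/margins` is the mean of their difference, and `Rewards/accuracies` is the fraction of evaluation pairs where the chosen reward exceeds the rejected reward.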
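
### Reproducing the training configuration

The hyperparameters listed under "Training procedure" map directly onto a TRL `DPOTrainer` run. Below is a minimal sketch, assuming the TRL `DPOTrainer` API contemporary with Transformers 4.38. The preference dataset id, its split names, and `beta=0.1` are assumptions, not taken from this card (the card does not name its training data or DPO temperature).

```python
# Minimal sketch of a DPO run matching the hyperparameters listed above.
# Launch across 8 GPUs (e.g. with `accelerate launch`) to reproduce the
# reported total train batch size of 128.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen DPO reference
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder dataset id: the card does not name one. Any preference
# dataset with string "prompt"/"chosen"/"rejected" columns works here.
dataset = load_dataset("your-org/your-preference-dataset")

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 GPUs x 2 accumulation steps = 128 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the defaults, matching the card.
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,  # assumed; the DPO temperature is not reported in this card
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```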
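
### Inference example

Since the usage sections above are placeholders, here is a minimal inference sketch using the Transformers version listed above. The hub repo id is hypothetical (substitute the actual path where this checkpoint is hosted), and the prompt is illustrative only; the chat template is inherited from the zephyr SFT base model.

```python
# Minimal sketch of loading the checkpoint for chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/zephyr-7b-dpo-full"  # hypothetical hub path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```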