# zephyr-7b-dpo-full
This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set:
- Loss: 0.3160
- Rewards/chosen: -4.1121
- Rewards/rejected: -8.3353
- Rewards/accuracies: 0.8201
- Rewards/margins: 4.2232
- Logps/rejected: -1123.6224
- Logps/chosen: -702.1752
- Logits/rejected: 0.5403
- Logits/chosen: -0.4404
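The card does not ship a usage snippet, so here is a minimal inference sketch using the standard `transformers` text-generation API. The chat template is assumed to be inherited from the SFT base model, and the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NicholasCorrado/zephyr-7b-dpo-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~14 GB of weights for a 7B model in bf16
    device_map="auto",
)

# Chat-formatted prompt; the template is assumed to come from the SFT base.
messages = [{"role": "user", "content": "What is Direct Preference Optimization?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```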
## Model description

As the name indicates, this is a DPO (Direct Preference Optimization) fine-tune of alignment-handbook/zephyr-7b-sft-full, which itself descends from mistralai/Mistral-7B-v0.1. More information needed.
## Intended uses & limitations

More information needed
## Training and evaluation data

The model was trained and evaluated on the HuggingFaceH4/ultrafeedback_binarized preference dataset (pairs of chosen and rejected completions). More information needed.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
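The training script itself is not included in this card. As a rough reconstruction, the hyperparameters above map onto `trl`'s `DPOTrainer` as sketched below; the effective batch size of 128 is 8 per device × 4 GPUs × 4 gradient-accumulation steps. The `beta` value and mixed-precision setting are assumptions (they are not recorded in the card), and exact argument names vary across `trl` versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# NOTE: alignment-handbook recipes apply the chat template to the
# prompt/chosen/rejected columns before training; that preprocessing
# step is omitted here for brevity.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

training_args = DPOConfig(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 per device x 4 GPUs x 4 steps = 128 effective
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: the precision setting is not recorded in the card
    beta=0.1,   # assumption: the DPO beta is not recorded in the card
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with None, trl snapshots the policy as the frozen reference
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl releases name this processing_class
)
trainer.train()
```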
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5381 | 0.1152 | 100 | 0.4758 | -1.9882 | -2.9171 | 0.7270 | 0.9288 | -581.7981 | -489.7893 | -2.8822 | -2.9045 |
| 0.4268 | 0.2303 | 200 | 0.3577 | -3.9068 | -6.8487 | 0.7976 | 2.9419 | -974.9606 | -681.6494 | -0.6781 | -0.9791 |
| 0.4067 | 0.3455 | 300 | 0.3411 | -3.9757 | -7.6481 | 0.8094 | 3.6724 | -1054.9027 | -688.5351 | -0.6642 | -1.2474 |
| 0.4011 | 0.4607 | 400 | 0.3295 | -4.4449 | -8.4011 | 0.8156 | 3.9562 | -1130.1991 | -735.4550 | 0.1183 | -0.7429 |
| 0.3727 | 0.5759 | 500 | 0.3260 | -3.7203 | -7.6540 | 0.8161 | 3.9337 | -1055.4913 | -662.9987 | -0.4066 | -1.3009 |
| 0.3933 | 0.6910 | 600 | 0.3190 | -3.7331 | -7.5182 | 0.8257 | 3.7851 | -1041.9088 | -664.2776 | 0.3247 | -0.5819 |
| 0.3858 | 0.8062 | 700 | 0.3166 | -3.9569 | -8.0356 | 0.8246 | 4.0787 | -1093.6547 | -686.6614 | 0.3586 | -0.6058 |
| 0.3785 | 0.9214 | 800 | 0.3161 | -4.1174 | -8.3387 | 0.8212 | 4.2213 | -1123.9625 | -702.7068 | 0.5558 | -0.4246 |
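For context on the reward columns: DPO optimizes an implicit reward defined as the β-scaled log-probability ratio between the policy and the frozen reference model,

$$
r_\theta(x, y) = \beta \left[ \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right].
$$

Rewards/chosen and Rewards/rejected report this quantity averaged over the chosen and rejected completions in the evaluation set, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs in which the chosen completion receives the higher reward. The β used for this run is not stated in the card.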
### Framework versions

- Transformers 4.44.1
- PyTorch 2.1.2+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
## Model tree

- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuned from: alignment-handbook/zephyr-7b-sft-full