# zephyr-7b-dpo-full-gpt-high-curriculum
This model is a DPO fine-tuned version of alignment-handbook/zephyr-7b-sft-full on a preference dataset that is not named in the training metadata (the autogenerated card records `None`). It achieves the following results on the evaluation set; a sketch of how the reward metrics are derived follows the list:
- Loss: 0.5179
- Rewards/chosen: -0.8125
- Rewards/rejected: -1.5680
- Rewards/accuracies: 0.7241
- Rewards/margins: 0.7555
- Logps/rejected: -402.4431
- Logps/chosen: -365.2540
- Logits/rejected: 1.3741
- Logits/chosen: 0.3005
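For readers unfamiliar with these columns: in DPO, the implicit reward for a completion is β times the log-probability ratio between the policy and the frozen reference (SFT) model. Below is a minimal sketch of how the `rewards/*` metrics above are typically derived from per-sequence log-probs; the β value is an assumption, since this card does not record it:

```python
import torch

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Reconstruct the logged DPO reward metrics from summed per-sequence
    log-probs under the policy and the frozen reference model.

    `beta=0.1` is an assumption for illustration; the card does not log it.
    The Logps/chosen and Logps/rejected columns correspond to the mean of
    policy_chosen_logps and policy_rejected_logps, respectively.
    """
    rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = rewards_chosen - rewards_rejected
    accuracy = (rewards_chosen > rewards_rejected).float().mean()
    return {
        "rewards/chosen": rewards_chosen.mean().item(),
        "rewards/rejected": rewards_rejected.mean().item(),
        "rewards/margins": margins.mean().item(),
        "rewards/accuracies": accuracy.item(),
    }
```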
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
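As a rough illustration, not the exact alignment-handbook recipe, here is how these hyperparameters would map onto TRL's `DPOConfig`/`DPOTrainer`. The tiny inline dataset and the `beta` value are placeholders, since the actual preference data and β are not recorded in this card:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data in the prompt/chosen/rejected format DPOTrainer
# expects; the real dataset used for this model is unnamed in the card.
prefs = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO optimizes a policy directly on preference pairs."],
    "rejected": ["DPO is a kind of fruit."],
})

# Hyperparameters copied from the list above; beta is an assumption.
args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt-high-curriculum",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation = 128 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    beta=0.1,                        # assumed; not logged in this card
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=prefs,
    eval_dataset=prefs,
    tokenizer=tokenizer,             # newer TRL releases use processing_class=
)
trainer.train()
```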
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6573 | 0.1147 | 50 | 0.6482 | -0.0593 | -0.1537 | 0.6422 | 0.0944 | -261.0173 | -289.9336 | -2.4140 | -2.5168 |
| 0.5517 | 0.2294 | 100 | 0.5831 | -0.4867 | -1.0049 | 0.6940 | 0.5182 | -346.1422 | -332.6784 | -0.1634 | -0.6358 |
| 0.5596 | 0.3440 | 150 | 0.5497 | -0.4238 | -1.0012 | 0.7241 | 0.5774 | -345.7715 | -326.3861 | -0.2421 | -1.0045 |
| 0.557 | 0.4587 | 200 | 0.5398 | -0.7669 | -1.4634 | 0.7328 | 0.6965 | -391.9895 | -360.6939 | 0.6306 | -0.2433 |
| 0.5483 | 0.5734 | 250 | 0.5334 | -0.9092 | -1.6482 | 0.7371 | 0.7390 | -410.4661 | -374.9231 | 1.1694 | 0.1535 |
| 0.5338 | 0.6881 | 300 | 0.5227 | -0.7072 | -1.4506 | 0.7241 | 0.7434 | -390.7057 | -354.7213 | 1.1201 | 0.0530 |
| 0.5111 | 0.8028 | 350 | 0.5173 | -0.7777 | -1.5283 | 0.7284 | 0.7506 | -398.4796 | -361.7773 | 1.3205 | 0.2474 |
| 0.5185 | 0.9174 | 400 | 0.5179 | -0.8125 | -1.5680 | 0.7241 | 0.7555 | -402.4431 | -365.2540 | 1.3741 | 0.3005 |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
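For completeness, a minimal inference sketch, assuming the hub id from the model tree below and the chat template inherited from the SFT base; generation settings are illustrative, not recommendations from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sfulay/zephyr-7b-dpo-full-gpt-high-curriculum"  # repo id from the model tree below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is direct preference optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```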
## Model tree

- Repository: sfulay/zephyr-7b-dpo-full-gpt-high-curriculum
- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuned from: alignment-handbook/zephyr-7b-sft-full