---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
  results: []
---
# zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5280
- Rewards/chosen: -0.0160
- Rewards/rejected: -1.2503
- Rewards/accuracies: 0.7798
- Rewards/margins: 1.2343
- Logps/rejected: -272.7223
- Logps/chosen: -282.1141
- Logits/rejected: -2.5362
- Logits/chosen: -2.5901
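The card does not include a usage snippet, so here is a minimal inference sketch with `transformers`. The repository id below is an assumption (substitute the actual path where this checkpoint is hosted), and the chat-template call assumes the tokenizer ships Zephyr's chat template, as the SFT base model does.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with wherever this checkpoint is actually hosted.
model_id = "alignment-handbook/zephyr-7b-dpo-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Summarize DPO training in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```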
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
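As a sanity check on how these settings map onto code, below is a hedged training sketch using trl's `DPOTrainer`. The dataset name is a hypothetical placeholder (the card lists the dataset as unknown), `beta` is not recorded here so trl's default of 0.1 is shown, and argument names follow the `DPOConfig` API of recent trl releases (older releases passed a plain `TrainingArguments` plus a `beta` keyword; newer ones rename `tokenizer` to `processing_class`).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 4 GPUs x 2 accumulation steps = 64 total
    per_device_eval_batch_size=8,   # x 4 GPUs = 32 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    beta=0.1,  # not recorded in this card; 0.1 is trl's default
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl clones a frozen reference model when None
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```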
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5446 | 0.1047 | 100 | 0.5753 | 1.0111 | 0.3529 | 0.7242 | 0.6581 | -256.6898 | -271.8434 | -2.5161 | -2.5743 |
| 0.5475 | 0.2093 | 200 | 0.5464 | 0.4347 | -0.4824 | 0.7639 | 0.9172 | -265.0432 | -277.6068 | -2.5380 | -2.5923 |
| 0.5359 | 0.3140 | 300 | 0.5473 | 0.0697 | -1.0170 | 0.7579 | 1.0867 | -270.3889 | -281.2571 | -2.5066 | -2.5596 |
| 0.5228 | 0.4186 | 400 | 0.5321 | -0.2311 | -1.3065 | 0.7540 | 1.0754 | -273.2837 | -284.2652 | -2.5933 | -2.6471 |
| 0.5217 | 0.5233 | 500 | 0.5260 | 0.0143 | -1.2073 | 0.7877 | 1.2216 | -272.2919 | -281.8111 | -2.5195 | -2.5773 |
| 0.517 | 0.6279 | 600 | 0.5262 | -0.2922 | -1.4562 | 0.7698 | 1.1640 | -274.7808 | -284.8755 | -2.5183 | -2.5744 |
| 0.4766 | 0.7326 | 700 | 0.5279 | -0.0183 | -1.2936 | 0.7798 | 1.2753 | -273.1544 | -282.1366 | -2.5194 | -2.5751 |
| 0.4894 | 0.8373 | 800 | 0.5257 | -0.0567 | -1.2594 | 0.7778 | 1.2027 | -272.8127 | -282.5211 | -2.5311 | -2.5851 |
| 0.4722 | 0.9419 | 900 | 0.5280 | -0.0160 | -1.2503 | 0.7798 | 1.2343 | -272.7223 | -282.1141 | -2.5362 | -2.5901 |
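To interpret the reward columns: in trl's DPO implementation, the implicit reward for a completion is the β-scaled log-probability ratio between the policy and the frozen reference model, and the loss is a logistic loss on the margin between the chosen completion $y_w$ and the rejected completion $y_l$:

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)
$$

`Rewards/margins` is the mean of $r_\theta(x, y_w) - r_\theta(x, y_l)$ over the evaluation set, and `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one.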
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
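For reproducibility, a pinned install matching the versions above might look like the following (the trl version used is not recorded in this card, so it is left unpinned):

```bash
pip install transformers==4.40.2 torch==2.1.2 datasets==2.19.1 tokenizers==0.19.1 trl
```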