---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
  results: []
---
# zephyr-7b-dpo-full

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full), trained with DPO on a preference dataset that is not recorded in this card. It achieves the following results on the evaluation set:
- Loss: 0.2144
- Rewards/chosen: -2.2032
- Rewards/rejected: -6.3258
- Rewards/accuracies: 0.8984
- Rewards/margins: 4.1226
- Logps/rejected: -812.5331
- Logps/chosen: -477.7561
- Logits/rejected: 1.2277
- Logits/chosen: -0.1965
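For readers unfamiliar with these column names: assuming TRL's standard DPO logging conventions (which this trainer appears to follow), `Rewards/chosen` and `Rewards/rejected` are the implicit DPO rewards

$$ r_\theta(x, y) = \beta \, \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, $$

averaged over chosen and rejected completions respectively; `Rewards/margins` is their difference (here \\(-2.2032 - (-6.3258) = 4.1226\\)); `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one; and the `Logps/*` columns are the summed log-probabilities of the completions under the policy.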
## Model description

More information needed
## Intended uses & limitations

More information needed
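As a minimal starting point, the model should load with the standard `transformers` chat pipeline, as the SFT base model ships a chat template. This is a sketch only: the repo id below is a placeholder, and the sampling settings are illustrative.

```python
# Minimal inference sketch; the model path is a placeholder, not a confirmed Hub id.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="path/to/zephyr-7b-dpo-full",  # placeholder repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
# Render the conversation with the tokenizer's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```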
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
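The hyperparameters above map onto a TRL `DPOTrainer` setup roughly as follows. This is a minimal sketch assuming the TRL 0.7.x-era API (contemporary with Transformers 4.38): the dataset path is a placeholder, `beta` is not recorded in this card (0.1 is TRL's default), mixed precision is assumed, and the 8-GPU launch is handled externally (e.g. via `accelerate launch`).

```python
# Sketch of a matching TRL DPO setup; dataset path and beta are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset with "prompt" / "chosen" / "rejected" columns.
dataset = load_dataset("path/to/preference-dataset")

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation = 128 effective
    per_device_eval_batch_size=8,    # x 8 GPUs = 64 effective
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the checkpoints in the results table
    bf16=True,                       # assumption: precision is not recorded in the card
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,                        # assumption: TRL default, not recorded in the card
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```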
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.3617        | 0.21  | 100  | 0.3445          | -1.1381        | -2.9522          | 0.8633             | 1.8140          | -475.1705      | -371.2483    | -1.3192         | -1.5665       |
| 0.2941        | 0.42  | 200  | 0.2595          | -1.5303        | -4.5965          | 0.8711             | 3.0662          | -639.6045      | -410.4631    | -0.2909         | -1.0255       |
| 0.259         | 0.63  | 300  | 0.2187          | -2.2257        | -6.1115          | 0.8945             | 3.8858          | -791.1059      | -480.0016    | 1.2573          | -0.0803       |
| 0.2268        | 0.84  | 400  | 0.2144          | -2.2032        | -6.3258          | 0.8984             | 4.1226          | -812.5331      | -477.7561    | 1.2277          | -0.1965       |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2