---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
  results: []
---

# zephyr-7b-dpo-full

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0945
- Rewards/chosen: -1.3600
- Rewards/rejected: -2.1836
- Rewards/accuracies: 0.7656
- Rewards/margins: 0.8237
- Logps/rejected: -475.7151
- Logps/chosen: -393.0347
- Logits/rejected: -2.3019
- Logits/chosen: -2.3254

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 5
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1643        | 0.21  | 100  | 0.1558          | -0.4076        | -0.7972          | 0.7461             | 0.3896          | -337.0709      | -297.7996    | -2.7691         | -2.7902       |
| 0.1003        | 0.42  | 200  | 0.0997          | -1.2712        | -1.9340          | 0.7031             | 0.6629          | -450.7552      | -384.1553    | -2.5137         | -2.5340       |
| 0.0953        | 0.63  | 300  | 0.1024          | -1.2036        | -1.9243          | 0.7539             | 0.7207          | -449.7823      | -377.3981    | -2.3837         | -2.4030       |
| 0.0811        | 0.84  | 400  | 0.0945          | -1.3600        | -2.1836          | 0.7656             | 0.8237          | -475.7151      | -393.0347    | -2.3019         | -2.3254       |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
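
## Reproducing the training setup (sketch)

The hyperparameters above correspond to a TRL `DPOTrainer` run. The following is a minimal, hypothetical sketch of how they map onto TRL's 0.7.x-era API (contemporaneous with the Transformers 4.35.2 listed above). The preference dataset, the DPO `beta`, the sequence lengths, and the precision setting are assumptions, since the card does not specify them.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "HuggingFaceH4/mistral-7b-sft-beta"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder preference data: the card does not name the real dataset.
# This era of DPOTrainer expects string columns "prompt", "chosen", "rejected".
train_ds = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": ["Direct Preference Optimization, a preference-tuning method."],
    "rejected": ["No idea."],
})

args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=2,   # 8 devices x 8 x 2 = 128 effective
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=5,
    bf16=True,                       # assumption: precision is not stated
)

trainer = DPOTrainer(
    model,
    ref_model=None,         # TRL clones the policy as the frozen reference
    args=args,
    beta=0.1,               # assumption: the card does not report beta
    train_dataset=train_ds,
    eval_dataset=train_ds,  # stand-in; the real eval split is unspecified
    tokenizer=tokenizer,
    max_length=1024,        # assumption
    max_prompt_length=512,  # assumption
)
trainer.train()
```

The multi-GPU run (8 devices, distributed_type multi-GPU) would be handled by the launcher, e.g. `accelerate launch`, rather than inside the script itself.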
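
## Usage example (sketch)

A minimal, hypothetical inference sketch; `model_id` is a placeholder to replace with the actual Hub repo id, and the generation settings are illustrative defaults rather than values from this card. The SFT base model is a chat model, so the prompt is formatted with `apply_chat_template` (assuming the tokenizer config ships a chat template).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zephyr-7b-dpo-full"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```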