---
license: apache-2.0
tags:
- dpo
datasets:
- Intel/orca_dpo_pairs
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
---

# mistral-7b-neuralhermes-2.5-dpo

mistral-7b-neuralhermes-2.5-dpo is a DPO fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), trained on the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) preference dataset with the LoRA and training settings listed below (see the training sketch after the lists for how they fit together).

### LoRA

- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05

### Training arguments

- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536
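
### Training sketch

As a rough illustration of how the hyperparameters above fit together, here is a minimal DPO training sketch using `peft` and `trl`. It is not the exact script used for this model: the ChatML prompt formatting, the `output_dir`, and the `bias`/`task_type` LoRA settings are assumptions, and the `DPOTrainer` keyword names vary across `trl` releases.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# LoRA configuration matching the values listed above;
# bias and task_type are assumed defaults, not stated in the card.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Map the dataset's raw columns (system, question, chosen, rejected) to the
# prompt/chosen/rejected format DPOTrainer expects. The ChatML formatting is
# an assumption based on OpenHermes 2.5's chat template.
def to_dpo_format(example):
    prompt = (
        f"<|im_start|>system\n{example['system']}<|im_end|>\n"
        f"<|im_start|>user\n{example['question']}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
    return {
        "prompt": prompt,
        "chosen": example["chosen"] + "<|im_end|>",
        "rejected": example["rejected"] + "<|im_end|>",
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

# Training arguments matching the values listed above.
training_args = DPOConfig(
    output_dir="mistral-7b-neuralhermes-2.5-dpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer` in older trl releases
    peft_config=peft_config,
)
trainer.train()
```

With `peft_config` supplied, `DPOTrainer` wraps the base model in LoRA adapters and uses the frozen base weights as the implicit reference model, so no separate reference copy needs to be loaded.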
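
### Usage

A minimal inference sketch, assuming the model is published under the repo id below (hypothetical, substitute the actual Hub path) and inherits the ChatML chat template from OpenHermes 2.5:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mistral-7b-neuralhermes-2.5-dpo"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is DPO fine-tuning?"},
]

# apply_chat_template renders the ChatML prompt and appends the
# assistant header so generation continues as the assistant turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```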