---
license: apache-2.0
---

# neural-chat-7b-v3-3

## Fine-tuning on Intel Gaudi2

We merge our fine-tuned LoRA weights into the base model to produce the released checkpoint; a minimal merging sketch follows below.
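
As a hedged illustration of that merging step, the sketch below uses the standard `peft` + `transformers` workflow (`PeftModel.from_pretrained` followed by `merge_and_unload()`). The adapter path `path/to/finetuned-lora-adapter` and the output directory are placeholders, not artifacts published with this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the original base model and its tokenizer.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the fine-tuned LoRA adapter (placeholder path), then fold the adapter
# weights into the base weights so the result is a standalone checkpoint.
model = PeftModel.from_pretrained(base, "path/to/finetuned-lora-adapter")
merged = model.merge_and_unload()

# Save the merged model and tokenizer for later use or upload.
merged.save_pretrained("neural-chat-7b-merged")
tokenizer.save_pretrained("neural-chat-7b-merged")
```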

This model is fine-tuned from mistralai/Mistral-7B-v0.1 on the open-source dataset Open-Orca/SlimOrca, and then aligned with the DPO (Direct Preference Optimization) algorithm. For more details, refer to our blog post: The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2.
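
For reference, here is a minimal inference sketch using the standard `transformers` text-generation API. The Hub id `Intel/neural-chat-7b-v3-3` and the `### System / ### User / ### Assistant` prompt format are assumptions based on the neural-chat model family, not details stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/neural-chat-7b-v3-3"  # assumed Hub id for this model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Assumed neural-chat prompt format: system, user, and assistant tags.
prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nSummarize what DPO alignment does.\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```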