lvkaokao committed
Commit: e53e64d
Parent: de75fa7
update doc.
README.md CHANGED

@@ -4,6 +4,6 @@ license: apache-2.0
 
 ## Fine-tuning on Intel Gaudi2
 
-merge lora weights....
+merge our finetuned lora weights....
 
 This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with the DPO algorithm. For more details, you can refer to our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
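The changed line refers to merging the fine-tuned LoRA weights back into the base model. As an illustrative sketch only (not part of this commit), merging a LoRA adapter into mistralai/Mistral-7B-v0.1 with Hugging Face PEFT could look like the following; the adapter path and output directory are placeholder assumptions:

```python
# Minimal sketch (not from the commit): fold LoRA adapter weights into the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_path = "path/to/lora-adapter"   # placeholder: location of the fine-tuned LoRA weights
output_dir = "merged-model"             # placeholder: where to save the merged checkpoint

# Load the base model and attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_path)

# Merge the adapter weights into the base weights and drop the PEFT wrappers.
merged_model = model.merge_and_unload()

# Save the standalone merged model together with the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(base_id)
merged_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```

After `merge_and_unload()`, the saved checkpoint loads like a plain Transformers model, with no PEFT dependency needed at inference time.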