lvkaokao committed on
Commit 292ea08
1 Parent(s): 8204f85

update blog.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -4,7 +4,7 @@ license: apache-2.0
 
 ## Finetuning on [habana](https://habana.ai/) HPU
 
- This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [NeuralChat: Simplifying Supervised Instruction Fine-Tuning and Reinforcement Aligning](https://medium.com/intel-analytics-software/neuralchat-simplifying-supervised-instruction-fine-tuning-and-reinforcement-aligning-for-chatbots-d034bca44f69) and [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
+ This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
 
 ## Model date
 Neural-chat-7b-v3-1 was trained between September and October, 2023.