FelixChao committed
Commit 9510554
1 Parent(s): 07bf131

Update README.md

Files changed (1): README.md (+5 −1)
README.md CHANGED

@@ -102,11 +102,15 @@ WestSeverus-7B-DPO-v2 was trained using the ChatML prompt templates with system
 * **GPTQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GPTQ
 * **AWQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-AWQ
 
+### MaziyarPanahi/WestSeverus-7B-DPO-v2-GGUF
+
+* **GGUF**: https://huggingface.co/MaziyarPanahi/WestSeverus-7B-DPO-v2-GGUF
+
 ## 🙏 Gratitude
 
 * Thanks to @senseable for [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2).
 * Thanks to @jondurbin for the [jondurbin/truthy-dpo-v0.1 dataset](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
 * Thanks to @Charles Goddard for MergeKit.
-* Thanks to @TheBloke, @s3nh for Quantized Models.
+* Thanks to @TheBloke, @s3nh, @MaziyarPanahi for Quantized Models.
 * Thanks to @mlabonne, @CultriX for YALL - Yet Another LLM Leaderboard.
 * Thank you to all the other people in the Open Source AI community who utilized this model for further research and improvement.