Update README.md
README.md
CHANGED
@@ -15,6 +15,6 @@ license: apache-2.0
 
 Dolphin 2.6 Mistral 7b - DPO 🐬
 
-This is a quantized GGUF version of dolphin-2.6-mistral-7b to 4_0, 8_0 bits and the converted 16 FP model.
+This is a quantized GGUF version of dolphin-2.6-mistral-7b DPO to 4_0, 8_0 bits and the converted 16 FP model.
 
 (link to the original model : https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)