Update README.md
README.md CHANGED

@@ -18,7 +18,7 @@ This is a very promising roleplay model cooked by the amazing Sao10K!
 > **Quantization process:** <br>
 > For future reference, these quants have been done after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
 > Imatrix data was generated from the FP16-GGUF and conversions directly from the BF16-GGUF. <br>
-> This was a bit more disk and compute intensive but hopefully avoided any losses
+> This was a bit more disk and compute intensive but hopefully avoided any losses during conversion. <br>
 > If you noticed any issues let me know in the discussions.

 > [!NOTE]