Lewdiculous committed
Commit 4894bc9
1 Parent(s): e559177
Update README.md
README.md CHANGED
@@ -17,10 +17,19 @@ inference: false
---

This repository hosts GGUF-Imatrix quantizations for [ChaoticNeutrals/BuRP_7B](https://huggingface.co/ChaoticNeutrals/BuRP_7B).

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
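As a rough illustration of that idea, the sketch below weights the quantization error of a weight row by a per-channel importance derived from calibration activations. It is a minimal conceptual example, not llama.cpp's implementation: the function names, the mean-squared-activation importance, and the brute-force scale search are assumptions made for brevity.

```python
# Conceptual sketch only (assumed formulation, not llama.cpp's code):
# calibration activations -> per-channel importance -> importance-weighted quantization.
import numpy as np

def importance_from_activations(calib_activations: np.ndarray) -> np.ndarray:
    # Mean squared activation per input channel over the calibration tokens.
    # Shape: [n_tokens, n_channels] -> [n_channels].
    return (calib_activations ** 2).mean(axis=0)

def best_scale(weights: np.ndarray, importance: np.ndarray, n_levels: int = 16) -> float:
    # Pick the quantization scale that minimizes the *importance-weighted*
    # squared error, so the channels that matter most to the calibration data
    # lose the least precision.
    base = np.abs(weights).max() / (n_levels / 2 - 1) + 1e-12
    scale_best, err_best = base, float("inf")
    for scale in base * np.linspace(0.8, 1.2, 41):
        q = np.clip(np.round(weights / scale), -(n_levels // 2), n_levels // 2 - 1)
        err = float((importance * (weights - q * scale) ** 2).sum())
        if err < err_best:
            scale_best, err_best = scale, err
    return scale_best
```

Dropping the `importance` term reduces this to a plain error-minimizing scale search; the weighting is what lets diverse calibration data decide where precision is preserved.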
**Steps:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
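For reference, the same steps can be scripted. The sketch below is an assumption about a typical llama.cpp workflow, not this repository's exact script: it presumes a llama.cpp checkout with `convert.py` available and the `imatrix` and `quantize` tools built, and the file names and calibration text are placeholders.

```python
import subprocess

MODEL_DIR   = "BuRP_7B"            # assumed local copy of the base model
F16_GGUF    = "BuRP_7B-F16.gguf"
IMATRIX     = "imatrix.dat"
CALIBRATION = "calibration.txt"    # diverse calibration text, as noted above

# Base -> GGUF(F16)
subprocess.run(["python", "convert.py", MODEL_DIR,
                "--outtype", "f16", "--outfile", F16_GGUF], check=True)

# GGUF(F16) -> Imatrix-Data(F16)
subprocess.run(["./imatrix", "-m", F16_GGUF, "-f", CALIBRATION, "-o", IMATRIX],
               check=True)

# GGUF(F16) + Imatrix-Data -> GGUF(Imatrix-Quants), one file per quant type
for quant in ("Q4_K_M", "IQ4_XS"):  # any entry from the list below
    subprocess.run(["./quantize", "--imatrix", IMATRIX,
                    F16_GGUF, f"BuRP_7B-{quant}-imat.gguf", quant], check=True)
```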
**Quants:**
```python
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",