---
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- mistral
license: other
---

This repository hosts GGUF-IQ-Imatrix quants for [ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b](https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b).

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which helps reduce the loss in model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
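
To make the idea concrete, here is a minimal sketch of how a per-channel importance estimate can be derived from calibration activations. The array names, shapes, and data are illustrative assumptions only; llama.cpp's actual imatrix tool accumulates squared activations per tensor in C++, and this is not its implementation.

```python
import numpy as np

# Illustrative sketch: estimate how "important" each input channel of a
# weight matrix is by accumulating squared activations seen during
# calibration. Channels that fire strongly and often contribute more to
# the layer's output, so their weights should be rounded more carefully.

rng = np.random.default_rng(0)
hidden_size = 4096  # assumed hidden size of a 7b Mistral-class model
calibration_batches = [rng.standard_normal((32, hidden_size)) for _ in range(8)]

importance = np.zeros(hidden_size)
for activations in calibration_batches:
    # Mean squared activation per channel for this calibration batch.
    importance += (activations ** 2).mean(axis=0)
importance /= len(calibration_batches)

# During quantization, channels with higher importance are weighted more
# heavily when minimizing the rounding error of the corresponding weights.
print(importance[:5])
```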

For the imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The roleplay chats were added just to bring a bit more diversity to the data.

**Steps:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```

*Using the latest llama.cpp at the time.*

```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]
```
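
As a rough illustration of those steps, the sketch below drives the llama.cpp command-line tools from Python for each option in the list above. The file names are placeholders, and the binary names and flags varied between llama.cpp versions (later releases renamed them `llama-imatrix` and `llama-quantize`), so treat this as a hedged outline rather than the exact commands used for this repository.

```python
import subprocess

quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

model_f16 = "model-f16.gguf"      # placeholder: the F16 GGUF conversion of the base model
imatrix_file = "imatrix.dat"      # placeholder: output of the imatrix step
calibration = "imatrix-with-rp-format-data.txt"

# Generate the importance matrix from the calibration data (F16 pass).
subprocess.run(
    ["./imatrix", "-m", model_f16, "-f", calibration, "-o", imatrix_file],
    check=True,
)

# Produce one quant per requested type, feeding in the imatrix.
for quant in quantization_options:
    output = f"model-{quant}.gguf"
    subprocess.run(
        ["./quantize", "--imatrix", imatrix_file, model_f16, output, quant],
        check=True,
    )
```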

---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/im1tX0e_w19J0J0ZrBZhx.jpeg)

This model was merged using the SLERP (spherical linear interpolation) merge method.
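
For readers unfamiliar with SLERP in this context: each pair of corresponding weight tensors is interpolated along the arc between them rather than along a straight line, which preserves the magnitude characteristics of the weights better than plain averaging. A minimal sketch of the operation (my own illustration, not mergekit's code) might look like this:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(a_norm @ b_norm, -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two tensors
    if omega < eps:                   # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    sin_omega = np.sin(omega)
    merged = (np.sin((1 - t) * omega) / sin_omega) * a_flat \
           + (np.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape)

# t = 0 returns the first tensor, t = 1 the second.
w_merged = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))
```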

### Models Merged

The following models were included in the merge:
* [Nitral-AI/Prima-LelantaclesV6.69-7b](https://huggingface.co/Nitral-AI/Prima-LelantaclesV6.69-7b)
* [Nitral-AI/Prima-LelantaclesV6.31-7b](https://huggingface.co/Nitral-AI/Prima-LelantaclesV6.31-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Prima-LelantaclesV6.69-7b
        layer_range: [0, 32]
      - model: Nitral-AI/Prima-LelantaclesV6.31-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Prima-LelantaclesV6.69-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
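
Here `t` is the interpolation factor between the two endpoint models (roughly, 0 keeps one model's weights and 1 takes the other's); each `value` list is spread across the layer stack to form a per-layer gradient of `t`, with the `self_attn` and `mlp` filters applying different schedules to the attention and MLP weights, and `0.5` used as the default everywhere else.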