---
base_model:
- cgato/TheSpice-7b-v0.1.1
- ABX-AI/Laymonade-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: other
---
# GGUF / IQ / Imatrix for [Spicy-Laymonade-7B](https://huggingface.co/ABX-AI/Spicy-Laymonade-7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/bMW7mRqBS_xQJBXn-szWS.png)

**Why Importance Matrix?**

An **importance matrix**, at least in my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **imatrix** performs a calibration using a provided dataset, and testing has shown that semi-randomized calibration data can help preserve the more important segments as the compression is applied.

Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

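For reference, below is a minimal sketch of how an imatrix-guided IQ quant is typically produced with llama.cpp, driven from Python. The file names and the IQ3_XXS quant type are placeholders of my own, and the tool names and flags (`imatrix`, `quantize`; prefixed `llama-` in newer builds) vary between llama.cpp versions, so treat this as an illustration rather than the exact commands used for these files.

```python
# Illustrative sketch only: calibrating an importance matrix and applying it to
# an IQ quant with llama.cpp's example tools. Paths, quant type, and binary
# names are assumptions, not the exact commands used for this repository.
import subprocess

FP16_GGUF = "Spicy-Laymonade-7B-f16.gguf"      # hypothetical full-precision GGUF
CALIB_TXT = "imatrix.txt"                      # semi-random calibration text
IMATRIX_DAT = "imatrix.dat"                    # computed importance matrix
OUT_GGUF = "Spicy-Laymonade-7B-IQ3_XXS.gguf"   # hypothetical output quant

# 1) Calibrate: run the model over the calibration text to measure which
#    weights matter most, and save the importance matrix.
subprocess.run(
    ["./imatrix", "-m", FP16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX_DAT],
    check=True,
)

# 2) Quantize: apply a heavy IQ quant, letting the importance matrix steer
#    precision toward the weights the calibration flagged as important.
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX_DAT, FP16_GGUF, OUT_GGUF, "IQ3_XXS"],
    check=True,
)
```
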
# Spicy-Laymonade-7B

Well, we have Laymonade, so why not spice it up? This merge is a step toward creating a new 9B.

However, I did try this merge out on its own, and it seemed to work pretty well.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.

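To make the method concrete, here is a rough NumPy sketch of spherical linear interpolation between two weight tensors. It only illustrates the idea; it is not mergekit's actual implementation, and the function name and tolerance are my own.

```python
# Rough sketch of SLERP between two weight tensors (illustration, not mergekit's code).
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate from tensor `a` (t=0) toward tensor `b` (t=1)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = float(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    if 1.0 - abs(dot) < eps:
        # Nearly colinear directions: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    theta = np.arccos(dot)  # angle between the two weight directions
    w_a = np.sin((1.0 - t) * theta) / np.sin(theta)
    w_b = np.sin(t * theta) / np.sin(theta)
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)
```

In the configuration further down, the `t` values control this interpolation across the layer range and per parameter type (`self_attn` vs. `mlp`), with `0.5` landing midway between the two parents.
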
### Models Merged

The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: cgato/TheSpice-7b-v0.1.1
        layer_range: [0, 32]
      - model: ABX-AI/Laymonade-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.6, 0.2, 0.5]
    - filter: mlp
      value: [0.3, 0.7, 0.4, 0.8, 0.5]
    - value: 0.5
dtype: bfloat16
```
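
If you want to reproduce the merge from the config above, a minimal sketch of driving mergekit's `mergekit-yaml` CLI from Python follows; the config file name and output directory are placeholders of my own, and the optional `--cuda` flag depends on your environment and mergekit version.

```python
# Minimal sketch: re-running the merge from the YAML above via mergekit's CLI.
# "spicy-laymonade.yaml" and the output directory are placeholder names.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",           # mergekit's command-line entry point
        "spicy-laymonade.yaml",    # the SLERP configuration shown above
        "./Spicy-Laymonade-7B",    # output directory for the merged model
        "--cuda",                  # optional: perform the merge on GPU
    ],
    check=True,
)
```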