Lewdiculous committed on
Commit 029841e
1 Parent(s): 64f1740

Update README.md

Files changed (1): README.md +70 -0
README.md CHANGED

---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- roleplay
- gguf
pipeline_tag: text-generation
inference: false
license: cc-by-4.0
---

## GGUF-Imatrix quantizations for [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B/).

All credits belong to the author.

If you like these, also check out [FantasiaFoundry's GGUF-Quantization-Script](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script).

## What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. <br>
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006/) <br>
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information, which reduces the loss of model quality during quantization, especially when the calibration data is diverse. <br>
[[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384/)
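
As a rough sketch of how such a matrix is generated, llama.cpp ships an `imatrix` tool that collects activation statistics while the F16 model runs over a calibration file; the model and calibration file names below are placeholders:

```
# Collect activation statistics from the calibration data and
# write the resulting importance matrix to imatrix.dat.
./imatrix -m Erosumika-7B-F16.gguf -f calibration-data.txt -o imatrix.dat
```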

For the `--imatrix` data, the included `imatrix.dat` file was used.

Using [llama.cpp-b2327](https://github.com/ggerganov/llama.cpp/releases/tag/b2327/):

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
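
Expressed as commands, that pipeline looks roughly like the sketch below, using the `convert.py` and `quantize` tools from that llama.cpp release; the file names are placeholders, and IQ3_S stands in for whichever quant type is being produced:

```
# 1. Convert the original HF model to an F16 GGUF.
python convert.py ./Erosumika-7B --outtype f16 --outfile Erosumika-7B-F16.gguf

# 2. Quantize, weighting the quantization error with the importance matrix.
./quantize --imatrix imatrix.dat Erosumika-7B-F16.gguf Erosumika-7B-IQ3_S.gguf IQ3_S
```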

The new **IQ3_S** quant-option has proven to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.

If you want any specific quantization to be added, feel free to ask.

<!-- ## Model image: -->

## Original model information:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/AU4YsdxSuyVM0vVh27Cu-.png)

# Erosumika-7B

This is an attempt to create a model that combines multiple "established" 7Bs and a very small WIP private dataset with [Eros'](https://huggingface.co/tavtav/eros-7b-test) raw creative power. In terms of instruction formats, ChatML and Alpaca work best. The merge isn't purely ChatML, and as such, my previous attempts to integrate it with ChatML strings out of the box were Sisyphean and uninformed.
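
For reference, since Alpaca is one of the recommended formats, the standard Alpaca prompt template looks like this (`{prompt}` is a placeholder for the actual instruction):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```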

[GGUF](https://huggingface.co/localfultonextractor/Erosumika-7B-GGUF)

[exl2, 4bpw](https://huggingface.co/localfultonextractor/Erosumika-7B-4.0bpw-exl2)

[exl2, 6bpw](https://huggingface.co/localfultonextractor/Erosumika-7B-6.0bpw-exl2)

# Merge config.yml:

* I was asked to upload the merge configuration I used; sadly, the one for the 'sumitest02' model is lost to time, like tears in rain:

```
slices:
  - sources:
      - model: localfultonextractor/sumitest02
        layer_range: [0, 32]
      - model: tavtav/eros-7b-test
        layer_range: [0, 32]
merge_method: slerp
base_model: localfultonextractor/sumitest02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.2, 0.4, 0.55, 0.8]
    - filter: mlp
      value: [0.7, 0.3, 0.4, 0.3, 0]
    - value: 0.37 # fallback for rest of tensors
dtype: float16
```
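
For anyone wanting to reproduce a merge like this, a config in this format is normally fed to [mergekit](https://github.com/arcee-ai/mergekit)'s `mergekit-yaml` entry point; a minimal sketch, with the output directory name as a placeholder:

```
# Run the slerp merge described by the config above.
pip install mergekit
mergekit-yaml config.yml ./merged-model
```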