---
base_model:
- NousResearch/Yarn-Mistral-7b-128k
- Test157t/Kunocchini-1.1-7b
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- merge
- mergekit
pipeline_tag: text-generation
inference: false
---
# **GGUF-Imatrix quantizations for [Test157t/Kunocchini-1.2-7b-longtext](https://huggingface.co/Test157t/Kunocchini-1.2-7b-longtext/).**

SillyTavern preset files for the previous version are located [here](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/tree/main/ST%20presets).

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/Test157t/).

`Base⇢ GGUF(F16)⇢ Imatrix(F16)⇢ GGUF-Imatrix(Quants)`

The new **IQ3_S** quant merged today has shown to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.60` or higher.

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2254](https://github.com/ggerganov/llama.cpp/releases/tag/b2254).

For `--imatrix` data, `imatrix-Kunocchini-1.2-7b-longtext-F16.dat` was used.
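
The point of the importance matrix is that quantization error is not weighted uniformly: weights that the calibration data shows matter more are reproduced more faithfully. Here is a toy numpy sketch of that idea (my own illustration with a simple grid search over the scale; this is not llama.cpp's actual algorithm, and `quantize_block` is a hypothetical helper):

```python
import numpy as np

def quantize_block(x, importance, bits=4):
    """Round-to-nearest quantization of one weight block, with the scale
    chosen to minimize the importance-weighted squared error.

    Illustrative only -- llama.cpp's imatrix quantizers are more involved.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit
    base_scale = np.abs(x).max() / qmax        # naive max-abs scale
    best_err, best_scale, best_q = np.inf, base_scale, None
    # small grid search around the naive scale
    for scale in np.linspace(0.8 * base_scale, 1.2 * base_scale, 41):
        q = np.clip(np.round(x / scale), -qmax - 1, qmax)
        err = np.sum(importance * (x - q * scale) ** 2)
        if err < best_err:
            best_err, best_scale, best_q = err, scale, q
    return best_scale, best_q
```

Dequantization is then just `q * scale`; components with higher `importance` pull the chosen scale toward reproducing them accurately.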

# Original model information:

Thanks to @Epiculous for the dope model, help with LLM backends, and support overall.

I'd also like to thank @kalomaze for the dope sampler additions to ST.

@SanjiWatsuki, thank you very much for the help, and the model!

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/1M16DsWk39CtFz2SjmYGr.jpeg)

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.

### Models Merged

The following models were included in the merge:
* [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) + [Test157t/Kunocchini-1.1-7b](https://huggingface.co/Test157t/Kunocchini-1.1-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: Test157t/Kunocchini-1.1-7b
parameters:
  normalize: true
models:
  - model: NousResearch/Yarn-Mistral-7b-128k
    parameters:
      weight: 1
  - model: Test157t/Kunocchini-1.1-7b
    parameters:
      weight: 1
dtype: float16
```
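
To make the `dare_ties` method in the configuration concrete, here is a toy numpy sketch of what DARE (random drop of task-vector components with rescaling) plus TIES (majority sign election) does to flat parameter vectors. This is my own illustration under simplified assumptions, not mergekit's implementation, and `dare_ties_merge` is a hypothetical helper:

```python
import numpy as np

def dare_ties_merge(base, models, weights, drop_prob=0.9, seed=0):
    """Toy DARE-TIES merge over flat parameter vectors.

    Illustrative sketch only. weights are normalized, mirroring the
    `normalize: true` parameter in the mergekit config above.
    """
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()          # normalize: true
    deltas = []
    for m, w in zip(models, weights):
        delta = m - base                       # task vector vs. base model
        # DARE: randomly drop components, rescale survivors so the
        # expected value of the task vector is preserved
        keep = rng.random(delta.shape) >= drop_prob
        delta = np.where(keep, delta / (1.0 - drop_prob), 0.0)
        deltas.append(w * delta)
    deltas = np.stack(deltas)
    # TIES: elect a majority sign per parameter, drop disagreeing components
    elected = np.sign(deltas.sum(axis=0))
    agree = np.sign(deltas) == elected
    merged_delta = np.where(agree, deltas, 0.0).sum(axis=0)
    return base + merged_delta
```

With two models and equal `weight: 1`, each surviving, sign-agreeing task-vector component contributes half its (rescaled) value to the merged delta.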