grimjim committed on
Commit dced1d0
1 Parent(s): 1136138

Initial release

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +57 -0
  3. kukulemon-7B.Q8_0.gguf +3 -0
.gitattributes CHANGED
@@ -4,6 +4,7 @@
  *.bz2 filter=lfs diff=lfs merge=lfs -text
  *.ckpt filter=lfs diff=lfs merge=lfs -text
  *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
  *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,60 @@
  ---
+ base_model:
+ - grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
+ - KatyTheCutie/LemonadeRP-4.5.3
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
  license: cc-by-nc-4.0
  ---
+ # kukulemon-7B-GGUF
+
+ This is a Q8_0 GGUF quant of [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
+
+ A merge of two similar Kunoichi models with strong reasoning, hopefully resulting in a "dense" encoding of said reasoning, was in turn merged with a model targeting roleplay.
+
+ I've tested with ChatML prompts at temperature=1.1 and minP=0.03. The model itself supports Alpaca format prompts. The model claims a context length of 32K, but it seemed to lose coherence after 8K in my informal testing.
+
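As a concrete illustration of those settings, here is a minimal inference sketch. It assumes the llama-cpp-python bindings and a local copy of the Q8_0 file; any GGUF-capable runtime would work equally well, and the prompt text is purely an example.

```python
# Minimal sketch: load the Q8_0 GGUF with llama-cpp-python and sample with the
# settings reported above (ChatML prompt, temperature=1.1, min_p=0.03).
from llama_cpp import Llama

llm = Llama(
    model_path="kukulemon-7B.Q8_0.gguf",  # path to the file added in this commit
    n_ctx=8192,       # the card reports coherence loss past 8K despite the 32K claim
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a two-sentence scene set in a lemon grove.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.1,
    min_p=0.03,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```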
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ You can also download [GGUF-IQ-Imatrix quants courtesy of Lewdiculous](https://huggingface.co/Lewdiculous/kukulemon-7B-GGUF-IQ-Imatrix/).
+
+ There's also an [8.0bpw h8 exl2](https://huggingface.co/grimjim/kukulemon-7B-8.0bpw_h8_exl2) quant available.
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
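For intuition, here is a rough sketch of the per-tensor operation SLERP performs; it is illustrative only and not mergekit's actual implementation.

```python
# Illustrative only: spherical linear interpolation (SLERP) between two
# flattened weight tensors, blending along the arc between them rather than
# the straight line used by plain averaging.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from a (t=0) to b (t=1); a and b are 1-D flattened tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between tensors
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# e.g. blend two weight matrices halfway:
# merged = slerp(w_a.ravel(), w_b.ravel(), t=0.5).reshape(w_a.shape)
```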
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B)
+ * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
+         layer_range: [0, 32]
+       - model: KatyTheCutie/LemonadeRP-4.5.3
+         layer_range: [0, 32]
+ # or, the equivalent models: syntax:
+ # models:
+ merge_method: slerp
+ base_model: KatyTheCutie/LemonadeRP-4.5.3
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5 # fallback for rest of tensors
+ dtype: float16
+
+ ```
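To reproduce the merge from that configuration, something like the following should work; this is a sketch that assumes mergekit is installed and the YAML above has been saved as kukulemon.yml (the output path is arbitrary).

```python
# Sketch: drive the mergekit-yaml CLI from Python to rebuild the merged model.
import subprocess

subprocess.run(
    ["mergekit-yaml", "kukulemon.yml", "./kukulemon-7B-merged"],
    check=True,  # raise if the merge fails
)
```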
kukulemon-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cc5b7822475c8974aaea43d5dbb8ea86e2d93b4dff49fb54bd9f8db5779579e
+ size 7695857376
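This is the Git LFS pointer for the roughly 7.7 GB Q8_0 GGUF. A sketch of fetching the actual file via huggingface_hub, assuming the repo id grimjim/kukulemon-7B-GGUF (inferred from the card title and author):

```python
# Sketch: download the Q8_0 GGUF from the Hub instead of cloning the LFS repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="grimjim/kukulemon-7B-GGUF",  # assumed repo id
    filename="kukulemon-7B.Q8_0.gguf",
)
print(path)  # local cache path to the ~7.7 GB file
```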