---
base_model:
- grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-7B-8.0bpw-h8_exl2

This is an 8.0bpw h8 exl2 quant of kukulemon-7B. A merge of two similar models with strong reasoning, hopefully resulting in "dense" encoding of said reasoning, was in turn merged with a model targeting roleplay.

I've tested with ChatML prompts at temperature=1.1 and minP=0.03; the model itself also supports Alpaca-format prompts. The model claims a context length of 32K, but in informal testing it lost coherence after 8K. For maximum coherence, I prefer to stick with the 8.0bpw h8 exl2 or Q8_0 GGUF quants.
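
Below is a minimal sketch of that sampling setup using the exllamav2 Python API. The local model path and the prompt are placeholders, and the exact API surface may vary between exllamav2 releases:

```python
# Sketch: load this exl2 quant and sample with the settings above.
# Assumes a local download of this repo and a recent exllamav2 install.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "kukulemon-7B-8.0bpw-h8_exl2"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Sampler settings used in testing: temperature=1.1, minP=0.03.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.1
settings.min_p = 0.03

# ChatML prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nBriefly explain model merging.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 256))
```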

Alternative downloads:
- [iMatrix GGUF quants courtesy of Lewdiculous](https://huggingface.co/Lewdiculous/kukulemon-7B-GGUF-IQ-Imatrix)
- [fp16 safetensors](https://huggingface.co/grimjim/kukulemon-7B)
- [GGUF quants](https://huggingface.co/grimjim/kukulemon-7B-GGUF)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
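
For intuition, here is a minimal sketch of SLERP applied to a pair of weight tensors. This is the general spherical-interpolation formula, not mergekit's exact implementation, which additionally handles degenerate cases and the per-layer `t` schedule shown in the configuration below:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors,
    treated as flat vectors. t=0 returns a, t=1 returns b."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    theta = torch.acos(dot)  # angle between the two tensors
    if theta.abs() < 1e-4:   # nearly parallel: fall back to linear interpolation
        return ((1 - t) * a_flat + t * b_flat).view_as(a)
    sin_theta = torch.sin(theta)
    w_a = torch.sin((1 - t) * theta) / sin_theta
    w_b = torch.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).view_as(a)
```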

### Models Merged

The following models were included in the merge:
* [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
        layer_range: [0, 32]
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```
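
To reproduce a merge like this, the configuration can be saved as, say, `config.yml` and passed to mergekit's command line, e.g. `mergekit-yaml config.yml ./output-model-directory` (a sketch; see the mergekit README for current options). In mergekit's SLERP, `t=0` keeps the base model's weights and `t=1` takes the other model's; each five-value list above defines an interpolation gradient across the 32 layers for the matching tensors.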