---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- Replete-AI/WizardLM-2-7b
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
![LemonWizard](https://files.catbox.moe/rrosn1.png)

# Intent
The intent was to combine the excellent LemonadeRP-4.5.3 with WizardLM-2 to produce more effective uncensored content. While WizardLM-2 wouldn't balk at uncensored requests, it would still falter at actually producing the content, whereas LemonadeRP didn't have this issue. The results are pretty good, in my opinion. One known problem: if your response length is set too long, the model will start to speak for the user, but those responses usually disappear on swipes.

I had originally intended to keep this model private rather than release it. It's my first foray into doing merges at all, and I didn't want to release a subpar model. However, after some encouragement I've decided to unprivate it. I hope you all get some enjoyment out of it.

# Prompt - Alpaca

Using the Alpaca prompt format seems to give good results.
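For reference, a minimal sketch of the standard single-turn Alpaca template (wording and section markers as in the original Stanford Alpaca release; the `format_prompt` helper is just an illustration, not part of this model):

```python
# Standard single-turn Alpaca template; {instruction} is the user's request.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = format_prompt("Write a short greeting.")
```

Most frontends (e.g. SillyTavern) ship this as a built-in "Alpaca" preset, so you usually don't need to build it by hand.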

# Context Size - 8192

I haven't tested beyond this. The usual rule of thumb is that responses tend to become less coherent once you get up to 12k context, and things devolve completely by 16k.

# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
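For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, which preserves the magnitude structure of the weights better than plain averaging. A minimal NumPy sketch of the underlying formula (mergekit's actual implementation works per-tensor on the model weights and handles more edge cases):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns v0, t=1 returns v1."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two vectors
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Halfway between two orthogonal unit vectors stays on the unit sphere.
mid = slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

In the config below, `t` controls this interpolation per layer group: `t=0` keeps the base model's weights and `t=1` takes the other model's.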

### Models Merged

The following models were included in the merge:
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Replete-AI/WizardLM-2-7b](https://huggingface.co/Replete-AI/WizardLM-2-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Replete-AI/WizardLM-2-7b
  - model: KatyTheCutie/LemonadeRP-4.5.3
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: LemonadeRP for the input & output layers, WizardLM-2 in the middle layers
```