---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
- Henk717/echidna-tiefigther-25
- Undi95/Unholy-v2-13B
- ddh0/EstopianOrcaMaid-13b
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
- GPTQ
license: llama2
language:
- en
inference: false
---
# Model
This is the GPTQ 4-bit quantized version of SnowyRP.

[BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)

[GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ)

[GGUF](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GGUF)

Any future quantizations I am made aware of will be added.
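
As a rough sketch (assuming a recent `transformers` release with a GPTQ backend such as `auto-gptq` or `gptqmodel` plus `optimum` installed), this GPTQ repo should load like any other quantized checkpoint:

```python
# Minimal sketch of loading the 4-bit GPTQ build; requires a GPTQ backend
# (e.g. auto-gptq or gptqmodel) and optimum alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```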

## Merge Details

I just used highly ranked models to try to get a better result. I also made sure that model incest (repeatedly merging models that share the same ancestors) would not be a BIG problem by merging models that are pretty pure.

These models CAN and WILL produce X-rated or harmful content, since they are heavily uncensored in an attempt not to limit or degrade the model.

This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile and is great for general assistant work, RP and ERP, RPG-style RPs, and much more.

## Model Use

This model is very good... WITH THE RIGHT SETTINGS.
I personally use Mirostat mixed with dynamic temperature, plus epsilon cutoff and eta cutoff.
```
    Optimal Settings (so far)

  Mirostat Mode: 2
    tau: 2.95
    eta: 0.05

  Dynamic Temp
    min: 0.25
    max: 1.8

  Cut offs
    epsilon: 3
    eta: 3
```
Go to the [BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B) repo for more usage settings.
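
If you run the GGUF build with llama-cpp-python, the Mirostat part of these settings maps directly onto its sampling parameters. The sketch below uses a placeholder GGUF filename, and dynamic temperature plus the epsilon/eta cutoffs are left to frontends that implement them (e.g. text-generation-webui).

```python
# Hedged sketch: applying the Mirostat settings above via llama-cpp-python.
# The GGUF filename is a placeholder; check the GGUF repo for the real files.
from llama_cpp import Llama

llm = Llama(
    model_path="SnowyRP-FinalV1-L2-13B.Q5_K_M.gguf",  # placeholder filename
    n_ctx=4096,          # Llama-2 native context length
    n_gpu_layers=-1,     # offload all layers if VRAM allows
)

out = llm(
    "Write a short scene introduction for a fantasy RP.",
    max_tokens=256,
    mirostat_mode=2,     # Mirostat v2
    mirostat_tau=2.95,
    mirostat_eta=0.05,
)
print(out["choices"][0]["text"])
```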
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base.

### Models Merged

The following models were included in the merge:
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)
* [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)

### Configuration

The following YAML configuration was used to produce this model:

For P1 (the first intermediate merge, `Masterjp123/snowyrpp1`):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```

For P2 (the second intermediate merge, `Masterjp123/snowyrpp2`):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```

For the final merge:
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: ddh0/EstopianOrcaMaid-13b
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp1
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp2
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
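
To reproduce a merge like the final step above, mergekit can be driven from its `mergekit-yaml` CLI or from Python. The sketch below assumes the final config is saved locally as `merge.yaml` and that mergekit still exposes `MergeConfiguration`, `MergeOptions`, and `run_merge`; treat it as a starting point rather than an exact recipe.

```python
# Hedged sketch: running the final-merge YAML above with mergekit's Python API.
# Assumes the config is saved as merge.yaml; adjust paths for your setup.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./SnowyRP-merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```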