---
base_model:
- TheBloke/Llama-2-13B-fp16
- Masterjp123/SnowyRP-FinalV1-L2-13B
- Masterjp123/Snowyrp-V2B-P1
- sauce1337/BerrySauce-L2-13b
library_name: transformers
tags:
- mergekit
- merge

---
# Model
This is the BF16, unquantized version of SnowyRP V2 Beta, the first public beta model in the SnowyRP series!

[BF16](https://huggingface.co/Masterjp123/SnowyRP-V2-13B-L2_BetaTest)

NOTE: this model gave me issues when I tried to quantize it, so if you want a quantized version, I guess you'd have to ask TheBloke to do it; they do this stuff better than me anyway.
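
If you'd rather run the unquantized BF16 weights directly, a standard transformers load works. A minimal sketch (expect roughly 26 GB of memory for a 13B model in bf16):

```python
# Minimal sketch: load the unquantized BF16 weights with transformers.
# A 13B model in bf16 needs roughly 26 GB of (V)RAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Masterjp123/SnowyRP-V2-13B-L2_BetaTest"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available GPUs/CPU
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```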

## Merge Details
I originally made V2 Beta just as a test, but it turned out to be good, so I am quantizing it.

These models CAN and WILL produce X-rated or harmful content; they are heavily uncensored in an attempt not to limit or degrade the model.

This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile: great for general assistant work, RP and ERP, RPG-style RPs, and much more.

## Model Use

This model is very good... WITH THE RIGHT SETTINGS.
I personally use Mirostat mixed with dynamic temperature, plus epsilon cutoff and eta cutoff; a small code sketch of these settings follows below.
```
Optimal settings (so far)

Mirostat
  mode: 2
  tau: 2.95
  eta: 0.05

Dynamic temperature
  min: 0.25
  max: 1.8

Cutoffs
  epsilon: 3
  eta: 3
```
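
For reference, here is a minimal llama-cpp-python sketch of the Mirostat part of these settings. The GGUF path is a hypothetical placeholder (see the quantization note above); dynamic temperature and the epsilon/eta cutoffs are applied by front-ends such as text-generation-webui and are not shown here.

```python
# Minimal sketch, assuming a (hypothetical) local GGUF quantization of
# this model. Only the Mirostat settings from above are shown; dynamic
# temperature and the epsilon/eta cutoffs live in your front-end.
from llama_cpp import Llama

llm = Llama(model_path="./snowyrp-v2-13b.gguf", n_ctx=4096)

out = llm(
    "### Instruction:\nWrite a short greeting in character.\n\n### Response:\n",
    max_tokens=200,
    mirostat_mode=2,    # Mirostat v2, as in the settings above
    mirostat_tau=2.95,  # target "surprise" level
    mirostat_eta=0.05,  # learning rate
)
print(out["choices"][0]["text"])
```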

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base.
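
For intuition: TIES trims each fine-tune's task vector (its delta from the base) to the largest-magnitude entries, elects a per-parameter sign, and averages only the deltas that agree with that sign. A rough single-tensor sketch of the idea, not mergekit's actual implementation (per-model weights are omitted):

```python
import torch

def ties_merge(base: torch.Tensor,
               finetuned: list[torch.Tensor],
               density: float = 0.5) -> torch.Tensor:
    """Rough sketch of TIES (Yadav et al., 2023) for one weight tensor."""
    deltas = []
    for ft in finetuned:
        d = ft - base                              # task vector
        k = max(1, int(d.numel() * density))       # entries to keep
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        deltas.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(deltas)                  # [n_models, *shape]
    sign = stacked.sum(dim=0).sign()               # elected sign per entry
    agree = (stacked.sign() == sign) & (stacked != 0)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta
```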

### Models Merged

The following models were included in the merge (some indirectly, via the intermediate [Masterjp123/Snowyrp-V2B-P1](https://huggingface.co/Masterjp123/Snowyrp-V2B-P1) merge):
* [Masterjp123/SnowyRP-FinalV1-L2-13B](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)
* [posicube/Llama2-chat-AYB-13B](https://huggingface.co/posicube/Llama2-chat-AYB-13B)
* [Sao10K/Stheno-1.8-L2-13B](https://huggingface.co/Sao10K/Stheno-1.8-L2-13B)
* [ValiantLabs/ShiningValiantXS](https://huggingface.co/ValiantLabs/ShiningValiantXS)
* [sauce1337/BerrySauce-L2-13b](https://huggingface.co/sauce1337/BerrySauce-L2-13b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/Snowyrp-V2B-P1
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/SnowyRP-FinalV1-L2-13B
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: sauce1337/BerrySauce-L2-13b
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
The following configuration was used to produce the intermediate merge, [Masterjp123/Snowyrp-V2B-P1](https://huggingface.co/Masterjp123/Snowyrp-V2B-P1):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: Sao10K/Stheno-1.8-L2-13B
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: ValiantLabs/ShiningValiantXS
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: posicube/Llama2-chat-AYB-13B
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
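
As far as I know, either config can be reproduced by feeding the YAML straight to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model` (the file and output paths here are placeholders).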