Masterjp123 committed on
Commit 112a067
1 Parent(s): 9a5e7b8

Update README.md

Files changed (1)
  1. README.md +102 -11
README.md CHANGED
@@ -1,34 +1,125 @@
  ---
  base_model:
- - ddh0/EstopianOrcaMaid-13b
- - Masterjp123/snowyrpp2
  - TheBloke/Llama-2-13B-fp16
- - Masterjp123/snowyrpp1
  tags:
  - mergekit
  - merge
-
  ---
- # merged

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

  ## Merge Details
  ### Merge Method

- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.

  ### Models Merged

  The following models were included in the merge:
- * [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)
- * [Masterjp123/snowyrpp2](https://huggingface.co/Masterjp123/snowyrpp2)
- * [Masterjp123/snowyrpp1](https://huggingface.co/Masterjp123/snowyrpp1)

  ### Configuration

  The following YAML configuration was used to produce this model:

  ```yaml
  base_model:
    model:
@@ -68,4 +159,4 @@ slices:
      model:
        model:
          path: TheBloke/Llama-2-13B-fp16
- ```
 
  ---
  base_model:
+ - Riiid/sheep-duck-llama-2-13b
+ - IkariDev/Athena-v4
  - TheBloke/Llama-2-13B-fp16
+ - KoboldAI/LLaMA2-13B-Psyfighter2
+ - KoboldAI/LLaMA2-13B-Erebus-v3
+ - Henk717/echidna-tiefigther-25
+ - Undi95/Unholy-v2-13B
+ - ddh0/EstopianOrcaMaid-13b
  tags:
  - mergekit
  - merge
+ - not-for-all-audiences
+ - ERP
+ - RP
+ - Roleplay
+ - uncensored
+ license: llama2
+ language:
+ - en
  ---
+ # Model
+ This is the GPTQ 4-bit quantized version of SnowyRP.
+
+ [FP16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)

+ [GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ)
+
+ Any future quantizations I am made aware of will be added.

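Below is a minimal usage sketch (editorial illustration, not part of the committed README) for loading the GPTQ build with `transformers`. It assumes a GPTQ-capable backend (optimum + auto-gptq) and accelerate are installed and a CUDA GPU is available; the prompt template is only a placeholder, not a documented format for this model.

```python
# Minimal sketch: load the 4-bit GPTQ build and generate a reply.
# Assumptions: transformers with GPTQ support (optimum + auto-gptq), accelerate,
# and a CUDA GPU; the prompt below is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "### Instruction:\nIntroduce your character.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```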
  ## Merge Details
+ I just used highly ranked models to try to get a better result. I also made sure that model incest would not be a big problem by merging models that are fairly pure.
+
+ These models CAN and WILL produce X-rated or harmful content, as they are heavily uncensored in an attempt not to limit them.
+
  ### Merge Method

+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base.

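Editorial aside, not part of the commit: TIES builds a task vector for each model (its weights minus the base), trims the smallest-magnitude entries, elects a per-parameter sign by majority mass, and averages only the deltas that agree with that sign before adding the result back onto the base. A rough per-tensor sketch of that idea, under the simplifying assumptions that weights are plain numpy arrays and `density` is the fraction of entries kept:

```python
# Illustrative TIES-style merge for a single tensor (simplified from arXiv:2306.01708).
import numpy as np

def ties_merge(base, finetuned, weights, density=0.5):
    # Task vectors: what each fine-tune changed relative to the base, scaled by its weight.
    deltas = [w * (ft - base) for ft, w in zip(finetuned, weights)]
    # Trim: zero out all but the largest-magnitude `density` fraction of each delta.
    trimmed = []
    for d in deltas:
        k = int(d.size * (1 - density))
        if k > 0:
            cutoff = np.partition(np.abs(d).ravel(), k - 1)[k - 1]
            d = np.where(np.abs(d) >= cutoff, d, 0.0)
        trimmed.append(d)
    stacked = np.stack(trimmed)
    # Elect sign: majority sign by total mass per parameter.
    sign = np.sign(stacked.sum(axis=0))
    sign[sign == 0] = 1.0
    # Disjoint merge: average only the deltas that agree with the elected sign.
    agree = np.sign(stacked) == sign
    summed = np.where(agree, stacked, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + summed / count
```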
  ### Models Merged

  The following models were included in the merge:
+ * [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
+ * [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
+ * [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
+ * [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
+ * [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
+ * [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)
+ * [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)

  ### Configuration

  The following YAML configurations were used to produce this model:

+ For P1:
+ ```yaml
+ base_model:
+   model:
+     path: TheBloke/Llama-2-13B-fp16
+ dtype: bfloat16
+ merge_method: task_arithmetic
+ slices:
+ - sources:
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: TheBloke/Llama-2-13B-fp16
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: Undi95/Unholy-v2-13B
+     parameters:
+       weight: 1.0
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: Henk717/echidna-tiefigther-25
+     parameters:
+       weight: 0.45
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: KoboldAI/LLaMA2-13B-Erebus-v3
+     parameters:
+       weight: 0.33
+ ```
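Editorial aside, not part of the commit: P1 and P2 both use `task_arithmetic`, which simply adds each model's weighted delta from the base onto the base weights, so P1 is roughly base + 1.0·(Unholy-v2 − base) + 0.45·(echidna-tiefigther − base) + 0.33·(Erebus-v3 − base). A toy per-tensor sketch, assuming plain numpy arrays:

```python
# Toy task-arithmetic combine for a single tensor: base plus weighted task vectors.
import numpy as np

def task_arithmetic(base, finetuned, weights):
    merged = base.astype(np.float64).copy()
    for ft, w in zip(finetuned, weights):
        merged += w * (ft - base)  # each model contributes its delta, scaled by its weight
    return merged
```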
+
+ For P2:
+ ```yaml
+ base_model:
+   model:
+     path: TheBloke/Llama-2-13B-fp16
+ dtype: bfloat16
+ merge_method: task_arithmetic
+ slices:
+ - sources:
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: TheBloke/Llama-2-13B-fp16
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: KoboldAI/LLaMA2-13B-Psyfighter2
+     parameters:
+       weight: 1.0
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: Riiid/sheep-duck-llama-2-13b
+     parameters:
+       weight: 0.45
+   - layer_range: [0, 40]
+     model:
+       model:
+         path: IkariDev/Athena-v4
+     parameters:
+       weight: 0.33
+ ```
+
+ For the final merge:
  ```yaml
  base_model:
    model:
@@ -68,4 +159,4 @@ slices:
      model:
        model:
          path: TheBloke/Llama-2-13B-fp16
+ ```
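Editorial aside, not part of the commit: a merge like the ones above is normally reproduced by saving a config to a file and pointing mergekit's CLI at it. A minimal sketch, assuming mergekit is installed and provides the `mergekit-yaml` entry point; the file and output directory names are made up for illustration:

```python
# Sketch: rebuild a merge by handing a saved config to the mergekit CLI.
# Assumes `pip install mergekit` provides the `mergekit-yaml` command;
# "final-merge.yaml" and "./SnowyRP-rebuild" are hypothetical names.
import subprocess

subprocess.run(
    ["mergekit-yaml", "final-merge.yaml", "./SnowyRP-rebuild"],
    check=True,
)
```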