Casual-Autopsy committed on
Commit
21f89b9
1 Parent(s): af48ac0

Update README.md

Files changed (1)
  1. README.md +136 -12
README.md CHANGED
@@ -1,6 +1,13 @@
  ---
  base_model:
- - Casual-Autopsy/Umbral-Mind-2
  - Sao10K/L3-8B-Stheno-v3.1
  - cgato/L3-TheSpice-8b-v0.8.3
  - ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
@@ -11,20 +18,43 @@ library_name: transformers
  tags:
  - mergekit
  - merge
-
  ---
- # merge

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method

- This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [Casual-Autopsy/Umbral-Mind-2](https://huggingface.co/Casual-Autopsy/Umbral-Mind-2) as a base.

- ### Models Merged

  The following models were included in the merge:
  * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
  * [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
  * [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B)
@@ -32,13 +62,107 @@ The following models were included in the merge:
  * [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0)
  * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)

- ### Configuration

- The following YAML configuration was used to produce this model:

  ```yaml
  models:
- - model: Casual-Autopsy/Umbral-Mind-2
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      weight: 0.07
@@ -58,6 +182,6 @@ models:
    parameters:
      weight: 0.02
  merge_method: task_arithmetic
- base_model: Casual-Autopsy/Umbral-Mind-2
  dtype: bfloat16
- ```
 
  ---
  base_model:
+ - Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
+ - bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+ - Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B
+ - migtissera/Llama-3-8B-Synthia-v3.5
+ - tannedbum/L3-Nymeria-Maid-8B
+ - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+ - ChaoticNeutrals/Hathor_RP-v.01-L3-8B
+ - tannedbum/L3-Nymeria-8B
  - Sao10K/L3-8B-Stheno-v3.1
  - cgato/L3-TheSpice-8b-v0.8.3
  - ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B

  tags:
  - mergekit
  - merge
+ - not-for-all-audiences
+ - nsfw
+ - rp
+ - roleplay
+ - role-play
+ language:
+ - en
+ pipeline_tag: text-generation
  ---
+ # L3-Uncen-Merger-Omelette-RP-EXPERIMENTAL

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

+ # Merge Details
+
+ An experimental merge inspired by the merge recipe of [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B), combined with a merge technique known as merge densification ([grimjim/kunoichi-lemon-royale-v3-32K-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-v3-32K-7B)).
+
+ The recipe ended up being something I can only describe as making an omelette; hence the model name.
+
+ The models are first scrambled with Dare Ties to induce a bit of randomness, the Dare Ties merges are then merged with one another via SLERP to repair any holes caused by Dare Ties, and finally a bunch of high-creativity models are thrown into the merge through merge densification (Task Arithmetic).
+
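+ As a rough illustration of how these stages can be chained, here is a minimal sketch using mergekit's Python entry point on the configs listed under Secret Sauce below (the `mergekit-yaml` CLI can be used on the same files instead). File names are placeholders, and the exact option names follow mergekit's example notebook and may differ between versions:
+
+ ```python
+ # Hypothetical driver script: run each stage's YAML config in order with mergekit.
+ import yaml
+ import torch
+
+ from mergekit.config import MergeConfiguration
+ from mergekit.merge import MergeOptions, run_merge
+
+ STAGES = [
+     ("scrambled-egg-1.yaml", "./Scrambled-Egg-1"),
+     ("scrambled-egg-2.yaml", "./Scrambled-Egg-2"),
+     ("scrambled-egg-3.yaml", "./Scrambled-Egg-3"),
+     ("omelette-1.yaml", "./Omelette-1"),
+     ("omelette-2.yaml", "./Omelette-2"),
+     ("omelette-final.yaml", "./L3-Uncen-Merger-Omelette-8B"),
+ ]
+
+ for config_path, out_path in STAGES:
+     with open(config_path, "r", encoding="utf-8") as fp:
+         config = MergeConfiguration.model_validate(yaml.safe_load(fp))
+     run_merge(
+         config,
+         out_path,
+         options=MergeOptions(
+             cuda=torch.cuda.is_available(),  # merge on GPU when available
+             copy_tokenizer=True,             # carry the tokenizer into the output
+             lazy_unpickle=True,              # stream weights to keep RAM usage down
+         ),
+     )
+ ```
+
+ Note that the later configs reference the earlier stages by name, so each intermediate merge has to exist (locally or on the Hub) before the next stage runs.
+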
+ ## Merge Method
+
+ Dare Ties, SLERP, and Task Arithmetic
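+
+ For the last of those: task arithmetic builds the output by adding weighted "task vectors" (each model's delta from the base) on top of the base model. A toy sketch of the idea, operating on plain state dicts and not mergekit's actual implementation:
+
+ ```python
+ # Toy illustration of task arithmetic:
+ # merged = base + sum_i weight_i * (model_i - base)
+ import torch
+
+ def task_arithmetic(base: dict, models: list, weights: list) -> dict:
+     """base and each entry of models are state dicts of torch.Tensor."""
+     merged = {}
+     for name, base_param in base.items():
+         delta = torch.zeros_like(base_param)
+         for model_state, weight in zip(models, weights):
+             # Each model contributes its delta from the base, scaled by a
+             # per-model weight; the final recipe below uses many small
+             # weights in the 0.02-0.07 range.
+             delta += weight * (model_state[name] - base_param)
+         merged[name] = base_param + delta
+     return merged
+ ```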

+ ## Models Merged

  The following models were included in the merge:
+ * [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
+ * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
+ * [Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B](https://huggingface.co/Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B)
+ * An unreleased psychology merger of mine
+ * [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
+ * [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
+ * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
+ * [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
+ * [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
  * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
  * [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
  * [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B)

  * [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0)
  * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)

+ # Secret Sauce
+
+ The following YAML configurations were used to produce this model:
+
+ ## Scrambled-Egg-1
+
+ ```yaml
+ models:
+ - model: Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B
+ - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
+   parameters:
+     density: 0.45
+     weight: 0.33
+ - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+   parameters:
+     density: 0.75
+     weight: 0.33
+ merge_method: dare_ties
+ base_model: Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
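+
+ In this and the other dare_ties stages, `density` is the fraction of each model's delta parameters that survive DARE's random drop, and `weight` scales that model's contribution. A toy sketch of the drop-and-rescale step on a plain tensor, assuming nothing about mergekit's internals (the TIES sign-election step is omitted):
+
+ ```python
+ import torch
+
+ def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
+     """Toy DARE step: randomly keep `density` of the delta parameters and
+     rescale the survivors by 1/density so the expected update is unchanged."""
+     mask = torch.bernoulli(torch.full_like(delta, density))
+     return delta * mask / density
+ ```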
+
+ ## Scrambled-Egg-2
+
+ ```yaml
+ models:
+ - model: [An unreleased psychology merger of mine]
+ - model: migtissera/Llama-3-8B-Synthia-v3.5
+   parameters:
+     density: 0.35
+     weight: 0.25
+ - model: tannedbum/L3-Nymeria-Maid-8B
+   parameters:
+     density: 0.65
+     weight: 0.25
+ merge_method: dare_ties
+ base_model: [An unreleased psychology merger of mine]
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
+
+ ## Scrambled-Egg-3
+
+ ```yaml
+ models:
+ - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+ - model: tannedbum/L3-Nymeria-8B
+   parameters:
+     density: 0.5
+     weight: 0.35
+ - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
+   parameters:
+     density: 0.4
+     weight: 0.2
+ merge_method: dare_ties
+ base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
+
+ ## Omelette-1
+
+ ```yaml
+ models:
+ - model: Casual-Autopsy/Scrambled-Egg-1
+ - model: Casual-Autopsy/Scrambled-Egg-3
+ merge_method: slerp
+ base_model: Casual-Autopsy/Scrambled-Egg-1
+ parameters:
+   t:
+     - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
+   embed_slerp: true
+ dtype: bfloat16
+ ```
+
+ ## Omelette-2
+
+ ```yaml
+ models:
+ - model: Casual-Autopsy/Omelette-1
+ - model: Casual-Autopsy/Scrambled-Egg-2
+ merge_method: slerp
+ base_model: Casual-Autopsy/Omelette-1
+ parameters:
+   t:
+     - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
+   embed_slerp: true
+ dtype: bfloat16
+ ```
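+
+ In the two SLERP stages, `t` controls how far each slice moves from the base model toward the other model (t=0 keeps the base, t=1 takes the other), and the list of values spreads different factors across the layer stack, so Omelette-1's middle layers lean hardest toward Scrambled-Egg-3 while Omelette-2's outer layers lean toward Scrambled-Egg-2. A toy sketch of spherical interpolation between two weight tensors for a single `t`, not mergekit's implementation:
+
+ ```python
+ import torch
+
+ def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
+     """Toy spherical interpolation: t=0 returns (roughly) a, t=1 returns b."""
+     a_flat = a.flatten().float()
+     b_flat = b.flatten().float()
+     a_unit = a_flat / (a_flat.norm() + eps)
+     b_unit = b_flat / (b_flat.norm() + eps)
+     omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
+     sin_omega = torch.sin(omega)
+     if sin_omega.abs() < eps:
+         # Nearly parallel tensors: fall back to plain linear interpolation.
+         return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
+     coef_a = torch.sin((1.0 - t) * omega) / sin_omega
+     coef_b = torch.sin(t * omega) / sin_omega
+     return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)
+ ```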
+
+ ## L3-Uncen-Merger-Omelette-8B-EXPERIMENTAL

  ```yaml
  models:
+ - model: Casual-Autopsy/Omelette-2
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      weight: 0.07
 
    parameters:
      weight: 0.02
  merge_method: task_arithmetic
+ base_model: Casual-Autopsy/Omelette-2
  dtype: bfloat16
+ ```
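+
+ Once merged, the result loads like any other Llama-3 8B checkpoint with transformers. A minimal sketch, assuming the final merge is published under the card's title (the repo id below is an assumption; substitute the actual one):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Assumed repo id taken from the card title; replace with the actual repository.
+ model_id = "Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-EXPERIMENTAL"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # matches the dtype used in the merge configs
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a creative roleplay partner."},
+     {"role": "user", "content": "Describe the tavern we just walked into."},
+ ]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output = model.generate(
+     input_ids, max_new_tokens=256, do_sample=True, temperature=0.9, top_p=0.95
+ )
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```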