Casual-Autopsy committed
Commit: a39b653
Parent: ba87b6b

Update README.md

Files changed (1): README.md (+368 -6)

---
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
- Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
- tannedbum/L3-Nymeria-Maid-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- tannedbum/L3-Nymeria-8B
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- migtissera/Llama-3-8B-Synthia-v3.5
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- v000000/L3-8B-Poppy-Sunspice
- Magpie-Align/Llama-3-8B-WizardLM-196K
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- invisietch/EtherealRainbow-v0.3-8B
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- ResplendentAI/Nymph_8B
---

<img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">

Image by ろ47

# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
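
If you want to rerun one of the merge steps below yourself, here is a minimal sketch using mergekit's Python API (`MergeConfiguration` and `run_merge`, per mergekit's README); the config path and output directory are placeholders, not files shipped with this repo. The `mergekit-yaml config.yaml ./merged-model` CLI is the equivalent one-liner.

```python
# Minimal sketch: apply one of the YAML configs below with mergekit.
# Assumes `pip install mergekit`; "config.yaml" and "./merged-model"
# are placeholder paths.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",                    # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if possible
        copy_tokenizer=True,             # also copy a tokenizer to the output
    ),
)
```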

## Merge Details

The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as (but not limited to):
- Mental illness
- Self-harm
- Trauma
- Suicide

I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably.

If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.

### Usage Info

This model is meant to be used with the asterisks/quotes RP format; any other format is likely to cause issues. A minimal prompting sketch follows.
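
To make the expected format concrete, here is a sketch using the `transformers` chat template. The repo id, character, system prompt, and sampler settings are illustrative assumptions, not official recommendations.

```python
# Minimal sketch of the asterisks/quotes RP format with transformers.
# The repo id and generation settings are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [
    # Actions go in *asterisks*, dialogue in "quotes" -- the format this card expects.
    {"role": "system", "content": 'You are Mira. Narrate actions in *asterisks* and speech in "quotes".'},
    {"role": "user", "content": '*I sit down across from her.* "Rough day?"'},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```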

### Quants

### Models Merged

The following models were included in the merge:
* [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B)
* [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)
* [v000000/L3-8B-Poppy-Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice)
* [Magpie-Align/Llama-3-8B-WizardLM-196K](https://huggingface.co/Magpie-Align/Llama-3-8B-WizardLM-196K)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)
* [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)

## Secret Sauce

The following YAML configurations were used to produce this model:

### Umbral-Mind-1-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-1-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-1

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-1-pt.1
  - model: Casual-Autopsy/Umbral-Mind-1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
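
A note on the list-valued `t` above: as I understand mergekit's gradient syntax (an assumption on my part, not something this card states), the anchor values are interpolated linearly across the layer stack, so each of the model's 32 layers gets its own blend factor, with `t = 0` keeping the base model and `t = 1` taking the other. A rough illustrative sketch of that mapping:

```python
# Rough sketch (assumption about mergekit's gradient syntax): a list-valued
# parameter acts as anchor points, linearly interpolated across the layers.
import numpy as np

t_anchors = [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
num_layers = 32  # Llama-3-8B has 32 transformer layers

# Map the 13 anchor values onto 32 layer positions.
xs = np.linspace(0, len(t_anchors) - 1, num_layers)
t_per_layer = np.interp(xs, np.arange(len(t_anchors)), t_anchors)

for layer, t in enumerate(t_per_layer):
    print(f"layer {layer:2d}: t = {t:.3f}")
```

Swapping the 0.3/0.7 anchors between the `self_attn` and `mlp` filters means attention and MLP blocks at the same depth are pulled toward opposite parents.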

### Umbral-Mind-2-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: v000000/L3-8B-Poppy-Sunspice
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-2-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-2

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-2-pt.1
  - model: Casual-Autopsy/Umbral-Mind-2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Umbral-Mind-3-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-3-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-3

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-3-pt.1
  - model: Casual-Autopsy/Umbral-Mind-3-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-3-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Umbral-Mind-4

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-1
  - model: Casual-Autopsy/Umbral-Mind-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16
```

### Umbral-Mind-5

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-4
  - model: Casual-Autopsy/Umbral-Mind-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-4
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: bfloat16
```

### Umbral-Mind-6

```yaml
models:
  - model: mergekit-community/Umbral-Mind-5
  - model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: mergekit-community/Umbral-Mind-5
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
  embed_slerp: true
dtype: bfloat16
```

### Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B

```yaml
models: