---
base_model:
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Casual-Autopsy/Umbral-Mind-6
- ResplendentAI/Nymph_8B
library_name: transformers
tags:
- mergekit
- merge
---
ToDo: Model card (Insomnia sapped my motivation; if I haven't written it by 7/12 8:00 AM EST, start a discussion and I'll get the notif.)
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [Casual-Autopsy/Umbral-Mind-6](https://huggingface.co/Casual-Autopsy/Umbral-Mind-6) as the base.
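For intuition: task arithmetic treats each fine-tuned model as a "task vector" (its parameter delta from the base) and adds a weighted sum of those vectors back onto the base weights. The four-element `weight` lists in the configuration below use mergekit's gradient syntax, so the effective weight is interpolated across the layer stack rather than held constant. Here is a minimal sketch of the core arithmetic on plain state dicts, with one scalar weight per model standing in for the per-layer value; the function name and signature are illustrative, not mergekit's API:

```python
import torch

def task_arithmetic_merge(base, donors, weights):
    """merged = base + sum_i w_i * (donor_i - base), applied tensor by tensor."""
    merged = {}
    for name, base_param in base.items():
        # Each donor contributes a weighted "task vector": its delta from the base.
        delta = torch.zeros_like(base_param)
        for state_dict, w in zip(donors, weights):
            delta += w * (state_dict[name] - base_param)
        merged[name] = base_param + delta
    return merged
```

With `normalize: false`, the weighted deltas are added as-is rather than rescaled by the sum of the weights, which is why the small alternating ±0.01/±0.02 weights translate directly into subtle per-layer nudges on top of the base model.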
### Models Merged
The following models were included in the merge:
- [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
- [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
- [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
- [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-6
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
  - model: ResplendentAI/Nymph_8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-6
parameters:
  normalize: false
dtype: bfloat16
```
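To reproduce the merge, save the YAML above to a file (say `config.yaml`; the filename is arbitrary) and run it through mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged`. The output directory is an ordinary transformers checkpoint. A minimal loading sketch, assuming `./merged` is that hypothetical output path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged" is the hypothetical output directory passed to mergekit-yaml above.
model = AutoModelForCausalLM.from_pretrained("./merged", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("./merged")

inputs = tokenizer("Hello there!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading with `torch_dtype=torch.bfloat16` matches the `dtype: bfloat16` the merge was written in, avoiding an implicit upcast to fp32.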