---
base_model:
- Azazelle/MN-Halide-12b-v1.0
- benhaotang/nemo-math-science-philosophy-12B
- FallenMerick/MN-Chunky-Lotus-12B
- FallenMerick/MN-Violet-Lotus-12B
- GalrionSoftworks/Canidori-12B-v1
- GalrionSoftworks/Pleiades-12B-v1
- inflatebot/MN-12B-Mag-Mell-R1
- Nohobby/MN-12B-Siskin-v0.2
- ThijsL202/MadMix-Unleashed-12B
- Trappu/Abomination-merge-attempt-12B
- VongolaChouko/Starcannon-Unleashed-12B-v1.0
library_name: transformers
tags:
- mergekit
- merge
- bfloat16
- safetensors
- 12b
- chat
- creative
- roleplay
- conversational
- creative-writing
- not-for-all-audiences
language:
- en
- ru
---
# AbominationScience-12B-v4
>When the choice is not random.
![AbominationScienceLogo256.png](https://cdn-uploads.huggingface.co/production/uploads/673125091920e70ac26c8a2e/mrBCmxkidQ9KNQsRO_fOy.png)
This is an interesting merge of **11 cool models**, created using [mergekit](https://github.com/arcee-ai/mergekit).
Enjoy exploring :)
## Merge Details
### Method
This model was built through a multistep merge process: intermediate merges were created first, then remerged with several model variations to get the best result.
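Each YAML document in the configuration below is a separate mergekit pass, with the output of one stage feeding the next. As a rough sketch of how a single stage can be run through mergekit's Python API (the file name `stage1.yaml` and the output path are placeholders, not files shipped with this repo):
```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder: one stage of the YAML configuration below, saved to its own file.
with open("stage1.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="out/stage1",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```
The equivalent command line is `mergekit-yaml stage1.yaml out/stage1`.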
### Models
The following models were included in the merge:
* [Azazelle/MN-Halide-12b-v1.0](https://huggingface.co/Azazelle/MN-Halide-12b-v1.0)
* [benhaotang/nemo-math-science-philosophy-12B](https://huggingface.co/benhaotang/nemo-math-science-philosophy-12B)
* [FallenMerick/MN-Chunky-Lotus-12B](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B)
* [FallenMerick/MN-Violet-Lotus-12B](https://huggingface.co/FallenMerick/MN-Violet-Lotus-12B)
* [GalrionSoftworks/Canidori-12B-v1](https://huggingface.co/GalrionSoftworks/Canidori-12B-v1)
* [GalrionSoftworks/Pleiades-12B-v1](https://huggingface.co/GalrionSoftworks/Pleiades-12B-v1)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [Nohobby/MN-12B-Siskin-v0.2](https://huggingface.co/Nohobby/MN-12B-Siskin-v0.2)
* [ThijsL202/MadMix-Unleashed-12B](https://huggingface.co/ThijsL202/MadMix-Unleashed-12B)
* [Trappu/Abomination-merge-attempt-12B](https://huggingface.co/Trappu/Abomination-merge-attempt-12B)
* [VongolaChouko/Starcannon-Unleashed-12B-v1.0](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0)
### Configuration
The following YAML configurations were used to produce this model:
```yaml
# AbominationScience
# It's a good model, so I used it as the base for this merge.
models:
  - model: Trappu/Abomination-merge-attempt-12B
  - model: benhaotang/nemo-math-science-philosophy-12B
merge_method: slerp
base_model: Trappu/Abomination-merge-attempt-12B
dtype: bfloat16
parameters:
  t: [0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8]
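# Note: each stage below is a separate mergekit run; the documents are
# separated with "---". For slerp, t=0 returns base_model and t=1 returns
# the other model; a list of t values becomes a gradient interpolated
# across the layer stack.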
---
# SCUMCL
models:
  - model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
  - model: FallenMerick/MN-Chunky-Lotus-12B
merge_method: slerp
base_model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
dtype: bfloat16
parameters:
  t: [0.7, 0.3, 0.7, 0.3, 0.7, 0.3, 0.7]
---
# SISMMU
models:
  - model: Nohobby/MN-12B-Siskin-v0.2
  - model: ThijsL202/MadMix-Unleashed-12B
merge_method: slerp
base_model: Nohobby/MN-12B-Siskin-v0.2
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
---
# PLECAD
models:
  - model: GalrionSoftworks/Pleiades-12B-v1
  - model: GalrionSoftworks/Canidori-12B-v1
merge_method: slerp
base_model: GalrionSoftworks/Pleiades-12B-v1
dtype: bfloat16
parameters:
  t: [0.7, 0.3, 0.7, 0.3, 0.7, 0.3, 0.7]
---
# Positive-12B-v1 and Negative-12B-v1 are the basis of diversity for the base model.
# I've lost the exact config, but it was most likely a slerp like the one in SCUMCL/SISMMU/PLECAD.
# Positive-12B-v1 = SCUMCL + SISMMU.
# Negative-12B-v1 = PLECAD + AbominationScience.
# AbominationScience-12B-v2
models:
  - model: F:/Positive-12B-v1
    parameters:
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
  - model: F:/Negative-12B-v1
    parameters:
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: dare_ties
base_model: F:/AbominationScience
dtype: bfloat16
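# Note on dare_ties: a fraction (1 - density) of each model's delta from the
# base is randomly dropped and the remainder rescaled, then signs are resolved
# as in TIES. The mirrored density/weight ramps above make Positive and
# Negative trade dominance back and forth across the layer stack.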
---
# AbominationScience-12B-v3
# Della merge with a good base to form an interesting core.
models:
  - model: F:/AbominationScience
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/AbominationScience-12B-v2
dtype: bfloat16
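# Note on della: drop probabilities are adapted to parameter magnitude;
# epsilon sets how far they may deviate from (1 - density), and lambda
# scales the merged deltas before they are added back to the base.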
---
# AbominationScience-12B-v4
# Final step: shift the model toward three very good bases.
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: FallenMerick/MN-Violet-Lotus-12B
  - model: Azazelle/MN-Halide-12b-v1.0
merge_method: model_stock
base_model: F:/AbominationScience-12B-v3
dtype: bfloat16
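# Note on model_stock: interpolation weights are derived from the geometry
# (angles) between the listed models relative to the base, so the result is
# pulled toward their common center rather than a hand-tuned mix.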
```
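To try the result, here is a minimal sketch with 🤗 Transformers. The repository id below is inferred from this card and the sampling settings are just reasonable defaults, so adjust both to taste:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from this card; verify before use.
model_id = "Khetterman/AbominationScience-12B-v4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Tell me a short story about a curious android."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```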
>My thanks to the authors of the original models; your work is incredible. Have a good time 🖤