![Shadow clown logo](/CorticalStack/shadow-clown-7B-dare/resolve/main/shadow_clown.png)
# shadow-clown-7B-dare

**shadow-clown-7B-dare** is a DARE-TIES merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties
See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the DARE method.
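The core DARE operation is simple: each delta between a fine-tuned model and its base is randomly dropped, and the survivors are rescaled so the merge stays unbiased in expectation. A toy sketch over flat weight lists (the function name and list representation are illustrative, not mergekit's actual implementation):

```python
import random

def dare(base, finetuned, density=0.52, seed=0):
    """Toy sketch of DARE over flat weight lists.

    Each delta (fine-tuned minus base) is kept with probability
    `density` and rescaled by 1/density, so the expected merged
    weight matches the fine-tuned weight.
    """
    rng = random.Random(seed)
    merged = []
    for b, f in zip(base, finetuned):
        delta = f - b
        if rng.random() < density:              # keep this delta entry
            merged.append(b + delta / density)  # rescale the survivor
        else:                                   # drop it entirely
            merged.append(b)
    return merged
```

With `density: 0.52`, as in the configuration below, roughly half of each model's delta parameters survive, yet performance is largely preserved because the rescaling compensates for the dropped mass.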
## 🧩 Configuration
```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
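The `dare_ties` method combines DARE's delta dropout with TIES-style sign election: after sparsifying each expert's delta, only deltas agreeing with the weighted majority sign contribute to the merged weight. A heavily simplified sketch of that idea (function name, flat-list representation, and the agreement normalization are illustrative assumptions, not mergekit's exact algorithm):

```python
import random

def dare_ties_merge(base, experts, density, weights, seed=0):
    """Illustrative sketch of a DARE-TIES merge over flat weight lists.

    base    : base-model weights
    experts : list of expert weight lists, same length as base
    density : fraction of each delta retained (0.52 in the config above)
    weights : per-expert mixing weights (0.4 / 0.2 / 0.3 in the config)
    """
    rng = random.Random(seed)
    merged = []
    for i, b in enumerate(base):
        # DARE step: sparsify each expert's delta, rescale by 1/density
        deltas = []
        for e in experts:
            d = e[i] - b
            deltas.append(d / density if rng.random() < density else 0.0)
        # TIES step: elect the weighted-majority sign, keep agreeing deltas
        total = sum(w * d for w, d in zip(weights, deltas))
        sign = 1.0 if total >= 0 else -1.0
        agreed = [(w, d) for w, d in zip(weights, deltas) if d * sign > 0]
        if agreed:
            b = b + sum(w * d for w, d in agreed) / sum(w for w, _ in agreed)
        merged.append(b)
    return merged
```

The sign election is what lets the three experts above be combined without their conflicting parameter updates cancelling each other out.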