---
license: apache-2.0
tags:
- merge
- mergekit
- bardsai/jaskier-7b-dpo-v5.6
- mlabonne/AlphaMonarch-7B
- mlabonne/NeuralMonarch-7B
- macadeliccc/MBX-7B-v3-DPO
---

*Pastiche crown clown logo*

# pastiche-crown-clown-7B-dare

pastiche-crown-clown-7B-dare is a DARE merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B)
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)

See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.

## 🧩 Configuration

```yaml
models:
  - model: bardsai/jaskier-7b-dpo-v5.6
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.53
      weight: 0.2
  - model: mlabonne/NeuralMonarch-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: macadeliccc/MBX-7B-v3-DPO
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  int8_mask: true
dtype: bfloat16
```
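For intuition on what the `density` parameter controls: DARE operates on the *delta* between each fine-tuned model and the base. It randomly drops a fraction of the delta parameters (keeping roughly a `density` proportion, here 0.53) and rescales the survivors by `1/density` so the expected delta is preserved. A minimal NumPy sketch of that drop-and-rescale step (illustrative only, not the mergekit implementation; `dare_drop_and_rescale` is a hypothetical helper name):

```python
import numpy as np

def dare_drop_and_rescale(base, finetuned, density, seed=0):
    """Randomly drop a (1 - density) fraction of delta parameters,
    rescale the kept ones by 1/density, and add them back to base.
    Illustrative sketch of the DARE step, not mergekit's actual code."""
    rng = np.random.default_rng(seed)
    delta = finetuned - base
    mask = rng.random(delta.shape) < density  # keep ~density of entries
    return base + (delta * mask) / density

# Toy example: base of zeros, fine-tuned model of ones (delta is all ones).
base = np.zeros(1000)
finetuned = np.ones(1000)
merged = dare_drop_and_rescale(base, finetuned, density=0.53)
# Kept entries become 1/0.53, dropped entries stay 0;
# on average the merged delta still matches the full delta.
```

In the full `dare_ties` merge, each sparsified delta is additionally sign-consensus-merged (TIES) and combined with the per-model `weight` values from the config before being added to the base model.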