---
tags:
- merge
- mergekit
- GGUF
---

# neural-Kunoichi2-7B-slerp-GGUF quantized version

This is the quantized (GGUF) version of [neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp).

## [neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp)

neural-Kunoichi2-7B-slerp is a merge of the following models using LazyMergekit:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralPipe-7B-ties](https://huggingface.co/mlabonne/NeuralPipe-7B-ties)

## 🧩 Configuration

```yaml
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: mlabonne/NeuralPipe-7B-ties
        layer_range: [0, 32]
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
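
## 💻 Usage

The YAML above is the mergekit configuration that produced the base model; this repository holds GGUF quantizations of it. Below is a minimal sketch of loading one of those quantizations with llama-cpp-python. The exact `.gguf` filename, prompt format, and generation settings are assumptions for illustration, not taken from this repository.

```python
# Minimal sketch: run a GGUF quantization of neural-Kunoichi2-7B-slerp
# with llama-cpp-python. Filename and prompt template are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-kunoichi2-7b-slerp.Q4_K_M.gguf",  # assumed filename; use the quant you downloaded
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if your build supports it
)

output = llm(
    "### Instruction:\nWrite a haiku about merging models.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```

Pick the quantization that fits your hardware: smaller quants (e.g. Q4_K_M) trade some quality for lower memory use, while larger ones (e.g. Q6_K, Q8_0) stay closer to the original bfloat16 weights.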