---
base_model:
- Nitral-AI/Poppy_Porpoise-1.4-L3-8B
- maldv/badger-kappa-llama-3-8b
- Sao10K/L3-8B-Stheno-v3.2
- openlynn/Llama-3-Soliloquy-8B-v2
- failspy/Llama-3-8B-Instruct-MopeyMule
- Hastagaras/Jamet-8B-L3-MK.II
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the `breadcrumbs_ties` merge method, with [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) as the base model.

### Models Merged

The following models were included in the merge:
* [Nitral-AI/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B)
* [maldv/badger-kappa-llama-3-8b](https://huggingface.co/maldv/badger-kappa-llama-3-8b)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [Hastagaras/Jamet-8B-L3-MK.II](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.II)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: failspy/Llama-3-8B-Instruct-MopeyMule
  - model: maldv/badger-kappa-llama-3-8b # 7/10
    parameters:
      density: 0.4
      weight: 0.14
  - model: Nitral-AI/Poppy_Porpoise-1.4-L3-8B # 7/10
    parameters:
      density: 0.5
      weight: 0.18
  - model: openlynn/Llama-3-Soliloquy-8B-v2 # 8/10
    parameters:
      density: 0.5
      weight: 0.18
  - model: Hastagaras/Jamet-8B-L3-MK.II # 6/10
    parameters:
      density: 0.3
      weight: 0.1
  - model: Sao10K/L3-8B-Stheno-v3.2 # 9/10
    parameters:
      density: 0.6
      weight: 0.23
merge_method: breadcrumbs_ties
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
parameters:
  normalize: false
  rescale: true
  gamma: 0.01
dtype: float16
```
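### Usage

Below is a minimal, untested inference sketch using 🤗 Transformers. The repo id `your-username/merge` is a placeholder for wherever this merge is actually hosted, and the prompt and generation settings are only illustrative; it assumes the tokenizer ships with the standard Llama 3 Instruct chat template.

```python
# Minimal inference sketch for this merge.
# Assumes: placeholder repo id, standard Llama 3 chat template in the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merge"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short, melancholy haiku."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
)
# Strip the prompt tokens before decoding.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same configuration above can be re-run with mergekit's `mergekit-yaml` CLI to reproduce the merge from the listed source models.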