---
base_model:
- emnakamura/llama-3-MagicDolphin-8B
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- flammenai/Mahou-1.2-llama3-8B
- nbeerbower/llama-3-spicy-abliterated-stella-8B
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- zeroblu3/NeuralPoppy-EVO-L3-8B
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- NousResearch/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# Llama-3-RedElixir-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.

### Models Merged

The following models were included in the merge:
* [emnakamura/llama-3-MagicDolphin-8B](https://huggingface.co/emnakamura/llama-3-MagicDolphin-8B)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [flammenai/Mahou-1.2-llama3-8B](https://huggingface.co/flammenai/Mahou-1.2-llama3-8B)
* [nbeerbower/llama-3-spicy-abliterated-stella-8B](https://huggingface.co/nbeerbower/llama-3-spicy-abliterated-stella-8B)
* [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
* [zeroblu3/NeuralPoppy-EVO-L3-8B](https://huggingface.co/zeroblu3/NeuralPoppy-EVO-L3-8B)
* [hf-100/Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: NousResearch/Meta-Llama-3-8B
  - layer_range: [0, 32]
    model: nbeerbower/llama-3-spicy-abliterated-stella-8B
    parameters:
      density: 0.6
      weight: 0.22
  - layer_range: [0, 32]
    model: flammenai/Mahou-1.2-llama3-8B
    parameters:
      density: 0.6
      weight: 0.22
  - layer_range: [0, 32]
    model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
    parameters:
      density: 0.58
      weight: 0.14
  - layer_range: [0, 32]
    model: zeroblu3/NeuralPoppy-EVO-L3-8B
    parameters:
      density: 0.58
      weight: 0.14
  - layer_range: [0, 32]
    model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.56
      weight: 0.1
  - layer_range: [0, 32]
    model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.56
      weight: 0.1
  - layer_range: [0, 32]
    model: emnakamura/llama-3-MagicDolphin-8B
    parameters:
      density: 0.55
      weight: 0.08
```
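Saving the YAML above as `config.yaml` and passing it to mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./output-model-dir`) should reproduce the merge; exact flags depend on your mergekit version and hardware.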
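For intuition about what `dare_ties` does with the `density` and `weight` values above, here is a toy sketch of the update rule on a single weight tensor. This is a conceptual illustration, not mergekit's actual implementation; `dare_sparsify` and `dare_ties_merge` are made-up helper names, and the density/weight values mirror the two 0.22-weight entries in the config:

```python
# Toy numpy illustration of the dare_ties update rule on one weight tensor.
# Conceptual sketch only, not mergekit's code; helper names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def dare_sparsify(delta, density):
    """DARE: randomly drop a (1 - density) fraction of the task vector,
    then rescale survivors by 1/density to preserve the expected value."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, densities, weights):
    # Task vectors: each model's parameter delta relative to the shared base.
    deltas = [dare_sparsify(ft - base, d) for ft, d in zip(finetuned, densities)]
    # TIES sign election: the sign of the weighted sum of deltas wins.
    elected = np.sign(sum(w * d for w, d in zip(weights, deltas)))
    merged = np.zeros_like(base)
    total = np.zeros_like(base)
    for w, d in zip(weights, deltas):
        agree = np.sign(d) == elected  # keep only sign-consistent updates
        merged += np.where(agree, w * d, 0.0)
        total += np.where(agree, w, 0.0)
    # Weighted average of the agreeing updates, applied on top of the base.
    return base + merged / np.maximum(total, 1e-12)

# Two fake 4x4 "fine-tunes" of a fake base model.
base = rng.normal(size=(4, 4))
tunes = [base + 0.1 * rng.normal(size=(4, 4)) for _ in range(2)]
print(dare_ties_merge(base, tunes, densities=[0.6, 0.6], weights=[0.22, 0.22]))
```

Sparsifying each task vector before the sign election is what lets several fine-tunes of the same base be combined with relatively little destructive interference.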
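Once merged, the model loads like any other Llama 3 checkpoint. A minimal inference sketch with `transformers`; the repository id below is a placeholder, so substitute the actual path or repo where the merged weights live:

```python
# Minimal inference sketch; "your-namespace/Llama-3-RedElixir-8B" is a
# placeholder repo id, not a confirmed published repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Llama-3-RedElixir-8B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Briefly introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```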