---
base_model:
- Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP
- nbeerbower/llama-3-dragonmaid-8B
- kuotient/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v1-8b
- Undi95/Llama-3-Unholy-8B-e4
- openlynn/Llama-3-Soliloquy-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct) as the base model.

### Models Merged

The following models were included in the merge:

* [Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP](https://huggingface.co/Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP)
* [nbeerbower/llama-3-dragonmaid-8B](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B)
* [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b)
* [Undi95/Llama-3-Unholy-8B-e4](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4)
* [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: kuotient/Meta-Llama-3-8B-Instruct
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 32]
    model: Undi95/Llama-3-Unholy-8B-e4
  - layer_range: [0, 32]
    model: nbeerbower/llama-3-dragonmaid-8B
  - layer_range: [0, 32]
    model: openlynn/Llama-3-Soliloquy-8B
  - layer_range: [0, 32]
    model: Locutusque/llama-3-neural-chat-v1-8b
  - layer_range: [0, 32]
    model: Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP
  - layer_range: [0, 32]
    model: kuotient/Meta-Llama-3-8B-Instruct
```
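
### Reproducing the Merge

The merge can be re-run by saving the configuration above to a file (e.g. `config.yaml`) and feeding it to mergekit. Below is a minimal sketch, assuming mergekit is installed (`pip install mergekit`) and that its Python API (`MergeConfiguration`, `run_merge`, `MergeOptions`) matches the upstream README; the output directory `./merged` is an arbitrary choice:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the Model Stock merge; source models are fetched on demand.
run_merge(
    merge_config,
    out_path="./merged",  # arbitrary output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

mergekit also ships a CLI entry point, so `mergekit-yaml config.yaml ./merged` should produce the same result.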
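
### Usage

Since the card lists `library_name: transformers`, the merged weights load like any other Llama 3 instruct checkpoint. A minimal usage sketch; the model id `./merged` is a placeholder for wherever the merged weights live (the local merge output directory or a Hub repo id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged"  # placeholder: local output dir or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Llama 3 instruct models expect the chat template to be applied.
messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```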