---
base_model:
- Virt-io/Llama-3-8B-Irene-v0.2
- saishf/Kitty-Cat-SOVL-8B-L3-V1
- saishf/SOVLish-Maid-L3-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
---

> [!WARNING]
> This merge is sad; it feels like a downgrade.
> A mixed bag: sometimes great, other times meh. Not bad, but cursed with 'barely above a whisper' (might be my card)
> Follows instructions pretty well
# Llama-3-8B-Irene-v0.3

Same idea as the previous merge, but built on [saishf's](https://huggingface.co/saishf) SOVL merge line.

---

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, applied in two stages: the two SOVL models were first SLERP-merged into an intermediate model (Mergekit/Neko-Maid-SLERP), which was then SLERP-merged with Irene-v0.2. A minimal sketch of the interpolation itself follows the configuration blocks below.

### Models Merged

The following models were included in the merge:

* [Virt-io/Llama-3-8B-Irene-v0.2](https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.2)
* [saishf/SOVLish-Maid-L3-8B](https://huggingface.co/saishf/SOVLish-Maid-L3-8B)
* [saishf/Kitty-Cat-SOVL-8B-L3-V1](https://huggingface.co/saishf/Kitty-Cat-SOVL-8B-L3-V1)
* Mergekit/Neko-Maid-SLERP

### Configuration

The following YAML configurations were used to produce this model. The first config produces the final model from Irene-v0.2 and the intermediate Neko-Maid-SLERP; the second config is the one that built Neko-Maid-SLERP from the two SOVL models:

```yaml
slices:
- sources:
  - model: Virt-io/Llama-3-8B-Irene-v0.2
    layer_range: [0, 32]
  - model: Mergekit/Neko-Maid-SLERP
    layer_range: [0, 32]
merge_method: slerp
base_model: Virt-io/Llama-3-8B-Irene-v0.2
parameters:
  t:
  - value: [0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.35, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55]
dtype: bfloat16
```

```yaml
slices:
- sources:
  - model: saishf/Kitty-Cat-SOVL-8B-L3-V1
    layer_range: [0, 32]
  - model: saishf/SOVLish-Maid-L3-8B
    layer_range: [0, 32]
merge_method: slerp
base_model: saishf/Kitty-Cat-SOVL-8B-L3-V1
parameters:
  t:
  - value: [0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.35, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55]
dtype: bfloat16
```
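For reference, SLERP treats each pair of corresponding weight tensors as points on a hypersphere and interpolates along the great circle between them rather than along a straight line, which preserves the tensors' norms better than plain averaging. In the configs above, the list under `t` defines a gradient of interpolation factors that mergekit spreads across the layer range, so different depths lean more toward one parent or the other. The sketch below is a minimal illustration of the math, not mergekit's actual implementation; the function name and the near-colinear fallback threshold are my own choices.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0.0 returns v0, t=1.0 returns v1; intermediate values walk along
    the great circle between the two (normalized) tensors.
    """
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    # Cosine of the angle between the normalized tensors.
    dot = torch.dot(v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)).clamp(-1.0, 1.0)
    if 1.0 - dot.abs() < 1e-6:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0f + t * v1f
    else:
        theta = torch.acos(dot)  # angle between v0 and v1
        merged = (torch.sin((1.0 - t) * theta) * v0f + torch.sin(t * theta) * v1f) / torch.sin(theta)
    return merged.reshape(v0.shape).to(v0.dtype)

# Example: blend a single layer's weights 55% toward the second parent.
w_a = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_b = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_merged = slerp(0.55, w_a, w_b)
```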
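To reproduce a merge like this, mergekit ships a `mergekit-yaml` entry point that takes a config file and an output directory, e.g. `mergekit-yaml config.yaml ./output-model` (available flags such as `--cuda` vary by version, so check `mergekit-yaml --help` for your install). The two configs above would be run bottom-up: build the Neko-Maid-SLERP intermediate first, then point the final config at it.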
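Since the card tags the model for text generation with `transformers`, a standard loading snippet applies. This is a generic sketch: the repo id below assumes the merge is published under the same account as Irene-v0.2, and the prompt and sampling settings are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; adjust to wherever the merge is actually hosted.
model_id = "Virt-io/Llama-3-8B-Irene-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the configs
    device_map="auto",
)

prompt = "Write a short scene between two rival alchemists."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```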