---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
tags:
- mergekit
- merge
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base model.

### Models Merged

The following models were included in the merge:
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```
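As a rough intuition for what the configuration above computes: task arithmetic adds each fine-tuned model's weighted delta from the base model to the base weights. The sketch below illustrates that rule on hypothetical scalar "parameters" (toy values, not the real 13B checkpoints); the actual mergekit implementation operates tensor-by-tensor and handles details like layer slicing and dtype.

```python
# Toy illustration of the task-arithmetic merge rule:
#   merged = base + sum_i( weight_i * (model_i - base) )
# State dicts here map parameter names to plain floats for clarity;
# the real merge applies the same formula to full weight tensors.

def task_arithmetic_merge(base, finetuned_with_weights):
    """base: dict of name -> value.
    finetuned_with_weights: list of (state_dict, weight) pairs."""
    merged = dict(base)
    for state, weight in finetuned_with_weights:
        for name, value in state.items():
            # Add this model's scaled delta from the base.
            merged[name] += weight * (value - base[name])
    return merged

# Hypothetical single-parameter "models" using the weights from the config.
base = {"w": 1.0}
psyfighter = {"w": 1.4}   # merged at weight 1.0
sheep_duck = {"w": 0.6}   # merged at weight 0.45
athena = {"w": 2.0}       # merged at weight 0.33

merged = task_arithmetic_merge(
    base,
    [(psyfighter, 1.0), (sheep_duck, 0.45), (athena, 0.33)],
)
# merged["w"] = 1.0 + 1.0*0.4 + 0.45*(-0.4) + 0.33*1.0 = 1.55
```

Note that weights need not sum to 1: each delta is scaled independently, which is why Psyfighter2 (weight 1.0) dominates the merge while the other two contribute smaller adjustments.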