---
base_model:
- Gaivoronsky/Mistral-7B-Saiga
- snorkelai/Snorkel-Mistral-PairRM-DPO
- OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
- meta-math/MetaMath-Mistral-7B
- HuggingFaceH4/mistral-7b-grok
- NousResearch/Yarn-Mistral-7b-128k
- ajibawa-2023/Code-Mistral-7B
- HuggingFaceH4/mistral-7b-anthropic
- SherlockAssistant/Mistral-7B-Instruct-Ukrainian
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method. Passthrough stacks layer slices from the source models end to end rather than averaging their weights, producing a deeper "frankenmerge": the full 32 layers of Mistral-7B-Saiga followed by upper-layer slices from the other eight models, for a total of 76 transformer layers.

### Models Merged

The following models were included in the merge:
* [Gaivoronsky/Mistral-7B-Saiga](https://huggingface.co/Gaivoronsky/Mistral-7B-Saiga)
* [snorkelai/Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)
* [OpenBuddy/openbuddy-mistral2-7b-v20.3-32k](https://huggingface.co/OpenBuddy/openbuddy-mistral2-7b-v20.3-32k)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
* [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok)
* [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
* [ajibawa-2023/Code-Mistral-7B](https://huggingface.co/ajibawa-2023/Code-Mistral-7B)
* [HuggingFaceH4/mistral-7b-anthropic](https://huggingface.co/HuggingFaceH4/mistral-7b-anthropic)
* [SherlockAssistant/Mistral-7B-Instruct-Ukrainian](https://huggingface.co/SherlockAssistant/Mistral-7B-Instruct-Ukrainian)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Gaivoronsky/Mistral-7B-Saiga
    layer_range: [0, 32]
- sources:
  - model: HuggingFaceH4/mistral-7b-grok
    layer_range: [24, 32]
- sources:
  - model: HuggingFaceH4/mistral-7b-anthropic
    layer_range: [24, 32]
- sources:
  - model: NousResearch/Yarn-Mistral-7b-128k
    layer_range: [26, 32]
- sources:
  - model: snorkelai/Snorkel-Mistral-PairRM-DPO
    layer_range: [26, 32]
- sources:
  - model: OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
    layer_range: [26, 32]
- sources:
  - model: meta-math/MetaMath-Mistral-7B
    layer_range: [28, 32]
- sources:
  - model: ajibawa-2023/Code-Mistral-7B
    layer_range: [28, 32]
- sources:
  - model: SherlockAssistant/Mistral-7B-Instruct-Ukrainian
    layer_range: [30, 32]
merge_method: passthrough
dtype: bfloat16
```
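
### Reproducing the merge

The merge can be re-run with the `mergekit-yaml` CLI (`mergekit-yaml config.yml ./merged-model`) or from Python. The sketch below follows mergekit's documented Python API; the config path and output directory are placeholders, and option names may vary slightly between mergekit versions.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yml"     # the YAML configuration shown above (placeholder path)
OUTPUT_PATH = "./merged-model"  # where the merged weights will be written (placeholder)

# Parse the merge configuration from the YAML file.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the passthrough merge; source models not cached locally are downloaded.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy a tokenizer into the output directory
        lazy_unpickle=False,             # experimental low-memory loader
        low_cpu_memory=False,
    ),
)
```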
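
## Usage

A minimal sketch of loading the merged model with `transformers`. The repo id below is a placeholder; substitute the actual Hub id of this model. Note that the 76-layer stack is roughly twice the size of a single Mistral-7B, so plan memory accordingly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-model"  # placeholder, replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```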