---
base_model:
- alpindale/WizardLM-2-8x22B
- openbmb/Eurux-8x22b-nca
- fireworks-ai/mixtral-8x22b-instruct-oh
- mistralai/Mixtral-8x22B-v0.1
- migtissera/Tess-2.0-Mixtral-8x22B
- openbmb/Eurux-8x22b-kto
library_name: transformers
tags:
- mergekit
- merge
---

Followed divinetaco's recipe to cook up this merge.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B) as the base.

### Models Merged

The following models were included in the merge:

* [openbmb/Eurux-8x22b-nca](https://huggingface.co/openbmb/Eurux-8x22b-nca)
* [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh)
* [mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1)
* [migtissera/Tess-2.0-Mixtral-8x22B](https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B)
* [openbmb/Eurux-8x22b-kto](https://huggingface.co/openbmb/Eurux-8x22b-kto)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: alpindale/WizardLM-2-8x22B
  - model: openbmb/Eurux-8x22b-kto
  - model: mistralai/Mixtral-8x22B-v0.1
  - model: migtissera/Tess-2.0-Mixtral-8x22B
  - model: fireworks-ai/mixtral-8x22b-instruct-oh
  - model: openbmb/Eurux-8x22b-nca
base_model: alpindale/WizardLM-2-8x22B
merge_method: model_stock
dtype: bfloat16
```
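
### Reproducing the Merge

To reproduce the merge, save the YAML above to a file and pass it to mergekit, either via its `mergekit-yaml` CLI or its Python entry point. Below is a minimal sketch of the latter; the config path, output directory, and option values are assumptions, and option names may vary across mergekit versions:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (path is an assumption).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the model_stock merge; the output directory name is arbitrary.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=True,              # reduce peak memory while loading checkpoints
    ),
)
```

Note that merging six 8x22B checkpoints requires enough disk and RAM to stream all of their weights; mergekit's lazy loading keeps memory use manageable, but expect the merge to take a while.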
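
### Usage

Since `library_name` is `transformers`, the merged model can be loaded with the standard `transformers` API. A minimal sketch, assuming the merge has been published (the repo id below is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; substitute the actual published model path.
model_id = "your-name/your-merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # shard the 8x22B experts across available GPUs
)

prompt = "Explain the Model Stock merge method in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```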