---
base_model:
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v7

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I don't know much about merging, so this may be a naive approach, but I was curious how the models would turn out if combined this way.
## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
It was built in two steps: first, [Orochi](https://huggingface.co/smelborp/MixtralOrochi8x7B) was merged with [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b) using SLERP with t = 0.15; the resulting intermediate model was then merged with [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) using SLERP with t = 0.2.
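The intermediate model is referenced below only by the local path `../maid-yuzu-v7-base`, so its exact configuration is not included in this card. Based on the description above, the first-stage config may have looked roughly like the sketch below. This is an assumption, not an actual config file: only the two model names and the 0.15 value come from the prose; treating Orochi as the base model and reusing the same slice layout as the final step are guesses.

```yaml
# Hypothetical first-stage merge (sketch, not the actual ../maid-yuzu-v7-base config)
base_model:
  model:
    path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.15  # 0.15 SLERP option mentioned above
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
  - layer_range: [0, 32]
    model:
      model:
        path: cognitivecomputations/dolphin-2.7-mixtral-8x7b
```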
### Models Merged

The following models were included in the merge:

* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
* ../maid-yuzu-v7-base

### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: ../maid-yuzu-v7-base
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.2
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v7-base
  - layer_range: [0, 32]
    model:
      model:
        path: ycros/BagelMIsteryTour-v2-8x7B
```