---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
base_model:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
---
# DDPOO-7B-slerp

DDPOO-7B-slerp is a SLERP merge of the following models, created with [mergekit](https://github.com/cg123/mergekit):
* [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)

## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
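
Here, `t` is the SLERP interpolation factor (0 keeps the base model's weights, 1 takes the secondary model's); the value lists define a gradient across the 32 layers, with separate schedules for the self-attention and MLP tensors and 0.5 as the fallback for all other parameters.

To reproduce the merge locally, the configuration above can be run through mergekit. Below is a minimal sketch using mergekit's Python API; the exact interface may vary between mergekit versions, and `config.yaml` is assumed to contain the YAML above:

```python
# Reproduce the merge locally; a sketch, not the exact commands used for this model.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./DDPOO-7B-slerp",  # directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # stream weights to reduce RAM use
        low_cpu_memory=False,
    ),
)
```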
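
## 💻 Usage

A minimal inference sketch with 🤗 Transformers. The repo id `CultriX/DDPOO-7B-slerp` is a placeholder assumption; substitute the model's actual Hub id, or the local output directory from the merge step above:

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "CultriX/DDPOO-7B-slerp"  # placeholder repo id, see note above
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a prompt in the model's chat format.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

outputs = pipeline(
    prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(outputs[0]["generated_text"])
```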