---
license: apache-2.0
tags:
- merge
- roleplay
- not-for-all-audiences
---
# Brynhildr-34B
This is a merge of pre-trained language models created using mergekit.
A 4.65 bpw exl2 version can be found here:
https://huggingface.co/ParasiticRogue/Brynhildr-34B-exl2-4.65
GGUFs provided by mradermacher:
https://huggingface.co/mradermacher/Brynhildr-34B-GGUF?not-for-all-audiences=true
## Merge Details
ChatML multi-merge.
Similar recipe framework to the stew merge, but this one is less unnecessarily verbose and actually has breaks in its output at certain points, with the trade-off that its output is also more polite in structure. It is more of a merge done for curiosity's sake than anything seriously considered.
## Prompt Format: ChatML

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
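If you are building the prompt string yourself rather than relying on a chat template, the ChatML layout above can be assembled with a few lines of Python. The helper name and example strings below are illustrative, not part of the model card:

```python
# Minimal sketch: fill the ChatML template shown above, leaving the
# assistant turn open so the model generates the {output} part itself.

def build_chatml_prompt(system_prompt: str, prompt: str) -> str:
    """Return a ChatML-formatted prompt ending at the assistant header."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(text)
```

During inference, stop generation on the `<|im_end|>` token so the model does not run past its own turn.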
## Models Merged

The following models were included in the merge:
- https://huggingface.co/NeverSleep/CausalLM-RP-34B
- https://huggingface.co/cognitivecomputations/dolphin-2_2-yi-34b
- https://huggingface.co/adamo1139/Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702
- https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B
- https://huggingface.co/chargoddard/Yi-34B-200K-Llama
## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: CausalLM-RP-34B
    parameters:
      weight: 0.34
      density: 0.78
  - model: dolphin-2_2-yi-34b
    parameters:
      weight: 0.28
      density: 0.66
  - model: Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702
    parameters:
      weight: 0.22
      density: 0.54
  - model: Nous-Hermes-2-Yi-34B
    parameters:
      weight: 0.16
      density: 0.42
merge_method: dare_ties
base_model: Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
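For intuition about what the `weight` and `density` parameters in this config do, here is a toy sketch of a DARE-TIES merge on flat parameter vectors. This is not mergekit's implementation (it omits details such as weight normalization); it only illustrates the idea: DARE randomly drops each task-vector entry with probability `1 - density` and rescales the survivors, then TIES elects a sign per parameter and discards disagreeing deltas before the weighted combine:

```python
import numpy as np


def dare_ties_merge(base, models, weights, densities, rng):
    """Simplified DARE-TIES merge over flat numpy parameter vectors."""
    deltas = []
    for params, w, d in zip(models, weights, densities):
        delta = params - base                   # task vector vs. the base model
        mask = rng.random(delta.shape) < d      # DARE: keep each entry with prob = density
        delta = np.where(mask, delta / d, 0.0)  # rescale kept entries by 1/density
        deltas.append(w * delta)                # apply the per-model weight
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))      # TIES: elect a sign per parameter
    agree = np.sign(stacked) == elected         # keep only deltas agreeing with it
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta


rng = np.random.default_rng(0)
base = np.zeros(8)
models = [base + rng.normal(size=8) for _ in range(4)]
merged = dare_ties_merge(
    base, models,
    weights=[0.34, 0.28, 0.22, 0.16],    # same values as the config above
    densities=[0.78, 0.66, 0.54, 0.42],
    rng=rng,
)
print(merged)
```

The `int8_mask` and `dtype` options in the config control mergekit's internal mask storage and output precision and have no analogue in this sketch.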