---
base_model: []
tags:
- mergekit
- merge
---
# ninja

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with Kunoichi-7B as the base model.

### Models Merged

The following models were included in the merge:

* notus-7b-v1
* openchat_3.5
* Loyal-Macaroni-Maid-7B + some ERP LoRA I made

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Kunoichi-7B
  - model: Loyal-Macaroni-Maid-7B
    parameters:
      density: 0.53
      weight: 0.7
  - model: notus-7b-v1
    parameters:
      density: 0.53
      weight: 0.2
  - model: openchat_3.5
    parameters:
      density: 0.53
      weight: 0.1
merge_method: dare_ties
base_model: Kunoichi-7B
parameters:
  int8_mask: true
tokenizer_source: union
dtype: bfloat16
```
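
For intuition about what `dare_ties` does with the `density` and `weight` parameters above, here is a minimal NumPy sketch of the idea on flat arrays rather than real checkpoints. It is an illustration, not mergekit's implementation: each fine-tuned model's task vector is randomly sparsified and rescaled by its density (DARE), then a per-parameter sign is elected and only agreeing deltas are summed (TIES). The function name and array-based interface are invented for this example.

```python
import numpy as np

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    """Toy DARE-TIES merge over flat parameter arrays.

    base:      1-D array of base-model parameters
    finetuned: list of 1-D arrays (same shape as base)
    densities: per-model keep probability (the `density` field)
    weights:   per-model merge weight (the `weight` field)
    """
    rng = np.random.default_rng(seed)
    sparse_deltas = []
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                       # task vector
        keep = rng.random(delta.shape) < d      # keep each entry with prob = density
        sparse = np.where(keep, delta / d, 0.0) # rescale survivors (DARE)
        sparse_deltas.append(w * sparse)        # apply merge weight
    stacked = np.stack(sparse_deltas)
    elected = np.sign(stacked.sum(axis=0))      # per-parameter sign election (TIES)
    agree = np.sign(stacked) == elected         # drop sign-conflicting entries
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta
```

With `density=1.0` nothing is dropped, so merging a single model with weight 1.0 simply reproduces that model; with several models, parameters where their deltas disagree in sign keep only the contributions matching the elected majority sign.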