---
base_model:
- SanjiWatsuki/Kunoichi-7B
- cookinai/Valkyrie-V1
library_name: transformers
tags:
- mergekit
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bbcee1320702b1043ef8ae/9OPS0wrdkzksmyuM6Nxdu.png)

MaidenlessNoMore-7B was my first attempt at merging an LLM. I decided to merge one of the first models I really enjoyed, which not many people know of, https://huggingface.co/cookinai/Valkyrie-V1, with my other favorite model, which has been my fallback model for a long time: https://huggingface.co/SanjiWatsuki/Kunoichi-7B

This was more of an experiment than anything else. Hopefully it will lead to some more interesting merges, and who knows what else, in the future. I mean, we have to start somewhere, right?

The Alpaca or Alpaca-roleplay prompt format is recommended (see the usage sketch after the configuration below).

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [cookinai/Valkyrie-V1](https://huggingface.co/cookinai/Valkyrie-V1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
      - model: cookinai/Valkyrie-V1
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
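If you want to reproduce the merge yourself, mergekit can be driven from Python. Below is a minimal sketch based on the `run_merge`/`MergeOptions` entry points shown in mergekit's README; the exact API may differ between versions, and the config filename `slerp-config.yml` is just an assumed name for the YAML above saved to disk.

```python
# Sketch: reproducing this SLERP merge with mergekit's Python API.
# Assumes the YAML config above has been saved as slerp-config.yml.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "slerp-config.yml"   # assumed filename for the config above
OUTPUT_PATH = "./merged"          # where the merged model will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```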
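And here is a minimal usage sketch with the standard Alpaca prompt template, which is the format recommended above. The `model_id` is a placeholder; substitute the actual repo id or a local path to the merged weights.

```python
# Sketch: loading the merged model and prompting it in Alpaca format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaidenlessNoMore-7B"  # placeholder: replace with the real repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Standard Alpaca prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting in character.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```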