---
base_model:
- Nitral-AI/Nera_Noctis-12B
- Nitral-AI/Captain-Eris-Diogenes_Twilight-V0.420-12B
library_name: transformers
tags:
- mergekit
- merge
---
# ST Example:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/QJwYuz7Mo5Niywo9iQ1eR.png)

# Prompt format: ChatML
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

### The following models were included in the merge:
* [Nitral-AI/Nera_Noctis-12B](https://huggingface.co/Nitral-AI/Nera_Noctis-12B)
* [Nitral-AI/Captain-Eris-Diogenes_Twilight-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris-Diogenes_Twilight-V0.420-12B)

### The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Nera_Noctis-12B
        layer_range: [0, 40]
      - model: Nitral-AI/Captain-Eris-Diogenes_Twilight-V0.420-12B
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Nera_Noctis-12B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.4, 0.2, 0.6, 0.9]
    - filter: mlp
      value: [1, 0.6, 0.8, 0.4, 0.1]
    - value: 0.4206911
dtype: bfloat16
```
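
### Reproducing the merge (sketch)
A minimal sketch of re-running this SLERP merge with mergekit's Python API, as documented in the mergekit README. It assumes the YAML above is saved as `config.yaml`, that `mergekit` is installed (`pip install mergekit`), and that `./merged-model` is an arbitrary output directory of your choosing. The equivalent CLI call would be `mergekit-yaml config.yaml ./merged-model`.

```python
# Sketch only: re-run the SLERP merge from the config above with mergekit.
# Assumes the YAML config is saved as config.yaml; "./merged-model" is an
# arbitrary output path, not an official location.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./merged-model",
    options=MergeOptions(
        cuda=True,            # set False to merge on CPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```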
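
### Example usage (sketch)
The ChatML prompt format above can be applied automatically via `transformers` chat templating. A minimal sketch follows; `"path/to/merged-model"` is a placeholder (this card does not name the final repo), and it assumes the merged tokenizer ships a ChatML chat template.

```python
# Sketch only: chat with the merged model using the ChatML format above.
# "path/to/merged-model" is a placeholder path, not a real repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# apply_chat_template renders the <|im_start|>...<|im_end|> ChatML turns
# and appends the assistant prefix for generation.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```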