---
base_model:
- grimjim/magnum-consolidatum-v1-12b
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
license: apache-2.0
---
# Magnolia-v1-12B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Instruct was added in at low weight in order to increase the steerability of the model; safety has consequently been reinforced.

Tested at temperature 0.7 and minP 0.01, with ChatML prompting.

Mistral Nemo models tend to have repetition issues in general. For this model at least, these can be mitigated somewhat with additional sysprompting, e.g.:

```
Avoid redundant phrasing and maintain forward narrative progression by utilizing varied sentence structure, alternative word choices, and active voice. Employ descriptive details judiciously, ensuring they serve a purpose in advancing the story or revealing character.
```

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [grimjim/magnum-consolidatum-v1-12b](https://huggingface.co/grimjim/magnum-consolidatum-v1-12b)
* [grimjim/mistralai-Mistral-Nemo-Instruct-2407](https://huggingface.co/grimjim/mistralai-Mistral-Nemo-Instruct-2407)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
  - model: grimjim/magnum-consolidatum-v1-12b
merge_method: slerp
base_model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
parameters:
  t:
    - value: 0.1
dtype: bfloat16
```
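For intuition about what the SLERP merge with `t: 0.1` does, the sketch below shows spherical linear interpolation applied per tensor, treating each weight tensor as a vector and interpolating along the great arc between the two models' weights. This is an illustrative toy implementation, not mergekit's exact code; the fallback to linear interpolation for near-parallel tensors is a common convention, assumed here rather than taken from mergekit.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (the base model's tensor), t=1 returns v1.
    A small t such as 0.1 keeps the result close to the base model.
    """
    # Normalize copies to measure the angle between the two tensors.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0.ravel(), u1.ravel()), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    # Interpolate along the arc; coefficients sum smoothly from v0 to v1.
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

With `t = 0.1`, as in the configuration above, each merged tensor stays close to the base model (`mistralai-Mistral-Nemo-Instruct-2407`), which matches the note that Instruct was blended in at low weight.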