---
base_model:
- nbeerbower/mistral-nemo-gutenberg-12B-v2
- nbeerbower/SmolNemo-12B-FFT-experimental
- nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
- nbeerbower/Mistral-Nemo-Prism-12B-v2
library_name: transformers
tags:
- mergekit
- merge
---

# Nemo-Loony-12B-experimental

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nbeerbower/mistral-nemo-gutenberg-12B-v2](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v2) as the base model.

### Models Merged

The following models were included in the merge:

* [nbeerbower/SmolNemo-12B-FFT-experimental](https://huggingface.co/nbeerbower/SmolNemo-12B-FFT-experimental)
* [nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental](https://huggingface.co/nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental)
* [nbeerbower/Mistral-Nemo-Prism-12B-v2](https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B-v2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
    parameters:
      weight: 1
  - model: nbeerbower/SmolNemo-12B-FFT-experimental
    parameters:
      weight: 1
  - model: nbeerbower/Mistral-Nemo-Prism-12B-v2
    parameters:
      weight: 1
merge_method: ties
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v2
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
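
## Usage

The configuration above can be re-run with mergekit's `mergekit-yaml` command. Below is a minimal sketch for loading the merged model with Transformers; the repo id is inferred from the model name and is an assumption, so adjust it to wherever the weights are actually hosted.

```python
# Minimal usage sketch, assuming the merged weights are published
# under the repo id below (inferred from the model name above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Nemo-Loony-12B-experimental"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",           # requires the accelerate package
)

prompt = "Write the opening line of a gothic novel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```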