---
license: cc-by-nc-4.0
base_model:
  - cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
  - crestf411/MN-Slush
library_name: transformers
tags:
  - mergekit
  - merge
---

# What is this?

A simple merge. It handles RP and ERP well enough and is decent overall.

Its eval scores are better than WolfFrame's, though I can't say exactly how good it is in practice.

Overall, a very nice model to try. 😁

GGUF here: https://huggingface.co/mradermacher/MN-12B-Kakigori-GGUF

Imatrix here: https://huggingface.co/mradermacher/MN-12B-Kakigori-i1-GGUF

My own Q6_K: https://huggingface.co/DoppelReflEx/MN-12B-Kakigori-Q6_K-GGUF
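
If you prefer the full-precision weights over the GGUF quants, here is a minimal loading sketch with transformers. The repo id `DoppelReflEx/MN-12B-Kakigori`, the prompt, and the sampling settings are assumptions for illustration, not taken from this card:

```python
# Minimal sketch: load the full-precision merge with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DoppelReflEx/MN-12B-Kakigori"  # assumed repo id, adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

prompt = "Write a short scene set in a seaside café."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```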

# Merge Detail

### Models Merged

The following models were included in the merge:

- [cgato/Nemo-12b-Humanize-KTO-Experimental-Latest](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-Latest)
- [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush) (base)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
  - model: crestf411/MN-Slush
merge_method: slerp
base_model: crestf411/MN-Slush
parameters:
  t: [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
dtype: bfloat16
tokenizer_source: base
```
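
To reproduce the merge, this config can be passed to mergekit. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `kakigori.yml`; the file name and output path are arbitrary:

```python
# Minimal reproduction sketch: run the merge via the mergekit-yaml CLI.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "kakigori.yml",        # path to the YAML config shown above (assumed file name)
        "./MN-12B-Kakigori",   # output directory for the merged weights
        "--cuda",              # drop this flag to merge on CPU
    ],
    check=True,
)
```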