## What is this?

The previous name was WhiteSnake-V2, but its eval scores were not good, so I decided to rename it. It is very good at creative writing, RP, and ERP, but not good at math.

Its main goal is to beat the original WhiteSnake in both evals and real-world use; nothing spectacular, just decent.

GGUF quants, many thanks to mradermacher: https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4-GGUF

My own Q6_K: https://huggingface.co/DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF

## Merge Details

### Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- crestf411/MN-Slush

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
  - model: crestf411/MN-Slush
    parameters:
      density: 0.7
      weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
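For intuition, here is a rough, hypothetical NumPy sketch of what `dare_ties` does per parameter tensor; this is a toy illustration, not mergekit's actual implementation. DARE randomly drops a `1 - density` fraction of each model's delta from the base and rescales the survivors, and a simplified TIES-style sign election keeps only contributions that agree with the majority sign before summing the weighted deltas back onto the base.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density):
    """DARE: randomly drop (1 - density) of the delta entries and
    rescale survivors by 1/density so the expected value is unchanged."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

# Toy stand-ins for one parameter tensor of the base and the three models.
base = rng.normal(size=16)
models = [
    (rng.normal(size=16), 0.9, 1.0),  # Humanize-KTO: density 0.9, weight 1
    (rng.normal(size=16), 0.6, 0.8),  # GreenSnake:   density 0.6, weight 0.8
    (rng.normal(size=16), 0.7, 0.5),  # MN-Slush:     density 0.7, weight 0.5
]

# Weighted, DARE-pruned task vectors (model minus base).
pruned = [w * dare(m - base, d) for m, d, w in models]

# Simplified TIES sign election: per parameter, keep only contributions
# whose sign matches the sign of the summed contributions.
total = np.sum(pruned, axis=0)
sign = np.sign(total)
agreed = [np.where(np.sign(p) == sign, p, 0.0) for p in pruned]
merged = base + np.sum(agreed, axis=0)
print(merged.shape)  # (16,)
```

The `density`/`weight` pairs mirror the YAML above: a higher density keeps more of a model's delta, and the weight scales how strongly it pulls the merge.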

Model size: 12.2B parameters (BF16, safetensors).
