---
license: cc-by-nc-4.0
base_model:
- crestf411/MN-Slush
- DoppelReflEx/MN-12B-Mimicore-WhiteSnake
library_name: transformers
tags:
- mergekit
- merge

---

# What is this?

A defective version of WolFrame: it sometimes confuses {{user}} and {{char}}, which caused me a lot of trouble.

## Why does this model have far better eval scores than the original WolFrame?

GGUF quants: https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-4-GGUF (MN-12B-LilithFrame-Experiment-4 was this model's previous name).

<details>
  <summary>Merge Detail</summary>
  <p>
    ### Models Merged

The following models were included in the merge:
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
* [DoppelReflEx/MN-12B-Mimicore-WhiteSnake](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
  - model: crestf411/MN-Slush
merge_method: slerp
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base

```
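
In this slerp config, `t` is the interpolation weight toward crestf411/MN-Slush (the non-base model), swept as a gradient across layer blocks, so the middle layers lean more on Slush than the first and last ones. If you want to reproduce the merge, the YAML above can be fed straight to mergekit. Below is a minimal Python sketch using mergekit's programmatic interface, assuming a recent mergekit release that exposes `MergeConfiguration`, `MergeOptions`, and `run_merge` as shown in its README; names and option fields may differ between versions, and the file paths are placeholders.

```python
# Sketch: reproduce the merge above with mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and exposes
# MergeConfiguration / MergeOptions / run_merge as in its README.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "slerp-config.yml"   # the YAML shown above, saved to disk
OUTPUT_PATH = "./merged-model"    # where the merged weights will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if available
        copy_tokenizer=True,             # take the tokenizer from the base model
    ),
)
```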

  </p>
</details>
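
For a quick test of the merged weights, here is a minimal inference sketch with transformers. The repo ID is a placeholder, since this card does not state the final repository name; replace it with the actual model path (or a local directory containing the merged weights).

```python
# Sketch: load the merged model with transformers and run one generation.
# The repo id below is a placeholder, not the actual repository of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DoppelReflEx/<this-model>"  # placeholder; replace with the real repo or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```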