---

base_model:
- princeton-nlp/gemma-2-9b-it-SimPO
- TheDrummer/Gemmasutra-9B-v1
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- sillytavern
- gemma2
language:
- en

---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Ellaria-9B-GGUF
This is a quantized version of [tannedbum/Ellaria-9B](https://huggingface.co/tannedbum/Ellaria-9B), created using llama.cpp.

# Original Model Card


Same reliable approach as before. A good RP model and a suitable dose of SimPO are a match made in heaven.

## SillyTavern

### Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
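
If you run the GGUF outside SillyTavern, most of these samplers map directly onto llama-cpp-python. A minimal sketch, assuming llama-cpp-python as the runtime and a placeholder quant filename; `smooth_factor` and `smooth_curve` are SillyTavern's smooth-sampling settings and have no direct equivalent here:

```python
from llama_cpp import Llama

# Placeholder filename: use whichever quant of Ellaria-9B-GGUF you downloaded.
llm = Llama(model_path="Ellaria-9B.Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "Write a short in-character greeting.",
    max_tokens=256,
    temperature=0.9,     # temp
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,  # rep_pen
)
print(out["choices"][0]["text"])
```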
### Advanced Formatting

**Important:** use the Context & Instruct presets for Gemma, available [here](https://huggingface.co/tannedbum/ST-Presets/tree/main).

Instruct Mode: Enabled

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.
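
For intuition: SLERP interpolates each pair of weight tensors along an arc on the unit sphere instead of a straight line, which preserves the geometry of the weights better than plain linear averaging. A minimal numpy sketch of the underlying formula (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight arrays a and b."""
    # Measure the angle between the two tensors, viewed as flat unit vectors.
    a_n = a.ravel() / (np.linalg.norm(a) + eps)
    b_n = b.ravel() / (np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if theta < eps:
        # Nearly colinear tensors: fall back to linear interpolation.
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```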

### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [TheDrummer/Gemmasutra-9B-v1](https://huggingface.co/TheDrummer/Gemmasutra-9B-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: TheDrummer/Gemmasutra-9B-v1
        layer_range: [0, 42]
      - model: princeton-nlp/gemma-2-9b-it-SimPO
        layer_range: [0, 42]
merge_method: slerp
base_model: TheDrummer/Gemmasutra-9B-v1
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
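
The `t` schedule decides how far each tensor moves from the base model (`t = 0` keeps Gemmasutra-9B, `t = 1` takes the SimPO weights): self-attention tensors follow one gradient of values, MLP tensors another, and all remaining tensors use the flat 0.4. As a rough mental model, and assuming mergekit spreads the gradient anchors evenly across the depth and interpolates in between (an assumption about its gradient handling, not confirmed from its source):

```python
import numpy as np

def expand_gradient(anchors, num_layers):
    # Place the anchor t-values at even intervals over the network depth and
    # linearly interpolate a per-layer value (assumed mergekit-like behavior).
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

# self_attn schedule from the config above, expanded over the 42 layers:
print(np.round(expand_gradient([0.2, 0.4, 0.6, 0.2, 0.4], 42), 3))
```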

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum