---
library_name: transformers
tags:
- mergekit
- merge

---

<img src="https://huggingface.co/Casual-Autopsy/Gemma-Radiation-RP-9B/resolve/main/Gemma_Rad.png" style="display: block; margin: auto;">

ToDo: Fill the card with more info.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

This is a bit of a test merge to dip my toes into merging Gemma 2.
Sadly, it seems 8B is about my PC's tolerable limit before performance becomes painfully and infuriatingly slow, so after this I might have to sit Gemma 2 out.

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Casual-Autopsy/Gemma-Rad-RP](https://huggingface.co/Casual-Autopsy/Gemma-Rad-RP) as a base.
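For anyone unfamiliar with the method: Model Stock moves the average of several fine-tuned checkpoints back toward their shared pretrained base, with an interpolation ratio derived from how much the fine-tunes agree with each other. A rough sketch of the idea, assuming plain state dicts of tensors (the function name is made up for illustration; this is not mergekit's actual code):

```python
import torch

def model_stock_merge(base: dict, finetuned: list[dict]) -> dict:
    """Rough sketch of Model Stock (arXiv:2403.19522): interpolate between
    the average of the fine-tuned weights and the pretrained anchor, using a
    ratio t based on the average cosine similarity of the task vectors."""
    k = len(finetuned)
    assert k >= 2, "need at least two fine-tuned models"
    merged = {}
    for name, w0 in base.items():
        deltas = [ft[name] - w0 for ft in finetuned]
        # Average pairwise cosine similarity between task vectors.
        cosines = []
        for i in range(k):
            for j in range(i + 1, k):
                cosines.append(torch.nn.functional.cosine_similarity(
                    deltas[i].flatten(), deltas[j].flatten(), dim=0))
        cos_theta = torch.stack(cosines).mean().clamp(min=0.0)
        # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
        t = k * cos_theta / (1 + (k - 1) * cos_theta)
        w_avg = torch.stack([ft[name] for ft in finetuned]).mean(dim=0)
        merged[name] = t * w_avg + (1 - t) * w0
    return merged
```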

### Models Merged

The following models were included in the merge:
* [Casual-Autopsy/Gemma-Rad-Uncen](https://huggingface.co/Casual-Autopsy/Gemma-Rad-Uncen)
* [Casual-Autopsy/Gemma-Rad-IQ](https://huggingface.co/Casual-Autopsy/Gemma-Rad-IQ)

### Configuration

The following YAML configurations were used to produce this model. The first three blocks define the intermediate merges; the final `model_stock` block combines them:

```yaml
models:
  - model: crestf411/gemma2-9B-sunfall-v0.5.2
  - model: crestf411/gemma2-9B-daybreak-v0.5
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.5, 0.13, 0.5, 0.13, 0.3]
  - model: crestf411/gemstone-9b
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.13, 0.5, 0.13, 0.5, 0.13]
merge_method: dare_ties
base_model: crestf411/gemma2-9B-sunfall-v0.5.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
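
A note on the parameters above: mergekit interpolates list-valued `density`/`weight` settings across the layer stack (its gradient syntax), so these aren't literal per-tensor values. The `dare_ties` method itself sparsifies each model's task vector before combining; a minimal sketch of the DARE drop-and-rescale step, assuming a single delta tensor (function name is hypothetical, for illustration only):

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Illustrative DARE step: randomly drop a (1 - density) fraction of the
    task vector, then rescale the survivors by 1/density so the expected
    magnitude of the delta is preserved."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density
```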

```yaml
models:
  - model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - model: nldemo/Gemma-9B-Summarizer-QLoRA
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.0625, 0.25, 0.0625, 0.25, 0.0625]
  - model: SillyTilly/google-gemma-2-9b-it+rbojja/gemma2-9b-intent-lora-adapter
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.0625, 0.25, 0.0625, 0.25, 0.0625]
  - model: nbeerbower/gemma2-gutenberg-9B
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.25, 0.0625, 0.25, 0.0625, 0.25]
merge_method: ties
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
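
The plain `ties` method used here skips the random dropout but still resolves sign conflicts between task vectors before averaging. A minimal sketch of the sign-election idea (again illustrative, not mergekit's implementation):

```python
import torch

def ties_combine(deltas: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Illustrative TIES step: elect a per-parameter sign by total weighted
    mass, then average only the deltas that agree with the elected sign."""
    stacked = torch.stack([w * d for w, d in zip(weights, deltas)])
    elected = torch.sign(stacked.sum(dim=0))   # majority-mass sign
    agree = torch.sign(stacked) == elected     # keep agreeing entries
    kept = stacked * agree
    counts = agree.sum(dim=0).clamp(min=1)     # avoid divide-by-zero
    return kept.sum(dim=0) / counts
```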

```yaml
models:
  - model: IlyaGusev/gemma-2-9b-it-abliterated
  - model: TheDrummer/Smegmma-9B-v1
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.5, 0.13, 0.5, 0.13, 0.3]
  - model: TheDrummer/Tiger-Gemma-9B-v1
    parameters:
      density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
      weight: [0.13, 0.5, 0.13, 0.5, 0.13]
merge_method: dare_ties
base_model: IlyaGusev/gemma-2-9b-it-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

```yaml
models:
  - model: Casual-Autopsy/Gemma-Rad-RP
  - model: Casual-Autopsy/Gemma-Rad-Uncen
  - model: Casual-Autopsy/Gemma-Rad-IQ
merge_method: model_stock
base_model: Casual-Autopsy/Gemma-Rad-RP
dtype: bfloat16
```
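
To just try the result, a minimal transformers loading snippet; the repo id comes from this card, while the dtype and device settings are assumptions about your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Casual-Autopsy/Gemma-Radiation-RP-9B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Gemma 2 instruct models use the chat template baked into the tokenizer.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```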