johnsutor committed on
Commit 4402b21
1 Parent(s): 46cc269

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,13 +1,12 @@
 ---
 base_model:
-- VAGOsolutions/SauerkrautLM-Gemma-7b
 - google/codegemma-7b
 - google/gemma-7b
 library_name: transformers
 tags:
 - mergekit
 - merge
-license: mit
+
 ---
 # dare_ties
 
@@ -21,7 +20,6 @@ This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](
 ### Models Merged
 
 The following models were included in the merge:
-* [VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b)
 * [google/codegemma-7b](https://huggingface.co/google/codegemma-7b)
 
 ### Configuration
@@ -31,17 +29,20 @@ The following YAML configuration was used to produce this model:
 ```yaml
 models:
   - model: google/gemma-7b
-  - model: google/codegemma-7b
     parameters:
       density: 0.5
       weight: 0.5
-  - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+  - model: google/codegemma-7b
     parameters:
       density: 0.5
       weight: 0.5
+  # - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+  #   parameters:
+  #     density: 0.5
+  #     weight: 0.5
 merge_method: dare_ties
 base_model: google/gemma-7b
 parameters:
   int8_mask: true
 dtype: bfloat16
-```
+```
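The card above describes a dare_ties merge of google/gemma-7b and google/codegemma-7b in bfloat16. As a minimal sketch of using the result with transformers: the repo id below is a placeholder (this commit does not state where the merged weights are published), and bfloat16 is taken from the config's `dtype`.

```python
# Sketch only: the repo id is a placeholder, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/dare_ties-gemma-7b"  # hypothetical id for the merged model

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Arbitrary prompt; codegemma-7b in the merge is aimed at code completion.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```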
mergekit_config.yml CHANGED
@@ -1,13 +1,16 @@
 models:
   - model: google/gemma-7b
-  - model: google/codegemma-7b
     parameters:
       density: 0.5
       weight: 0.5
-  - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+  - model: google/codegemma-7b
     parameters:
       density: 0.5
       weight: 0.5
+  # - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+  #   parameters:
+  #     density: 0.5
+  #     weight: 0.5
 merge_method: dare_ties
 base_model: google/gemma-7b
 parameters:
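A minimal sketch of rebuilding the merge from the updated mergekit_config.yml, assuming mergekit is installed and provides its usual `mergekit-yaml` entry point; the output directory is an illustrative example, not part of this commit.

```python
# Sketch only: paths and the output directory are assumptions, not from this commit.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",         # mergekit's config-driven merge CLI (assumed installed)
        "mergekit_config.yml",   # the DARE-TIES config updated in this commit
        "./merged-model",        # hypothetical output directory for the merged weights
    ],
    check=True,
)
```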
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf1c5a8d788bdb59f65c92cffebaf68f0b2b15bb6519e3aa06155850275b2b9f
+oid sha256:a097b8288a6e4df23bbb46a70958db3c070f4921c4b5a5bb324262660e3f72ba
 size 9877792376
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c864fab9f1cd6c7399dd891f808076618531c99cae2f507a40fb527a96fad058
+oid sha256:4870cf94ad023625bc1130c5eff390677d3c61e53eef0aacfc4846c742bcf716
 size 7197599000