johnsutor committed
Commit f5504a1
1 Parent(s): 2cb65ad

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,13 +1,12 @@
 ---
 base_model:
-- VAGOsolutions/SauerkrautLM-Gemma-7b
 - google/codegemma-7b
 - google/gemma-7b
 library_name: transformers
 tags:
 - mergekit
 - merge
-license: mit
+
 ---
 # dare_linear
 
@@ -21,7 +20,6 @@ This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099)
 ### Models Merged
 
 The following models were included in the merge:
-* [VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b)
 * [google/codegemma-7b](https://huggingface.co/google/codegemma-7b)
 
 ### Configuration
@@ -31,17 +29,20 @@ The following YAML configuration was used to produce this model:
 ```yaml
 models:
 - model: google/gemma-7b
-- model: google/codegemma-7b
   parameters:
     density: 0.5
     weight: 0.5
-- model: VAGOsolutions/SauerkrautLM-Gemma-7b
+- model: google/codegemma-7b
   parameters:
     density: 0.5
     weight: 0.5
+# - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+#   parameters:
+#     density: 0.5
+#     weight: 0.5
 merge_method: dare_linear
 base_model: google/gemma-7b
 parameters:
   int8_mask: true
 dtype: bfloat16
 ```
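The `dare_linear` method in the config above works by taking each fine-tuned model's delta from the base, randomly dropping delta entries with probability 1 − `density`, rescaling the survivors by 1/`density`, and adding the `weight`-scaled results onto the base parameters. A minimal pure-Python sketch of that arithmetic over flat lists (an illustration of the technique, not the mergekit implementation, which operates on model tensors):

```python
import random


def dare_delta(delta, density, rng):
    # Drop each delta entry with probability (1 - density);
    # rescale the kept entries by 1/density so the expectation is unchanged.
    return [d / density if rng.random() < density else 0.0 for d in delta]


def dare_linear(base, tuned_models, density, weights, seed=0):
    # Weighted sum of drop-and-rescaled deltas, added onto the base parameters.
    rng = random.Random(seed)
    merged = list(base)
    for tuned, w in zip(tuned_models, weights):
        delta = [t - b for t, b in zip(tuned, base)]
        for i, d in enumerate(dare_delta(delta, density, rng)):
            merged[i] += w * d
    return merged
```

With `density: 1.0` nothing is dropped and this reduces to a plain linear merge; at `density: 0.5` (as in the config) roughly half of each delta is zeroed out and the rest doubled before the weighted sum.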
mergekit_config.yml CHANGED
@@ -1,13 +1,16 @@
 models:
 - model: google/gemma-7b
-- model: google/codegemma-7b
   parameters:
     density: 0.5
     weight: 0.5
-- model: VAGOsolutions/SauerkrautLM-Gemma-7b
+- model: google/codegemma-7b
   parameters:
     density: 0.5
     weight: 0.5
+# - model: VAGOsolutions/SauerkrautLM-Gemma-7b
+#   parameters:
+#     density: 0.5
+#     weight: 0.5
 merge_method: dare_linear
 base_model: google/gemma-7b
 parameters:
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:df62393ea71819c20e1b9a9bb82ae94b9c99f69d8edfd9f765a56735b527c11d
+oid sha256:a097b8288a6e4df23bbb46a70958db3c070f4921c4b5a5bb324262660e3f72ba
 size 9877792376
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:067eb180a9cdf3943f3c5a10e5ecc88f0b127ce62be435b068377b92b21e3a5c
+oid sha256:4870cf94ad023625bc1130c5eff390677d3c61e53eef0aacfc4846c742bcf716
 size 7197599000
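The safetensors shards above are stored as Git LFS pointer files: the `oid sha256:` field is the SHA-256 digest of the actual shard contents, so a new merge produces a new oid even though the shard size is unchanged. A small sketch for recomputing that digest locally to check a downloaded shard against the pointer (the function name and chunked read are illustrative, not part of any Hugging Face or LFS API):

```python
import hashlib


def lfs_oid(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so multi-GB shards
    # are hashed without loading them fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()
```

Comparing `lfs_oid("model-00001-of-00002.safetensors")` against the `oid sha256:` value in the pointer confirms the shard was downloaded intact.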