automerger committed on
Commit c837d91
1 Parent(s): 0764daf

Upload folder using huggingface_hub

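The commit message matches `huggingface_hub`'s folder-upload flow. As a minimal sketch of what such a push looks like (the local path and repo id below are assumptions for illustration, not taken from this commit):

```python
# Sketch of an upload like the one in the commit message, via
# huggingface_hub's HfApi.upload_folder. Path and repo id are assumptions.
from huggingface_hub import HfApi

api = HfApi()  # authentication comes from HF_TOKEN or the cached login
api.upload_folder(
    folder_path="./Experiment24Yam-7B",       # local merge output (assumed)
    repo_id="automerger/Experiment24Yam-7B",  # target model repo (assumed)
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```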
README.md CHANGED
@@ -6,28 +6,34 @@ tags:
 - lazymergekit
 - automerger
 base_model:
+- yam-peleg/Experiment24-7B
 - mayacinka/yam-jom-7B
 ---
 
 # Experiment24Yam-7B
 
 Experiment24Yam-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
+* [yam-peleg/Experiment24-7B](https://huggingface.co/yam-peleg/Experiment24-7B)
 * [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B)
 
 ## 🧩 Configuration
 
 ```yaml
-models:
-  - model: yam-peleg/Experiment24-7B
-    # No parameters necessary for base model
-  - model: mayacinka/yam-jom-7B
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+  - sources:
+      - model: yam-peleg/Experiment24-7B
+        layer_range: [0, 32]
+      - model: mayacinka/yam-jom-7B
+        layer_range: [0, 32]
+merge_method: slerp
 base_model: yam-peleg/Experiment24-7B
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
 ```
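The change above swaps the `dare_ties` recipe for a `slerp` merge: each pair of weight tensors is interpolated along the great circle between the two models, with the interpolation factor `t` varying by layer depth and by module filter. As a rough illustration of what `slerp` computes for one pair of tensors (a simplified stand-in, not mergekit's implementation):

```python
# Sketch of SLERP between two weight tensors, illustrating the "slerp"
# merge method above. Simplified stand-in, not mergekit's actual code.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate from a (t=0) toward b (t=1) along the unit sphere."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight directions
    omega = torch.arccos((a_unit * b_unit).sum().clamp(-1.0, 1.0))
    if omega < eps:
        # Nearly collinear tensors: fall back to linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat \
          + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

Each five-element `value` list gives anchor points that mergekit interpolates across the 32 layers, so (on the usual reading, where t = 0 keeps the base model) self-attention weights drift from yam-peleg/Experiment24-7B toward mayacinka/yam-jom-7B with depth, the mlp schedule runs in reverse, and unfiltered tensors use a flat t = 0.5.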
mergekit_config.yml CHANGED
@@ -1,15 +1,19 @@
 
-models:
-  - model: yam-peleg/Experiment24-7B
-    # No parameters necessary for base model
-  - model: mayacinka/yam-jom-7B
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+  - sources:
+      - model: yam-peleg/Experiment24-7B
+        layer_range: [0, 32]
+      - model: mayacinka/yam-jom-7B
+        layer_range: [0, 32]
+merge_method: slerp
 base_model: yam-peleg/Experiment24-7B
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
 
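This `mergekit_config.yml` is the same configuration recorded in the README; mergekit writes it into the output folder alongside the weights. A merge like this is typically reproduced with the `mergekit-yaml` CLI or programmatically; a sketch of the latter follows, noting that exact module paths and the `MergeOptions` fields vary across mergekit versions, and the output path is a placeholder:

```python
# Sketch: re-running the merge from mergekit_config.yml with mergekit's
# Python API. Output path is a placeholder; options vary by version.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mergekit_config.yml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Experiment24Yam-7B",  # placeholder output directory
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```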
 
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3689b2d56cee481b0e285c226db42217ab1b619c1080f7e90209f2ea9d57b0b5
+oid sha256:5926a3e6912c20846dcdf898782d2a712091862fe42cc2fbd36d7fee2d79bf0c
 size 9942981696
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:786ac362318198138921dbb0edcd0f6e61b7fd5e03d34ca08e4a6302c8ee9eb6
+oid sha256:aeb56fff2835d365556f0eaea204884338f7a27cf39b8bf35ce44dacd5d23dba
 size 4540516344
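The two `.safetensors` entries are git-lfs pointer files, so the diff only touches the `oid sha256:` lines; the shard sizes are unchanged, as expected when a merge recipe changes the weights but not the architecture. A quick sketch of checking a downloaded shard against the new pointer's oid (filename assumes the shard is in the current directory):

```python
# Sketch: verify a downloaded shard against the sha256 oid recorded
# in its git-lfs pointer file.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "5926a3e6912c20846dcdf898782d2a712091862fe42cc2fbd36d7fee2d79bf0c"
print(sha256_of("model-00001-of-00002.safetensors") == expected)
```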