Cesco2004 committed on
Commit 44fffab
1 Parent(s): aa01716

Update README.md

Files changed (1)
  1. README.md +47 -47
README.md CHANGED
@@ -1,47 +1,47 @@
- ---
- base_model:
- - AurelPx/Percival_01-7b-slerp
- - yam-peleg/Experiment26-7B
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
- * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: AurelPx/Percival_01-7b-slerp
-     layer_range: [0, 32]
-   - model: yam-peleg/Experiment26-7B
-     layer_range: [0, 32]
- merge_method: slerp # This should not be indented under 'sources'
- base_model: yam-peleg/Experiment26-7B
- parameters:
-   t:
-   - filter: self_attn
-     value: [0, 0.5, 0.3, 0.7, 1]
-   - filter: mlp
-     value: [1, 0.5, 0.7, 0.3, 0]
-   - value: 0.5
- dtype: bfloat16
- ```
 
+ ---
+ base_model:
+ - AurelPx/Percival_01-7b-slerp
+ - yam-peleg/Experiment26-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: apache-2.0
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
+ * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: AurelPx/Percival_01-7b-slerp
+     layer_range: [0, 32]
+   - model: yam-peleg/Experiment26-7B
+     layer_range: [0, 32]
+ merge_method: slerp # This should not be indented under 'sources'
+ base_model: yam-peleg/Experiment26-7B
+ parameters:
+   t:
+   - filter: self_attn
+     value: [0, 0.5, 0.3, 0.7, 1]
+   - filter: mlp
+     value: [1, 0.5, 0.7, 0.3, 0]
+   - value: 0.5
+ dtype: bfloat16
+ ```
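
The card references SLERP without explaining it. SLERP (spherical linear interpolation) blends two weight tensors along the arc between their directions rather than along a straight line, which tends to preserve parameter magnitude better than plain averaging. Below is a minimal sketch of the idea, not mergekit's actual implementation; the function name, the epsilon guard, and the fallback-to-linear threshold are all assumptions for illustration.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative sketch)."""
    # Measure the angle between the two tensors' directions.
    u0 = v0.flatten() / (v0.norm() + eps)
    u1 = v1.flatten() / (v1.norm() + eps)
    dot = torch.clamp(torch.dot(u0, u1), -1.0, 1.0)
    omega = torch.arccos(dot)

    # Nearly parallel tensors: fall back to ordinary linear interpolation.
    if omega.abs() < 1e-4:
        return (1.0 - t) * v0 + t * v1

    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * v0 \
         + (torch.sin(t * omega) / sin_omega) * v1
```

In the configuration above, the `t` schedule ([0, 0.5, 0.3, 0.7, 1] for self-attention, the mirrored curve for MLP, 0.5 for everything else) is interpolated across the 32 merged layers, so the blend ratio varies by depth and by module type rather than being a single global constant.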
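
Once published, the merged checkpoint can be loaded like any Hugging Face causal LM. A minimal usage sketch, assuming a hypothetical repository id (`Cesco2004/merge` is a placeholder, not necessarily the real name) and the `bfloat16` dtype from the config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Cesco2004/merge"  # placeholder; substitute the actual repository name

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The SLERP merge method works by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```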