icefog72 committed
Commit
67ffc69
1 Parent(s): 62b7ece

Update README.md

Files changed (1)
README.md (+7 -7)
README.md CHANGED
@@ -4,7 +4,7 @@ library_name: transformers
 tags:
 - mergekit
 - merge
-
+license: cc-by-nc-4.0
 ---
 # Kunokukulemonchini-7b
 
@@ -18,8 +18,8 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-* H:\FModels\grimjim\kukulemon-7B
-* H:\FModels\Kunocchini-7b-128k-test
+* grimjim/kukulemon-7B
+* Nitral-AI/Kunocchini-7b-128k-test
 
 ### Configuration
 
@@ -29,12 +29,12 @@ The following YAML configuration was used to produce this model:
 ```yaml
 slices:
   - sources:
-      - model: H:\FModels\grimjim\kukulemon-7B
+      - model: grimjim/kukulemon-7B
         layer_range: [0, 32]
-      - model: H:\FModels\Kunocchini-7b-128k-test
+      - model: Nitral-AI/Kunocchini-7b-128k-test
         layer_range: [0, 32]
 merge_method: slerp
-base_model: H:\FModels\Kunocchini-7b-128k-test
+base_model: Nitral-AI/Kunocchini-7b-128k-test
 parameters:
   t:
     - filter: self_attn
@@ -43,4 +43,4 @@ parameters:
       value: [1, 0.5, 0.7, 0.3, 0]
     - value: 0.5
 dtype: float16
-```
+```
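For context on the `merge_method: slerp` line in the config above: SLERP interpolates each pair of parameter tensors along the arc between them rather than averaging linearly, with the `t` values setting the per-layer interpolation weight for tensors matching each filter and `0.5` as the fallback. Below is a minimal, illustrative sketch of that interpolation, assuming PyTorch tensors; the function name, the `eps` near-parallel fallback, and the example shapes are assumptions for illustration, not mergekit's actual implementation.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors (illustrative sketch)."""
    # Work on flattened float copies so the angle is computed over the whole tensor.
    a = v0.flatten().float()
    b = v1.flatten().float()

    # Cosine of the angle between the two parameter vectors, clamped for numerical safety.
    cos_theta = torch.clamp(
        torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)), -1.0, 1.0
    )
    theta = torch.arccos(cos_theta)
    sin_theta = torch.sin(theta)

    if sin_theta.item() < eps:
        # Nearly colinear vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        # Standard SLERP: weights follow the arc between v0 and v1.
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * a \
                 + (torch.sin(t * theta) / sin_theta) * b

    return merged.reshape(v0.shape).to(v0.dtype)


# Example: t=0.5 lands halfway along the arc between the two weight tensors.
w0 = torch.randn(1024, 1024)
w1 = torch.randn(1024, 1024)
merged = slerp(0.5, w0, w1)
```

With `t = 0.5`, as in the default value of the config, the merged tensor sits at the midpoint of the arc between the two source models' weights; the per-layer lists for `self_attn` and `mlp` shift that balance toward one parent or the other at different depths.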