netcat420 committed
Commit 477ad7e
1 Parent(s): 2c82f68

Update README.md

Files changed (1): README.md +68 -3
---
base_model:
- netcat420/MFANN3bv0.3
- netcat420/MFANN3bv0.7
- liminerity/Phigments12
- netcat420/MFANN3bv0.6
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# MFANN3bv0.7.10

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as the base model.
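At a high level, TIES merges "task vectors" (each model's parameter deltas from the base): it trims each vector to its largest-magnitude entries, elects a per-parameter sign, and averages only the entries agreeing with that sign. A toy sketch of those steps on plain Python lists follows; the `trim` and `ties_merge` helpers are hypothetical illustrations, not mergekit's actual implementation:

```python
# Toy sketch of the TIES steps on per-parameter "task vectors"
# (deltas from the base model). Illustrative only.

def trim(delta, density):
    # Keep only the largest-magnitude fraction `density` of entries,
    # zeroing the rest.
    k = max(1, int(len(delta) * density))
    thresh = sorted((abs(x) for x in delta), reverse=True)[k - 1]
    return [x if abs(x) >= thresh else 0.0 for x in delta]

def ties_merge(deltas, density):
    trimmed = [trim(d, density) for d in deltas]
    merged = []
    for col in zip(*trimmed):
        # Elect a sign by total mass, then average the entries
        # that agree with the elected sign.
        mass = sum(col)
        sign = 1.0 if mass >= 0 else -1.0
        agree = [x for x in col if x * sign > 0]
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged
```

In the real merge, the `density` and `weight` parameters in the configuration below control this trimming and the per-model scaling before sign election.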
### Models Merged

The following models were included in the merge:
* [netcat420/MFANN3bv0.3](https://huggingface.co/netcat420/MFANN3bv0.3)
* [netcat420/MFANN3bv0.7](https://huggingface.co/netcat420/MFANN3bv0.7)
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: netcat420/MFANN3bv0.6
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: netcat420/MFANN3bv0.3
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: netcat420/MFANN3bv0.7
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
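The bracketed lists in the configuration (e.g. `density: [1, 0.7, 0.1]`) are gradients: the anchor values are spread across the model's layers rather than applied uniformly. A rough sketch of that expansion, assuming simple piecewise-linear interpolation (the hypothetical `expand_gradient` helper is illustrative, not mergekit's code):

```python
# Sketch: expand a list-valued parameter ("gradient") into per-layer
# values by piecewise-linear interpolation between the anchors.
# Assumed behavior for illustration; see mergekit for the real logic.

def expand_gradient(anchors, num_layers):
    if len(anchors) == 1:
        return [float(anchors[0])] * num_layers
    out = []
    for i in range(num_layers):
        # Map layer index i into anchor space [0, len(anchors) - 1].
        t = i / (num_layers - 1) * (len(anchors) - 1)
        lo = min(int(t), len(anchors) - 2)
        frac = t - lo
        out.append(anchors[lo] * (1 - frac) + anchors[lo + 1] * frac)
    return out
```

Under this reading, `density: [1, 0.7, 0.1]` keeps nearly all of MFANN3bv0.6's task vector in the earliest layers and only a small fraction in the last ones, while `weight: [0, 0.3, 0.7, 1]` fades MFANN3bv0.3 in from zero to full strength.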
## System Prompt

```
### System:
### thought-process:
```