DavidAU committed
Commit 99400f7
1 Parent(s): b9e22ce

Update README.md

Files changed (1)
  1. README.md +28 -48
README.md CHANGED
@@ -1,48 +1,28 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # MN-Three-RCM-Instruct1-2f
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using E:/MN-Rocinante-12B-v1.1-Instruct as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * E:/MN-12B-Celeste-V1.9-Instruct
- * E:/MN-magnum-v2.5-12b-kto-Instruct
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- # Config 1
- # E:/MN-Rocinante-12B-v1.1-Instruct
- # E:/MN-12B-Celeste-V1.9-Instruct
- # E:/MN-magnum-v2.5-12b-kto-Instruct
-
- models:
-   - model: E:/MN-Rocinante-12B-v1.1-Instruct
-   - model: E:/MN-magnum-v2.5-12b-kto-Instruct
-     parameters:
-       weight: .6
-       density: .8
-   - model: E:/MN-12B-Celeste-V1.9-Instruct
-     parameters:
-       weight: .38
-       density: .6
- merge_method: dare_ties
- tokenizer_source: union
- base_model: E:/MN-Rocinante-12B-v1.1-Instruct
- dtype: bfloat16
- ```
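For reference, a mergekit config like the one removed above is normally handed to the `mergekit-yaml` command. The sketch below is illustrative only: the config filename and output directory are placeholders, mergekit must be installed (e.g. `pip install mergekit`), and the `E:/` model paths inside the config must point at locally downloaded models.

```python
# Minimal sketch: running a mergekit YAML config like the one above.
# "config.yaml" and "./merged-model" are placeholder paths (assumptions,
# not from the original commit).
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model"],
    check=True,  # raise if the merge fails
)
```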
 
+ ---
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ base_model: []
+ ---
+
+ <h2>MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct</h2>
+
+ This is part 5 of a 10-part series.
+
+ This repo contains the full-precision source model, in safetensors format, for generating GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats.
+ The source model can also be used directly (see the loading sketch after the link below).
+
+ For full information about this model, including:
+
+ - Details about this model and its use case(s).
+ - Context limits.
+ - Special usage notes / settings.
+ - Any model(s) used to create this model.
+ - Template(s) used to access/use this model.
+ - Example generation(s).
+ - GGUF quants of this model.
+
+ Please go to:
+
+ [https://huggingface.co/DavidAU/MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct-gguf](https://huggingface.co/DavidAU/MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct-gguf)
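Because the repo ships full-precision safetensors, the model can be loaded directly with the transformers library. A minimal sketch, assuming the source repo id is `DavidAU/MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct` (the linked repo id without the `-gguf` suffix, which is an assumption) and enough memory for an 18.5B-parameter model in bfloat16:

```python
# Minimal sketch: loading the full-precision safetensors source directly.
# The repo id below is assumed (linked GGUF repo without "-gguf"); an 18.5B
# model in bfloat16 needs roughly 37 GB of GPU/CPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct"  # assumed source repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",           # requires accelerate; spreads layers across available devices
)

prompt = "Write the opening line of a storm-swept horror story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```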