ProdeusUnity committed
Commit 72ed94d • Parent(s): dd27f9f
Update README.md

README.md CHANGED
````diff
@@ -6,26 +6,47 @@ tags:
 - merge
 
 ---
-# Prismatic
+# Prismatic 12b v0.1 Experimental 11/15
+
+## This is a fix for the ChatML format, since the previous upload did not have an EOS token
+
+*The sparkling courage I longed for, what I got is small... My tears are surely the prism of tomorrow... Say "Hello!" to the ideal future, let's go see them~*
+
+Listen to the song on YouTube: https://www.youtube.com/watch?v=v3I6EVlyPx4
+
+A one-off merge for a friend, though it came out rather good. I like it, so give it a try.
+
+mistralai/Mistral-Nemo-Base-2407
+inflatebot/MN-12b-Mag-Mell-R1
+nbeerbower/Mistral-Nemo-Prism-12B-v5
+
+License for this model: Apache 2.0
+
+Format: Mistral Tekken or ChatML
+
+Thank you to AuriAetherwiing for helping me merge the models and for providing compute (A40).
+
+Details
+
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
-This model was merged using the
+This model was merged using the ties merge method using mistralai_Mistral-Nemo-Base-2407 as a base.
 
 ### Models Merged
 
 The following models were included in the merge:
-* /nbeerbower_Mistral-Nemo-Prism-12B-v5
-* /inflatebot_MN-12B-Mag-Mell-R1
+/inflatebot_MN-12B-Mag-Mell-R1
+/nbeerbower_Mistral-Nemo-Prism-12B-v5
 
-
+#### Configuration
 The following YAML configuration was used to produce this model:
 
-```yaml
 models:
   - model: /inflatebot_MN-12B-Mag-Mell-R1
     parameters:
@@ -35,14 +56,10 @@ models:
     parameters:
       weight: 0.4
       density: 0.75
-base_model: /
+base_model: /mistralai_Mistral-Nemo-Base-2407
 parameters:
   epsilon: 0.05
   normalize: true
   lambda: 1
-  int8_mask: true
 merge_method: ties
 dtype: bfloat16
-
-
-```
````
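For readers unfamiliar with the method named by `merge_method: ties`, the sketch below illustrates the core TIES idea: trim low-magnitude task-vector entries, elect a majority sign per parameter, then average the surviving values. This is a toy illustration of the technique on raw tensors, not mergekit's actual implementation; `weights`, `density`, and `lam` loosely correspond to `weight`, `density`, and `lambda` in the config above, while `epsilon` and `normalize` are omitted for simplicity.

```python
import torch

def ties_merge(base, deltas, weights, density=0.8, lam=1.0):
    """Toy TIES-style merge of task vectors onto one base tensor.

    deltas  : list of (finetuned - base) tensors, one per model
    weights : per-model scalar weights (cf. `weight` in the YAML)
    density : fraction of entries kept per delta (cf. `density`)
    lam     : final scale on the merged task vector (cf. `lambda`)
    """
    trimmed = []
    for delta, w in zip(deltas, weights):
        # Trim: zero out all but the top-`density` fraction by magnitude.
        k = max(1, int(delta.numel() * density))
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed.append(w * delta * (delta.abs() >= cutoff))
    stacked = torch.stack(trimmed)

    # Elect sign: the majority (summed) sign per parameter wins...
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected

    # ...and only the entries agreeing with it are averaged together.
    count = agree.sum(dim=0).clamp(min=1)
    merged_delta = (stacked * agree).sum(dim=0) / count
    return base + lam * merged_delta
```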
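To reproduce a merge like this one, mergekit can be driven from Python as well as from its `mergekit-yaml` CLI. The sketch below follows the API used in mergekit's example notebook at the time of writing; it assumes the YAML above is saved as `config.yml`, that the local model paths in it (e.g. `/inflatebot_MN-12B-Mag-Mell-R1`) point at pre-downloaded checkpoints, and that the `MergeOptions` fields match your mergekit version.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown in the model card.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the TIES merge and write the merged model to ./Prismatic-12b.
run_merge(
    merge_config,
    out_path="./Prismatic-12b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # carry a tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The same config can also be run from the shell with the CLI entry point, roughly `mergekit-yaml config.yml ./Prismatic-12b --cuda`.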
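Since the point of this commit is that ChatML turns now terminate with an EOS token, a quick way to sanity-check the uploaded tokenizer is to render a short conversation with `transformers` and confirm that assistant turns close with the template's end-of-turn marker (`<|im_end|>` in ChatML). The repo id below is a placeholder; substitute the actual model path.

```python
from transformers import AutoTokenizer

MODEL_ID = "ProdeusUnity/Prismatic-12b"  # placeholder repo id

tok = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "user", "content": 'Say "Hello!" to the ideal future.'},
    {"role": "assistant", "content": "Hello!"},
]

# Render the conversation with the model's chat template.
text = tok.apply_chat_template(messages, tokenize=False)
print(text)

# With the fix, the assistant turn should end with the end-of-turn
# marker, and eos_token should be set so generation stops cleanly.
print("eos_token:", tok.eos_token)
```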