Ttimofeyka committed
Commit d536b59, 1 parent: 95d4456

Upload README.md with huggingface_hub

Files changed (1): README.md (+8 -9)
README.md CHANGED
@@ -1,13 +1,12 @@
 ---
-base_model:
-- MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1
-- maywell/PiVoT-0.1-Starling-LM-RP
+base_model: []
+library_name: transformers
 tags:
 - mergekit
 - merge
 
 ---
-# PiVot-Noromaid-NSFW-Mistral-7B-GGUF
+# merge
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
@@ -19,8 +18,8 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-* [MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1)
-* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
+* Undi95/Mistral-RP-0.1-7B
+* MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1
 
 ### Configuration
 
@@ -29,18 +28,18 @@ The following YAML configuration was used to produce this model:
 ```yaml
 slices:
 - sources:
-  - model: maywell/PiVoT-0.1-Starling-LM-RP
+  - model: Undi95/Mistral-RP-0.1-7B
     layer_range: [0, 32]
   - model: MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1
     layer_range: [0, 32]
 merge_method: slerp
-base_model: maywell/PiVoT-0.1-Starling-LM-RP
+base_model: Undi95/Mistral-RP-0.1-7B
 parameters:
   t:
   - filter: self_attn
     value: [0, 0.5, 0.3, 0.7, 1]
   - filter: mlp
     value: [1, 0.5, 0.7, 0.3, 0]
-  - value: 0.5
+  - value: 0.5 # fallback for rest of tensors
 dtype: bfloat16
 ```
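The `t` gradients in the config control how much of each model ends up in each layer: `t = 0` keeps the base model's weights, `t = 1` takes the other model's, and intermediate values interpolate along the great circle between the two weight vectors. Below is a minimal sketch of that rule, assuming a simple flatten-then-slerp formulation and an even stretch of the five anchor values across the 32 layers; mergekit's actual implementation may differ in details, and the helper names `slerp` and `t_for_layer` are invented for this sketch.

```python
import torch


def t_for_layer(layer: int, num_layers: int, anchors: list[float]) -> float:
    """Stretch the anchor values (e.g. [0, 0.5, 0.3, 0.7, 1]) evenly
    across all layers and linearly interpolate between neighbors.
    The exact spacing mergekit uses is an assumption here."""
    pos = layer / max(num_layers - 1, 1) * (len(anchors) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    frac = pos - lo
    return anchors[lo] * (1.0 - frac) + anchors[hi] * frac


def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor,
          eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (the base model), t=1 returns v1. Falls back to
    plain linear interpolation when the vectors are nearly colinear
    and the spherical formula is ill-conditioned."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:  # nearly parallel: lerp is numerically safer
        merged = (1.0 - t) * a + t * b
    else:
        so = torch.sin(omega)
        merged = torch.sin((1.0 - t) * omega) / so * a \
               + torch.sin(t * omega) / so * b
    return merged.reshape(v0.shape).to(v0.dtype)


if __name__ == "__main__":
    torch.manual_seed(0)
    w_base = torch.randn(8, 8)   # stand-in for a base-model tensor
    w_other = torch.randn(8, 8)  # stand-in for the other model's tensor
    t = t_for_layer(16, 32, [0, 0.5, 0.3, 0.7, 1])  # self_attn gradient
    print(slerp(t, w_base, w_other).shape)
```

The actual merge is produced by feeding the YAML above to mergekit's CLI (e.g. `mergekit-yaml config.yml ./merged-model`); the sketch only illustrates the math behind the `slerp` method and its `t` schedule.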