WesPro committed on
Commit
81eed04
1 Parent(s): 947779d

Update README.md

Files changed (1):
  1. README.md +34 -34
README.md CHANGED
@@ -1,34 +1,34 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # RPDoctor_Samantha-L3-8B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * H:\merge\Dr.Samantha-8B + H:\loras\Llama-3-LimaRP-LoRA-8B
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: H:\merge\Dr.Samantha-8B+H:\loras\Llama-3-LimaRP-LoRA-8B
-     parameters:
-       weight: 1.0
- merge_method: linear
- dtype: float16
- ```
 
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ Dr.RP.Samantha Llama 8B
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * H:\merge\Dr.Samantha-8B + H:\loras\Llama-3-LimaRP-LoRA-8B
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: H:\merge\Dr.Samantha-8B+H:\loras\Llama-3-LimaRP-LoRA-8B
+     parameters:
+       weight: 1.0
+ merge_method: linear
+ dtype: float16
+ ```
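The [linear](https://arxiv.org/abs/2203.05482) method is a weighted average of the models' parameter tensors; with a single input model at weight 1.0, as here, it effectively copies that model's weights through unchanged (the `+` in the path is mergekit's syntax for applying a LoRA to a base model before merging). A minimal NumPy sketch of the averaging step itself, illustrative only and not mergekit's actual implementation:

```python
import numpy as np

def linear_merge(tensors, weights):
    """Weighted average of parameter tensors (the 'linear' merge method)."""
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tensors)) / total

# Two hypothetical parameter tensors standing in for per-layer model weights.
a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 4.0, 5.0])

print(linear_merge([a, b], [1.0, 1.0]))  # equal weights -> elementwise mean: [2. 3. 4.]
```

In the real tool, this average is taken per parameter tensor across all listed models, with each model's `weight` from the YAML supplying the coefficients.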