Masterjp123 committed
Commit c0ecacd
1 Parent(s): f17a801

Update README.md

Files changed (1)
  1. README.md +34 -4
README.md CHANGED
@@ -10,17 +10,47 @@ tags:
 - merge
 
 ---
-# merged
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+# Model
+This is the BF16 unquantized version of SnowyRP, and the first public release of a model in the SnowyRP series!
+
+[BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)
+
+[GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ)
+
+[GGUF](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GGUF)
+
+Any future quantizations I am made aware of will be added.
 
 ## Merge Details
-
-Made as a test model, not sure about quality, probably will not make any quants unless someone finds out it's good and asks.
+I originally made V2beta as a test, but it seems to be good, so I am quantizing it.
+
+These models CAN and WILL produce X-rated or harmful content, because they are heavily uncensored in an attempt not to limit the model or make it worse.
+
+This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile: great for general assistant work, RP and ERP, RPG-style RP, and much more.
+
+## Model Use:
 
+This model is very good... WITH THE RIGHT SETTINGS.
+I personally use Mirostat mixed with dynamic temperature, along with the epsilon and eta cutoffs.
+```
+Optimal settings (so far)
+
+Mirostat mode: 2
+tau: 2.95
+eta: 0.05
+
+Dynamic temperature
+min: 0.25
+max: 1.8
+
+Cutoffs
+epsilon: 3
+eta: 3
+```
 
 ### Merge Method
 
-This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
+This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
 
 ### Models Merged
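The sampler settings added to the README are frontend-specific knobs (Mirostat, dynamic temperature, epsilon/eta cutoffs). As a reference only, here is a minimal sketch of how they might be written as a text-generation-webui sampler preset; the parameter names below are my assumption of that UI's spelling and are not stated in the README itself:

```yaml
# Hypothetical text-generation-webui preset (parameter names assumed,
# values copied from the README's "Optimal settings" block)
mirostat_mode: 2
mirostat_tau: 2.95
mirostat_eta: 0.05
dynamic_temperature: true
dynatemp_low: 0.25
dynatemp_high: 1.8
epsilon_cutoff: 3
eta_cutoff: 3
```

Other frontends expose the same Mirostat tau/eta values under their own names, so treat the keys above as a template rather than a drop-in file.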