---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- gemma
- mergekit
pipeline_tag: text-generation
---

(quants uploading...)

<h3>Gemma-The-Writer-DEADLINE-10B-GGUF</h3>

<img src="the-writer.jpg" style="float:right; width:300px; height:300px; padding:10px;">

This is a Gemma2 model merge of the top storytelling / writing models as ranked at EQBench, tuned specifically for fiction, story, and writing.

Due to the model's high stability and compressed nature, it is also suitable for general use, including roleplay.

This model requires the Gemma Instruct template and has an 8k context window, extendable via rope to 32k or higher.

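For reference, a single-turn prompt in the Gemma Instruct format can be assembled as in the sketch below. The helper function name is illustrative (it is not part of the model files); the turn markers follow the standard Gemma chat template.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in the Gemma Instruct turn format.

    The helper name is illustrative; the <start_of_turn>/<end_of_turn>
    markers are the standard Gemma chat-template tokens.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write the opening scene of a horror story.")
```

The resulting string can be passed as-is to any GGUF runtime that does not apply a chat template for you.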
This version - "Deadline" - is a modified version of "Gemma The Writer 9B" ( [ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ] ), altered with a Brainstorm 5x adapter to change output generation.

This adds close to 1B parameters, raising the model to 46 layers and 510 tensors, for a total of 10B parameters.

The addition of Brainstorm has altered the prose and sentence structure, reduced "GPT-isms", and generally improved the model's performance.

Recommended settings: repetition penalty of 1.02 or higher, temperature range 0 to 5.

Example outputs below.

<B>Models Used:</b>

This is a high-precision "DARE TIES" merge at the layer level (each layer per model adjusted - 168 points of adjustment over the 4 models), comprised of these models:

[ https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B ]

[ https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 ]

[ https://huggingface.co/ifable/gemma-2-Ifable-9B ]

[ https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO ]

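A layer-level DARE TIES merge of this kind could be expressed in a mergekit config roughly like the sketch below. This is an illustrative outline only: the density and weight values, and the choice of base model, are placeholders, not the actual recipe used for this model (which tunes each layer individually).

```yaml
# Illustrative mergekit sketch only -- values are placeholders,
# not the recipe actually used for this model.
merge_method: dare_ties
base_model: google/gemma-2-9b-it   # placeholder base
models:
  - model: lemon07r/Gemma-2-Ataraxy-9B
    parameters:
      density: 0.5    # placeholder
      weight: 0.25    # placeholder
  - model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
    parameters:
      density: 0.5
      weight: 0.25
  - model: ifable/gemma-2-Ifable-9B
    parameters:
      density: 0.5
      weight: 0.25
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 0.5
      weight: 0.25
dtype: bfloat16
```

In practice, a per-layer merge replaces the single scalar parameters above with per-layer value lists, which is where the 168 points of adjustment come from.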
Special thanks to all the model makers. Great work!

---

<h3>Example Prompts With Outputs</h3>

----
