---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- gemma
- mergekit
pipeline_tag: text-generation
---
(quants uploading...)
<h3>Gemma-The-Writer-DEADLINE-10B-GGUF</h3>
<img src="the-writer.jpg" style="float:right; width:300px; height:300px; padding:10px;">
This is a Gemma2 merge of the top storytelling / writing models as rated at EQBench, tuned specifically for fiction, story, and writing.
Due to the model's high stability and compressed nature, it is also suitable for general use, including roleplay.
This model requires the GEMMA Instruct template. It has an 8k context window, extendable via rope to 32k or higher.
This version - "Deadline" - is a modified version of "Gemma The Writer 9B" ( [ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ] ), augmented with a
Brainstorm 5x adapter that alters output generation.
This adds close to 1B parameters, raising the model to 46 layers and 510 tensors, for a total of 10B parameters.
The addition of Brainstorm has altered the prose and sentence structure, reduced GPT-isms, and generally improved the model's performance.
Recommended settings: Rep Pen of 1.02 or higher, temp range 0-5.
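The recommended settings and the rope-based context extension above can be sketched programmatically. This is a minimal illustration only, using llama.cpp-style parameter names as an assumption; adapt the names to your inference runtime of choice.

```python
# Sketch of the recommended inference settings for this model.
# Assumption: llama.cpp-style parameter names; values come from the card above.

BASE_CTX = 8192      # native Gemma2 context window (8k)
TARGET_CTX = 32768   # extended context via rope (32k)

# Linear rope scaling factor needed to stretch the 8k window to 32k.
rope_scale = TARGET_CTX / BASE_CTX

sampler_settings = {
    "repeat_penalty": 1.02,  # Rep Pen of 1.02 or higher recommended
    "temp_min": 0.0,         # recommended temp range: 0 to 5
    "temp_max": 5.0,
}

print(rope_scale)  # 4.0
```

Higher rope scales (e.g. 8.0 for a 64k window) follow the same arithmetic, though quality at extreme extensions is not guaranteed.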
Example outputs below.
<B>Models Used:</B>
This is a high precision "DARE TIES" merge at the layer level (each layer per model adjusted - 168 points of adjustment over the 4 models) comprised of these models:
[ https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B ]
[ https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 ]
[ https://huggingface.co/ifable/gemma-2-Ifable-9B ]
[ https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO ]
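A DARE TIES merge of the four models above can be expressed as a mergekit configuration. The sketch below is hypothetical: the actual recipe, base model, and per-layer density/weight schedules (the "168 points of adjustment") are not published here, so every parameter value is a placeholder.

```yaml
# Hypothetical mergekit sketch of a DARE TIES merge over the four source
# models. Densities and weights are illustrative placeholders, NOT the
# actual per-layer schedule used for this model.
merge_method: dare_ties
base_model: google/gemma-2-9b-it   # assumed base; not stated in this card
models:
  - model: lemon07r/Gemma-2-Ataraxy-9B
    parameters: {density: 0.5, weight: 0.25}
  - model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
    parameters: {density: 0.5, weight: 0.25}
  - model: ifable/gemma-2-Ifable-9B
    parameters: {density: 0.5, weight: 0.25}
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters: {density: 0.5, weight: 0.25}
dtype: bfloat16
```

A real per-layer merge would replace the scalar `density`/`weight` values with per-layer lists, which is where the fine-grained adjustment described above would live.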
Special thanks to all the model makers. Great work!
---
<h3>Example Prompts With Outputs</h3>
---