Text Generation
GGUF
English
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
rp
128k context
horror
llama 3.1
mergekit
Inference Endpoints
conversational
Update README.md
README.md CHANGED
@@ -60,7 +60,7 @@ Example outputs below.
 
 <B>Model Notes:</B>
 
-- Detail, prose and fiction writing abilities are significantly increased vs L3 Instruct.
+- Detail, prose and fiction writing abilities are significantly increased vs L3.1 Instruct AND L3 Instruct.
 - For more varied prose (sentence/paragraph/dialog), raise the temp and/or add more instructions in your prompt(s).
 - Role-players: Be careful raising temp too high, as it may affect instruction following.
 - This model works with rep pen of 1 or higher, 1.05+ recommended.
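The sampler notes in the hunk above (raise temp for more varied prose, rep pen of 1.05+, be careful with temp for role-play) translate directly into runtime settings. Below is a minimal sketch assuming llama-cpp-python as the runtime; the GGUF filename, context size and sampler values are illustrative placeholders, not something this card specifies.

```python
# Minimal sketch using llama-cpp-python (assumed runtime; not specified by this card).
# The GGUF filename, n_ctx and sampler values are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # hypothetical local quant file
    n_ctx=8192,                      # raise toward the model's maximum if memory allows
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a vivid fiction writer."},
        {"role": "user", "content": "Continue this scene: the lighthouse door creaks open..."},
    ],
    temperature=1.1,      # raise for more varied prose, per the notes above
    repeat_penalty=1.05,  # "rep pen" of 1.05+ recommended
    max_tokens=800,       # state the size you want; otherwise outputs tend to run short
)
print(out["choices"][0]["message"]["content"])
```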
@@ -70,10 +70,8 @@ Example outputs below.
 - Output length will vary, however this model prefers shorter outputs unless you state the size you want.
 - For creative uses, different quants will produce slightly different output.
 - Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
-- If you use rope to extend context, increase temp AND instructions detail levels to compensate for "rope issues".
 
-This is a LLAMA3 model, and requires Llama3 template, but may work with other template(s) and has maximum context of
-However this can be extended using "rope" settings up to 32k.
+This is a LLAMA3.1 model, and requires the Llama3 template, but may work with other template(s) and has a maximum context of 131k.
 
 If you use "Command-R" template your output will be very different from using "Llama3" template.
 
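The updated note above says this is a LLAMA3.1 model that requires the Llama3 template and carries a 131k maximum context. Below is a hedged sketch of applying the standard Llama 3 chat template by hand, again assuming llama-cpp-python; the filename and n_ctx value are placeholders, and most front ends can apply this template automatically from the GGUF metadata.

```python
# Sketch of the standard Llama 3 / 3.1 chat template applied by hand
# (many front ends apply this automatically from the GGUF metadata).
from llama_cpp import Llama

LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

llm = Llama(
    model_path="model-q4_k_m.gguf",  # hypothetical local quant file
    n_ctx=32768,                     # can be raised toward 131072 if memory allows
    # llama-cpp-python also exposes rope_freq_base / rope_freq_scale kwargs
    # if you experiment with rope, though L3.1 already ships with long context.
)

prompt = LLAMA3_TEMPLATE.format(
    system="You are a horror writer with vivid, visceral prose.",
    user="Write a 500 word scene: the elevator stops between floors.",
)

out = llm(prompt, max_tokens=900, temperature=0.9, repeat_penalty=1.05, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```

Using the Command-R template instead changes the prompt markup completely, which is why the note warns that output will differ sharply between the two templates.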
@@ -148,4 +146,6 @@ Below are the least creative outputs, prompt is in <B>BOLD</B>.
 
 <B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
 
----
+---
+
+To be added...