Text Generation
Transformers
GGUF
English
mergekit
Mixture of Experts
mixture of experts
Merge
4x8B
Llama3 MOE
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
rp
horror
Inference Endpoints
conversational
Update README.md
README.md CHANGED

@@ -44,7 +44,7 @@ pipeline_tag: text-generation
 
 <img src="dark-p-infinite.jpg" style="float:right; width:300px; height:300px; padding:10px;">
 
-It is a LLama3 model, max context of 8192 (or 32k+ with rope) using mixture of experts to combine
+It is a LLama3 model, max context of 8192 (or 32k+ with rope) using mixture of experts to combine Dark/Horror models
 models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 X 8 B).
 
 This model's instruction following, and output generation for creative writing, prose, fiction and role play are exceptional.
@@ -96,6 +96,13 @@ Example outputs below.
 
 This model is comprised of the following 4 models ("the experts") (in full):
 
+[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
+
+-[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2]
+-[ https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS ]
+-[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
+-[ https://huggingface.co/nbeerbower/llama-3-gutenberg-8B ]
+
 The mixture of experts is set at 2 experts, but you can use 3 or 4 too.
 
 This "team" has a Captain (first listed model), and then all the team members contribute to the to "token"
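The README line kept as context above, "The mixture of experts is set at 2 experts, but you can use 3 or 4 too," can be acted on from transformers. Below is a minimal sketch, assuming the merge was exported by mergekit-moe as a Mixtral-style checkpoint (so the active-expert count lives in `num_experts_per_tok`); the repository id is a placeholder, not this model's actual name.

```python
# Hedged sketch: load a mergekit-moe 4x8B merge and raise the number of
# experts routed per token from the card's default of 2 to 3.
# Assumes a Mixtral-style architecture; the repo id below is a placeholder.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "DavidAU/placeholder-4x8B-MOE"  # placeholder, replace with the real repo id

config = AutoConfig.from_pretrained(repo)
config.num_experts_per_tok = 3  # card default is 2; 3 or 4 also work per the README

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    config=config,
    torch_dtype=torch.bfloat16,  # the card tags the weights as bfloat16
    device_map="auto",
)

prompt = "Write the opening scene of a gothic horror story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Routing each token through more of the 8B experts generally costs some speed in exchange for a different blend of the team's styles, which is the trade-off the card is pointing at.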
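The repo also ships GGUF quants, and the diff mentions a max context of 8192 (or 32k+ with rope). Here is a hedged sketch of loading a quant with llama-cpp-python and applying linear rope scaling; the filename and the 0.25 scaling factor (8192 native over a 32768 target) are illustrative assumptions, not values taken from the repository.

```python
# Hedged sketch: run a GGUF quant of the MOE with llama-cpp-python and stretch
# the 8192-token native context toward 32k with linear rope scaling.
# The filename and scaling factor are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-4x8b-moe.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,           # target context window
    rope_freq_scale=0.25,  # linear scaling: 8192 native / 32768 target
    n_gpu_layers=-1,       # offload all layers if VRAM allows
)

out = llm(
    "Continue this scene: the lights in the house went out, one by one...",
    max_tokens=400,
    temperature=0.9,
)
print(out["choices"][0]["text"])
```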