DavidAU committed on
Commit
4774686
1 Parent(s): 07adc00

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -44,7 +44,7 @@ pipeline_tag: text-generation
 
 <img src="dark-p-infinite.jpg" style="float:right; width:300px; height:300px; padding:10px;">
 
-It is a LLama3 model, max context of 8192 (or 32k+ with rope) using mixture of experts to combine FOUR "Dark Planet"
+It is a LLama3 model, max context of 8192 (or 32k+ with rope) using mixture of experts to combine Dark/Horror models
 models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 X 8 B).
 
 This model's instruction following, and output generation for creative writing, prose, fiction and role play are exceptional.
@@ -96,6 +96,13 @@ Example outputs below.
 
 This model is comprised of the following 4 models ("the experts") (in full):
 
+[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
+
+-[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]
+-[ https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS ]
+-[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
+-[ https://huggingface.co/nbeerbower/llama-3-gutenberg-8B ]
+
 The mixture of experts is set at 2 experts, but you can use 3 or 4 too.
 
 This "team" has a Captain (first listed model), and then all the team members contribute to the "token"
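
The "set at 2 experts, but you can use 3 or 4" note refers to top-k expert routing: a gate scores every expert per token, and only the k highest-scoring experts contribute to the output. As a rough illustration only (not this model's actual implementation — the experts and gate here are random placeholder matrices), a minimal top-k MoE forward pass in NumPy:

```python
import numpy as np

def moe_forward(x, expert_weights, gate_weights, k=2):
    """Route input x through the top-k of n experts (toy linear experts)."""
    logits = x @ gate_weights                  # gating scores, one per expert
    topk = np.argsort(logits)[-k:]             # indices of the k best-scoring experts
    scores = np.exp(logits[topk] - logits[topk].max())
    scores /= scores.sum()                     # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; unselected experts are skipped.
    return sum(s * (x @ expert_weights[i]) for s, i in zip(scores, topk))

rng = np.random.default_rng(0)
n_experts, d = 4, 8                            # 4 experts, toy hidden size 8
experts = rng.normal(size=(n_experts, d, d))   # placeholder expert weight matrices
gate = rng.normal(size=(d, n_experts))         # placeholder gating matrix
x = rng.normal(size=d)

y2 = moe_forward(x, experts, gate, k=2)        # default: 2 experts active
y4 = moe_forward(x, experts, gate, k=4)        # all four experts contribute
```

Raising k trades speed for quality: more experts are evaluated per token, so inference is slower, but more of the combined models' weights influence each output.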