DavidAU committed
Commit 3d0c635
Parent: eb70692

Update README.md

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -44,13 +44,15 @@ This model has been designed to be relatively bullet proof and operates with all
 
  It is an extraordinary compressed model.
 
- This model differs from orginal "Dark Planet 8B" as follows:
+ This model differs from original "Dark Planet 8B" as follows:
 
  - 12 layers were added to the base models
  - Llama3 instruct was replaced with Llama 3.1 instruct
  - All of the "extended" models (changed from 8b to 10.7B) were "DARE-TIED" together in a framework re-arranging the duplicate layers and replacing these carefully.
 
- These changes result in longer output, longer context, and a slight uptick in fuction of the model.
+ These changes result in longer output, longer context, and a slight uptick in function of the model.
+
+ This is the first version using these extension techniques, with more to follow.
 
  It is for any writing, fiction or roleplay activity.
 
@@ -67,9 +69,9 @@ Example outputs below.
  - If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
  - A lot of GPTisms have been removed. There are still a few however - errrrr.
  - This is not a "happy ever after" model. It has a negative bias.
- - Output length will vary however this model prefers shortly outputs unless you state the size.
+ - Output length will vary however this model prefers LONG to VERY LONG outputs unless you state the size or set the maximum output.
  - For creative uses, different quants will produce slightly different output.
- - Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
+ - Due to the stability and compressed nature of this model, all quants will operate at above average levels.
 
  This is a LLAMA3.1 model, and requires Llama3 template, but may work with other template(s) and has maximum context of 131k.
 
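
The diff above describes the construction technique only in prose (8B donors extended to 10.7B by duplicating layers, then the extended variants combined with DARE-TIES). As a rough illustration of what that kind of merge can look like, the sketch below drives mergekit's dare_ties method from Python; every model name, density, and weight here is a placeholder assumption, not the author's actual recipe.

```python
# Hypothetical sketch of a DARE-TIES merge with mergekit; placeholder names/values only.
import subprocess
from pathlib import Path

# Minimal mergekit config: combine two already layer-extended 10.7B variants
# (placeholder names) back onto a common extended base with DARE-TIES.
config = """\
merge_method: dare_ties
base_model: extended-llama-3.1-base-10.7B     # placeholder
models:
  - model: extended-dark-planet-variant-A     # placeholder
    parameters:
      density: 0.5   # fraction of delta weights kept (DARE drop-and-rescale)
      weight: 0.5    # mixing weight for this donor
  - model: extended-dark-planet-variant-B     # placeholder
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
"""

Path("dare_ties.yaml").write_text(config)

# mergekit-yaml <config> <output-dir> runs the merge described in the config.
subprocess.run(["mergekit-yaml", "dare_ties.yaml", "./merged-10.7B"], check=True)
```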
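
The usage notes (Llama 3 instruct template, long default outputs unless capped, steering prose with tags like "(vivid horror)") can be exercised with a short llama-cpp-python sketch like the one below, assuming a GGUF quant of the model is on disk; the file name, context size, prompt, and sampling settings are illustrative, not values taken from the model card.

```python
# Illustrative generation setup for a GGUF quant of this model (filename is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="dark-planet-10.7B-Q4_K_M.gguf",  # placeholder quant file
    n_ctx=16384,            # the model supports up to 131k context; smaller here to save RAM
    chat_format="llama-3",  # the README says a Llama 3 template is required
)

messages = [
    {"role": "system", "content": "You are a vivid fiction writer."},
    {
        "role": "user",
        # "(vivid horror)" steers the prose style, per the README's prompt tip.
        "content": "(vivid horror) Write an opening scene set in an abandoned lighthouse.",
    },
]

out = llm.create_chat_completion(
    messages=messages,
    max_tokens=1024,   # cap the output, since the model defaults to long generations
    temperature=0.9,
)

print(out["choices"][0]["message"]["content"])
```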