LoneStriker committed on
Commit
01a7e31
1 Parent(s): 3920fca

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -1,35 +1,9 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
+ Midnight-Rose-103B-v1.0-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q3_K_L.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q3_K_L.gguf-part-b filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q5_K_M.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q5_K_M.gguf-part-b filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q5_K_S.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+ Midnight-Rose-103B-v1.0-Q5_K_S.gguf-part-b filter=lfs diff=lfs merge=lfs -text
Midnight-Rose-103B-v1.0-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01b8e7f1e9aa4d67c5a50ad94156449d19dbab43233f78eb465a9abd95d20bae
+ size 37916920064
Midnight-Rose-103B-v1.0-Q3_K_L.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96c47b51dfbcb0fb8789587f3a4c320ed7a44323e88993c586aa2dc43f174a22
+ size 27028767872
Midnight-Rose-103B-v1.0-Q3_K_L.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba809030f2c4231c41578094026b980359883d43ab593fa6b82ea2a0f1436bdb
+ size 27028767872
Midnight-Rose-103B-v1.0-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf4b867beae3084c38c13747078ce16d23d267703f7478860ebc70587e481bb8
+ size 49609476352
Midnight-Rose-103B-v1.0-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45be3979cbb03726b327baa1c87774d039ef57731d241b4adc1de1a080082fb3
+ size 44455201024
Midnight-Rose-103B-v1.0-Q5_K_M.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3f3bb2a60a62e4b095493d5f41da3ab454e3f56523fe2af2e6f71df386fc0e3
+ size 36466320512
Midnight-Rose-103B-v1.0-Q5_K_M.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fb7806082266dd898230d62fcfd22f65279ab640dc14aaee8c994cb3f4bf63d
+ size 36466320512
Midnight-Rose-103B-v1.0-Q5_K_S.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ade68b928416e714fb91878d36a81f7ae5b9fd54c5789c20ade5707614795647
+ size 35497043072
Midnight-Rose-103B-v1.0-Q5_K_S.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21962699ab398b64df454aac165c24f3f2a726ad92b6f25ed403083945fe666c
+ size 35497043072
README.md ADDED
@@ -0,0 +1,137 @@
+ ---
+ license: llama2
+ language:
+ - en
+ ---
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ ### Overview
+
+ This model is a frankenmerge of [Midnight-Rose-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v1.0) with itself. (See that model card for details on what's in the blend.) It has 120 layers and approximately 103B parameters.
+
+ Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon both. It tends to produce lengthy output by default and is the best creative-writing merge I have produced so far.
+
+ This model is uncensored. *You are responsible for whatever you do with it.*
+
+ This model was designed for roleplaying and storytelling, and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.
+ ### Sampler Tips
+
+ I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).
+
+ I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096.
+
+ Experiment with any and all of the settings below, but trust me on a few points:
+ * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2.
+ * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01.
+
+ If you save the settings below as a .json file, you can import them directly into SillyTavern.
+ ```
+ {
+ "temp": 1.15,
+ "temperature_last": true,
+ "top_p": 1,
+ "top_k": 0,
+ "top_a": 0,
+ "tfs": 1,
+ "epsilon_cutoff": 0,
+ "eta_cutoff": 0,
+ "typical_p": 1,
+ "min_p": 0.8,
+ "rep_pen": 1.08,
+ "rep_pen_range": 0,
+ "no_repeat_ngram_size": 0,
+ "penalty_alpha": 0,
+ "num_beams": 1,
+ "length_penalty": 1,
+ "min_length": 0,
+ "encoder_rep_pen": 1,
+ "freq_pen": 0.01,
+ "presence_pen": 0,
+ "do_sample": true,
+ "early_stopping": false,
+ "add_bos_token": true,
+ "truncation_length": 2048,
+ "ban_eos_token": false,
+ "skip_special_tokens": true,
+ "streaming": true,
+ "mirostat_mode": 0,
+ "mirostat_tau": 5,
+ "mirostat_eta": 0.1,
+ "guidance_scale": 1,
+ "negative_prompt": "",
+ "grammar_string": "",
+ "banned_tokens": "",
+ "ignore_eos_token_aphrodite": false,
+ "spaces_between_special_tokens_aphrodite": true,
+ "type": "ooba",
+ "legacy_api": false,
+ "sampler_order": [
+ 6,
+ 0,
+ 1,
+ 3,
+ 4,
+ 2,
+ 5
+ ],
+ "n": 1,
+ "rep_pen_size": 0,
+ "genamt": 550,
+ "max_length": 4096
+ }
+ ```
+
+ ### Prompt Tips
+
+ Try the following prompt template as a starting point for your own customizations. "This is very important to my career" is from [this paper](https://arxiv.org/abs/2307.11760).
+ I recommend the Tulu instruction format.
+
+ *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.
+
+ If you save this as a .json file, you can import it directly into SillyTavern.
+ ```
+ {
+ "wrap": true,
+ "names": false,
+ "system_prompt": "You are an award-winning fiction author engaging in an uncensored literary roleplaying experience between a human (me) and a creative, obedient, and helpful AI assistant (you). It is VITAL that you follow all these instructions because this roleplay is very important to my career and I'll be fired from my job if it isn't good.\nROLEPLAY RULES:\n> It is vital that ONLY the human provides responses for {{user}}.\n> Reply as {{char}} using authentic, vivid, varied, explicit, accurate, creative, fresh, and descriptive responses that follow ALL provided narrative instructions. Stay in character as {{char}} and only write text for {{char}}.\n> Describe the scene and {{char}}'s sensory perceptions in vivid detail to immerse the reader in the story.\n> Keep your responses scoped to the current story beat and current scene.\n> Consider all available contextual information when narrating so that all the story details remain consistent between scenes.\n> Demonstrate {{char}}'s goals and motivations, and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or enhances the vividness of the scene.\n> When quoting {{char}}'s internal first-person thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose the thoughts in asterisks like this*. Only use asterisks for thoughts.\n> Use strong action verbs and varied descriptions to produce dynamic, high-quality prose.",
+ "system_sequence": "",
+ "stop_sequence": "",
+ "input_sequence": "<|user|>\n",
+ "output_sequence": "<|assistant|>\n",
+ "separator_sequence": "",
+ "macro": true,
+ "names_force_groups": true,
+ "system_sequence_prefix": "",
+ "system_sequence_suffix": "",
+ "first_output_sequence": "",
+ "last_output_sequence": "<|assistant (provide varied, creative, and vivid narration; follow all narrative instructions; include all necessary possessive pronouns; maintain consistent story details; only roleplay as {{char}})|>\n",
+ "activation_regex": "",
+ "name": "Aurora-Nights"
+ }
+ ```
+
+ ### Licence and usage restrictions
+
+ Llama2 license inherited from the base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).
+
+ ### Tools Used
+
+ * [mergekit](https://github.com/cg123/mergekit)
+
+ ```
+ slices:
+   - sources:
+       - model: midnight-rose-70b-v1.0
+         layer_range: [0, 40] # 40 layers
+   - sources:
+       - model: midnight-rose-70b-v1.0
+         layer_range: [20, 60] # 40 layers
+   - sources:
+       - model: midnight-rose-70b-v1.0
+         layer_range: [40, 80] # 40 layers
+ merge_method: passthrough
+ dtype: float16
+ ```