Kotokin committed bdc482b (parent: 60dfcb1): Upload 9 files
README.md ADDED
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- merge
- roleplay
- exl2
- not-for-all-audiences
---
ORIGINAL MODEL LINK: https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B

Hi, this is the RP-Stew-V2 model enlarged to 120 layers. To be honest, I don't know why, but someone might need it. I'm still testing it myself against the original.
I will post a 4-bit exl2 quantization soon.
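For those curious how an enlargement like this is usually done: it is typically a mergekit `passthrough` self-merge that stacks overlapping layer ranges of the source model. Below is a purely illustrative sketch — the actual recipe for this upload is not published here, and these layer ranges are hypothetical (four overlapping 30-layer slices of a 60-layer Yi-34B-based model would give 120 layers):

```yaml
# Hypothetical mergekit passthrough self-merge.
# Layer ranges are illustrative only; the real split may differ.
slices:
  - sources:
      - model: ParasiticRogue/Merged-RP-Stew-V2-34B
        layer_range: [0, 30]
  - sources:
      - model: ParasiticRogue/Merged-RP-Stew-V2-34B
        layer_range: [10, 40]
  - sources:
      - model: ParasiticRogue/Merged-RP-Stew-V2-34B
        layer_range: [20, 50]
  - sources:
      - model: ParasiticRogue/Merged-RP-Stew-V2-34B
        layer_range: [30, 60]
merge_method: passthrough
dtype: bfloat16
```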
15
+
16
+
17
+ # Merged-Vicuna-RP-Stew-68B
18
+
19
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
20
+
21
+ ## Merge Details
22
+
23
+ New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.
24
+
25
+ Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!
26
+
### Settings

Temperature @ 0.93

Min-P @ 0.02

Typical-P @ 0.9

Repetition Penalty @ 1.07

Repetition Range @ 2048

Smoothing Factor @ 0.39

Smoothing Curve @ 2

Everything else @ off

Early Stopping = X

Do Sample = ✓

Add BOS Token = X

Ban EOS Token = ✓

Skip Special Tokens = ✓

Temperature Last = ✓

Custom Stopping Strings: "< / s >" (<--- without spaces)
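The settings above can be collected into a single preset. Here is a sketch using text-generation-webui-style parameter names — the exact key names are an assumption and depend on which frontend or API you actually use:

```python
# Suggested sampler preset for Merged-Vicuna-RP-Stew-68B.
# Key names follow text-generation-webui conventions (an assumption);
# adapt them to your frontend before use.
preset = {
    "temperature": 0.93,
    "min_p": 0.02,
    "typical_p": 0.9,
    "repetition_penalty": 1.07,
    "repetition_penalty_range": 2048,
    "smoothing_factor": 0.39,
    "smoothing_curve": 2.0,
    "early_stopping": False,
    "do_sample": True,
    "add_bos_token": False,
    "ban_eos_token": True,
    "skip_special_tokens": True,
    "temperature_last": True,
    # The stop string is "</s>" without spaces; spaces above were only
    # added in the card so it renders visibly.
    "custom_stopping_strings": ["</s>"],
}
```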
58
+
59
+ However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.

---

You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!

<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.
* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.
* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.
* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilize quirks and speech patterns distinctive to them for increased lifelike scenarios.
* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead, {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.
* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.
* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Instead, prefer being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.
* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.
* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.
* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.

---
Fun little addition you can append to the end of the 2nd commandment if you want your characters to act more lifelike in SillyTavern (or possibly elsewhere):

...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').

It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after a while. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') keeps it hidden for immersion's sake. It's important to put it at the beginning of their message, rather than at the end, so it can serve as a guide for them.
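Why this stays hidden: [](#' ... ') is an empty markdown link — no label, and the "URL" is just a fragment holding the note — so most renderers display nothing. A tiny helper sketch for building the container (the function name is made up for illustration):

```python
def inner_voice(thought: str) -> str:
    """Wrap a character's private thought in the hidden markdown
    container [](#' ... '), which renders as nothing in most viewers."""
    return f"[](#' {thought} ')"
```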
For settings that are more *in depth*, try this:

https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B-exl2-4.65/discussions/1?not-for-all-audiences=true

### Prompt Format: Chat-Vicuna

```
SYSTEM:
{system_prompt}<|im_end|>
USER:
{prompt}<|im_end|>
ASSISTANT:
{output}<|im_end|>
```
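If you are calling the model through a raw completion API rather than a frontend with template support, the format above can be assembled like this (a minimal sketch; the function name is ours):

```python
def chat_vicuna_prompt(system_prompt: str,
                       turns: list[tuple[str, str]]) -> str:
    """Build a Chat-Vicuna prompt string following the template above.

    `turns` is a list of (user_message, assistant_reply) pairs; pass an
    empty string as the final reply to leave the prompt open for the
    model to generate the next ASSISTANT turn.
    """
    parts = [f"SYSTEM:\n{system_prompt}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER:\n{user_msg}<|im_end|>")
        if assistant_msg:
            parts.append(f"ASSISTANT:\n{assistant_msg}<|im_end|>")
        else:
            parts.append("ASSISTANT:\n")
    return "\n".join(parts)
```

Note there are no im_start tokens anywhere, matching the template, and the `add_bos_token`/`ban_eos_token` settings above handle the special tokens on the sampler side.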

Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens and with the role names capitalized. It's a compromise that keeps the model both creative and under control, pulling from both sources. It works in testing, but you can use the vanilla version of either format if you *really* want to.

### Models Merged

The following models were included in the merge:

https://huggingface.co/NousResearch/Nous-Capybara-34B

https://huggingface.co/migtissera/Tess-34B-v1.5b

https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2

https://huggingface.co/maywell/PiVoT-SUS-RP

https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama

https://huggingface.co/NeverSleep/CausalLM-RP-34B

https://huggingface.co/chargoddard/Yi-34B-200K-Llama
model-00021-of-00024.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ea8069f72e2504583cf3dff238277f036d654a0c0eed355d81dcec3cfc65f0b6
size 5930923184
model-00022-of-00024.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:09b37583a2b5ee77230a0573377bad64351ba736402fead369b07486dbfd46b8
size 5967637528
model-00023-of-00024.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f2532151a50e8feb6baf17b0b7a929d8310d4c07e0dd4bc18512773981fc1b0b
size 5872203312
model-00024-of-00024.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:2388deff4bf536942ab80924db788033c57bbd99832bd5cd801ea089cf4b460a
size 587231720
special_tokens_map.json ADDED
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
size 1033105
tokenizer_config.json ADDED
{
  "add_bos_token": false,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|startoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "legacy": false,
  "model_max_length": 200000,
  "pad_token": "<unk>",
  "padding_side": "right",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "truncation_side": "right",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}