---
license: agpl-3.0
language:
- en
pipeline_tag: text-generation
base_model:
- nvidia/Mistral-NeMo-Minitron-8B-Base
tags:
- chat
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
---

![](https://huggingface.co/Delta-Vector/Tor-8B/resolve/main/FinalTor8B.jpg)

# These are GGUF quantizations for Tor-8B; for the weights, go [here](https://huggingface.co/Delta-Vector/Tor-8B)

An earlier checkpoint of [Darkens-8B](https://huggingface.co/Delta-Vector/Darkens-8B), trained with the same configuration, that I felt was different enough from its 4-epoch cousin to release. Finetuned on top of Nvidia's pruned/distilled NeMo 8B base, this model aims to have generally good prose and writing while not falling into Claude-isms.

# Quants

GGUF: https://huggingface.co/Delta-Vector/Tor-8B-GGUF

EXL2: https://huggingface.co/Delta-Vector/Tor-8B-EXL2

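For a quick local test of a GGUF file, llama-cpp-python works well. A minimal sketch, assuming llama-cpp-python is installed and a quant has been downloaded from the GGUF repo above (the filename is a placeholder):

```py
from llama_cpp import Llama

# Placeholder path: substitute whichever quant file you downloaded
# from the GGUF repo linked above.
llm = Llama(
    model_path="./tor-8b-q6_k.gguf",
    n_ctx=8192,
    chat_format="chatml",  # matches the ChatML tuning described below
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
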
## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:

```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
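
When running the full weights with transformers, the tokenizer's chat template can build this string for you. A minimal sketch, assuming the Tor-8B tokenizer ships the ChatML chat template:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Tor-8B")

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant"
# turn so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```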
## System Prompting

I would highly recommend using Sao10k's Euryale system prompt, but the "Roleplay Simple" system prompt provided within SillyTavern will work as well.

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```
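
Note that `{{char}}` and `{{user}}` are SillyTavern-style macros; frontends substitute them automatically, but if you drive the model directly you must fill them in yourself. A minimal sketch (the names below are placeholders):

```py
# Template shortened to one line here; use the full system prompt above.
SYSTEM_TEMPLATE = (
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}."
)

def fill_macros(template: str, char: str, user: str) -> str:
    """Substitute the {{char}}/{{user}} macros before sending the prompt."""
    return template.replace("{{char}}", char).replace("{{user}}", user)

system_prompt = fill_macros(SYSTEM_TEMPLATE, char="Astrid", user="Traveler")
print(system_prompt)
```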

## Axolotl config

<details><summary>See axolotl config</summary>

Axolotl version: `0.4.1`
```yaml
base_model: Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
#liger_cross_entropy: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: PRIVATE CLAUDE LOG FILTER
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
    conversation: chatml
chat_template: chatml
shuffle_merged_datasets: false
default_system_message: "You are a helpful assistant that responds to the user."
dataset_prepared_path: /workspace/data/8b-nemo-fft-data
val_set_size: 0.0
output_dir: /workspace/data/8b-nemo-fft-out

sequence_len: 16384
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: 8b-nemoprune-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint: /workspace/workspace/thing
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```

</details><br>
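
For reference, with Axolotl 0.4.x a run from a config like this is typically launched via Accelerate, e.g. `accelerate launch -m axolotl.cli.train config.yaml`; exact paths, the DeepSpeed config, and cluster setup will differ from the values above.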

## Credits

- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
- [Private Claude Log filter](https://google.com)

## Training
The training was done for 4 epochs (this model is the 2-epoch checkpoint). I used 10 x [A40](https://www.nvidia.com/en-us/data-center/a40/) GPUs graciously provided by [Kalomaze](https://huggingface.co/kalomaze) for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)