---
license: gemma
base_model: IntervitensInc/gemma-2-9b-chatml
model-index:
- name: magnum-v3-9b-chatml
  results: []
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9ZBUlmzDCnNmQEdUUbyEL.png)

## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml).

This is the 11th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [IntervitensInc/gemma-2-9b-chatml](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml), a ChatML-ified gemma-2-9b.

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

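The format above can be rendered programmatically with a tiny helper (a sketch for illustration; `to_chatml` is a hypothetical function, not part of any official API):

```python
# Minimal sketch that renders a message list into the ChatML format shown
# above. `to_chatml` is a hypothetical helper, not an official API.
def to_chatml(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
])
print(prompt)
```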
## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

<details><summary>context template</summary>

```json
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum ChatML"
}
```

</details><br>
<details><summary>instruct template</summary>

```json
{
    "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum ChatML"
}
```

</details><br>
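Both templates are plain JSON, so they can be sanity-checked with a short script before importing into SillyTavern. This sketch embeds only the sequence-related keys of the instruct template above and verifies that every turn suffix begins with the stop sequence, so generation halts exactly where a turn ends:

```python
import json

# Subset of the instruct template above, embedded for a standalone check.
instruct = json.loads(r"""
{
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "stop_sequence": "<|im_end|>",
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n"
}
""")

# Each suffix should start with the stop sequence.
for key in ("output_suffix", "input_suffix", "system_suffix"):
    assert instruct[key].startswith(instruct["stop_sequence"])
```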

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: IntervitensInc/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

#trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml

shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: magnum-v3-9b-data-chatml
val_set_size: 0.0
output_dir: ./magnum-v3-9b-chatml

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: magnum-9b
wandb_entity:
wandb_watch:
wandb_name: attempt-04-chatml
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000006

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
eager_attention: true

warmup_steps: 50
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
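For reference, the per-step numbers in the config multiply out to the effective global batch size, assuming the 8-GPU setup described in the Training section:

```python
# Effective global batch size implied by the Axolotl config above,
# assuming the 8x H100 setup described in the Training section.
micro_batch_size = 1             # per-GPU batch, from the config
gradient_accumulation_steps = 8  # from the config
num_gpus = 8                     # assumption, from the Training section
effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 64
```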

## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this train. Featherless has hosted our Magnum models since the first 72B, giving thousands of people access to our models and helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

- [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)

## Training
The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...