---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model: 01-ai/Yi-1.5-34B-32K
tags:
- chat
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9yEmnTDG9bcC_bxwuDU6G.png)

## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v3-34b).
This is the ninth in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [Yi-1.5-34B-32K](https://huggingface.co/01-ai/Yi-1.5-34B-32K).

## Prompting
The model has been instruct-tuned using the ChatML format. A typical input looks like this:

```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
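
The same format can be assembled programmatically. A minimal sketch (the helper name is illustrative, not part of the model's tooling):

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into the ChatML format above,
    leaving an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
])
```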

## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

In our testing, a min_p of 0.2 makes the model perform best; remember to reset your temperature if you were previously using our Nemo-based models.

<details><summary>context template</summary>

```json
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum ChatML"
}
```

</details><br>
<details><summary>instruct template</summary>

```json
{
    "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum ChatML"
}
```

</details><br>
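
To illustrate what the min_p = 0.2 recommendation does: min_p sampling keeps only tokens whose probability is at least min_p times the top token's probability, then renormalizes. A sketch of the filtering step (not the actual backend implementation):

```python
import numpy as np

def min_p_filter(probs, min_p=0.2):
    # Keep tokens with probability >= min_p * max(probs), then renormalize.
    probs = np.asarray(probs, dtype=float)
    threshold = min_p * probs.max()
    kept = np.where(probs >= threshold, probs, 0.0)
    return kept / kept.sum()

# With min_p = 0.2 and a top probability of 0.5, the cutoff is 0.1,
# so only the 0.05 token is dropped here.
filtered = min_p_filter([0.5, 0.2, 0.15, 0.1, 0.05], min_p=0.2)
```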

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: 01-ai/Yi-1.5-34B-32K
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

#trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: magnum-v2-34b-1.5-data
val_set_size: 0.0
output_dir: ./magnum-v2-34b-32k-r1

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: magnum-v2-34b-1.5-32k
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000006

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 50
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
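
For context, the effective batch size implied by this config (GPU count is an assumption taken from the Training section, not from the config itself):

```python
micro_batch_size = 1             # from the config above
gradient_accumulation_steps = 8  # from the config above
num_gpus = 8                     # 8x H100, per the Training section
sequence_len = 8192              # from the config above

effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
max_tokens_per_step = effective_batch * sequence_len  # upper bound with sample packing
print(effective_batch, max_tokens_per_step)  # 64 524288
```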

## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has hosted our Magnum models since the first 72B release and has given thousands of people access to our models, helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

- [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/NopmWritingStruct](https://huggingface.co/datasets/lodrick-the-lafted/NopmWritingStruct)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)

## Training
Training ran for 2 epochs of full-parameter fine-tuning on 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/).

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...