Text Generation
Safetensors
qwen2
chat
conversational
Eval Results
4-bit precision
luigi86 committed
Commit
e98fa4c
1 Parent(s): 762caa8

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,201 @@
---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: other
tags:
- chat
base_model: Qwen/Qwen2-72B-Instruct
datasets:
- Doctor-Shotgun/C2-Stheno
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
license_name: tongyi-qianwen
license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: magnum-v2-72b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 75.6
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 57.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 31.65
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.12
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.18
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-72b
      name: Open LLM Leaderboard
---

# MLX Format and Quantizations for Magnum v2 72b

Quantized to 4 bpw precision and tested using the `mlx_lm` utility on a 64 GiB unified-memory (URAM) M1 Max.
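
A minimal sketch of running this quant with the `mlx_lm` Python API. The model path below is illustrative, and the exact `generate()` keyword arguments can vary between `mlx_lm` releases, so check your installed version:

```python
# Sketch only: load the 4-bit MLX weights and generate a short reply.
from mlx_lm import load, generate

# Point this at your local copy of the quantized folder (path is illustrative).
model, tokenizer = load("./magnum-v2-72b-mlx-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi there!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```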

## Usage notes

Requires Apple Silicon and is optimized for it. Fast enough for rapid back-and-forth chat as long as the model fits in your unified memory.

I tried to serve this with `mlx_lm.serve` as usual, but I got Python string-indexing errors no matter what I did. It works fine with LM Studio in OpenAI-compatible mode; see the request sketch below.

I used this with SillyTavern and it worked well.
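
For reference, a minimal request against an OpenAI-compatible local server such as LM Studio. The URL, port, and model name are assumptions; adjust them to your setup:

```python
# Sketch: chat completion request to an OpenAI-compatible local endpoint.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",  # LM Studio's usual default; yours may differ
    json={
        "model": "magnum-v2-72b",  # whatever identifier your server exposes
        "messages": [{"role": "user", "content": "Can I ask a question?"}],
        "max_tokens": 256,
        "temperature": 0.8,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```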

See the [original model](https://huggingface.co/anthracite-org/magnum-v2-72b) for further details.

Larger 8 bpw quants are available at [mlx-community](https://huggingface.co/mlx-community/magnum-v2-72b).
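
If you want to produce a quant like this yourself (at this or another bit-width), `mlx_lm`'s convert utility can do it. A sketch, assuming recent `mlx_lm` parameter names; the output directory name is illustrative:

```python
# Sketch: convert the original HF weights to a 4-bit MLX quant.
from mlx_lm import convert

convert(
    hf_path="anthracite-org/magnum-v2-72b",
    mlx_path="magnum-v2-72b-mlx-4bit",  # output directory (illustrative name)
    quantize=True,
    q_bits=4,         # matches "bits": 4 in this repo's config.json
    q_group_size=64,  # matches "group_size": 64
)
```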

# Original Model card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/u8B-5bEeroN549uxUIisV.png)

This is the seventh (Lucky!) in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
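
The bundled chat template (see `tokenizer_config.json` in this commit) renders exactly this format, so in practice you can build prompts from a message list rather than hand-assembling the tags. A sketch using `transformers`; the repo id points at the original model and is interchangeable with a local copy of this quant:

```python
# Sketch: render the ChatML prompt above from a message list via the chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("anthracite-org/magnum-v2-72b")
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```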

## Credits
- [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)

This model has been a team effort, and the credit goes to all members of Anthracite.

## Training
The training was done for 2 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model.

We also trained with a weight decay of 0.01 to help stabilize the loss trajectory and mitigate catastrophic forgetting, and used a peak learning rate of 4e-6 to prevent the second-epoch loss from dropping too sharply (a strong indicator of overfitting).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hVd5gNqSLOlWTkUb0A7iE.png)

Sample packing was done at 16k tokens rather than the 8k tokens used in our previous runs.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
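
For illustration only, these hyperparameters would map onto an Axolotl-style configuration roughly as follows. This is not the authors' actual training config; the keys follow common Axolotl conventions and the values are taken from the paragraph above:

```python
# Illustrative sketch, not the real training config. Dump to YAML for Axolotl.
import yaml

axolotl_cfg = {
    "base_model": "Qwen/Qwen2-72B-Instruct",
    "sequence_len": 16384,   # 16k-token sample packing
    "sample_packing": True,
    "num_epochs": 2,
    "learning_rate": 4e-6,   # peak learning rate
    "weight_decay": 0.01,
}
print(yaml.safe_dump(axolotl_cfg, sort_keys=False))
```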

## Safety
...
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_anthracite-org__magnum-v2-72b).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 41.15 |
| IFEval (0-Shot)     | 75.60 |
| BBH (3-Shot)        | 57.85 |
| MATH Lvl 5 (4-Shot) | 31.65 |
| GPQA (0-shot)       | 18.12 |
| MuSR (0-shot)       | 14.18 |
| MMLU-PRO (5-shot)   | 49.51 |
added_tokens.json ADDED
@@ -0,0 +1,5 @@
{
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644
}
config.json ADDED
@@ -0,0 +1,34 @@
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 29568,
  "max_position_embeddings": 32768,
  "max_window_layers": 80,
  "model_type": "qwen2",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "quantization": {
    "group_size": 64,
    "bits": 4
  },
  "quantization_config": {
    "group_size": 64,
    "bits": 4
  },
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.43.4",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 152064
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf6cb2336bd6567bc5602cb02971b9ff971aacba656fc51fb6305ba1c80e8b90
size 5365567669
model-00002-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:422eef2c7933a01c9268c5885b3e7356394372f1b9d8117cf44b77a13c72d7e4
size 5294878254
model-00003-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:459afd45085e438cf2bad806f11e0cbd4913713b4c5b34ab7c66731d401febbe
size 5346171097
model-00004-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8e53cdaf20a46968290a2cf0922eee000a733e5f3aa7844ccdf8437ea956810
size 5294845297
model-00005-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a3fdca3bd8c4a87867abe3b8e1fd57e4d77f39b5b58ed9743f1c7cf7268b3fe
size 5294878217
model-00006-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d5de7441a2a58450c5b700216a203b3a4efb44858d8ded4b3534b8648e147855
size 5294878204
model-00007-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed2646d3348f270cf9820272ab0ec78517147b001620a2cf8eb3bba476418356
size 5346171091
model-00008-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22a7846478c49fedf03ca7415e0e363295795d43e7c816856a3c9e5738d4edff
size 3663161100
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcfe42da0a4497e8b2b172c1f9f4ec423a46dc12907f4349c55025f670422ba9
size 11418266
tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": null,
  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff