Quantization by Richard Erkhov.
magnum-v1-72b - GGUF
- Model creator: https://huggingface.co/anthracite-org/
- Original model: https://huggingface.co/anthracite-org/magnum-v1-72b/
| Name | Quant method | Size |
| --- | --- | --- |
| magnum-v1-72b.Q2_K.gguf | Q2_K | 27.76GB |
| magnum-v1-72b.IQ3_XS.gguf | IQ3_XS | 30.59GB |
| magnum-v1-72b.IQ3_S.gguf | IQ3_S | 32.12GB |
| magnum-v1-72b.Q3_K_S.gguf | Q3_K_S | 32.12GB |
| magnum-v1-72b.IQ3_M.gguf | IQ3_M | 33.07GB |
| magnum-v1-72b.Q3_K.gguf | Q3_K | 35.11GB |
| magnum-v1-72b.Q3_K_M.gguf | Q3_K_M | 35.11GB |
| magnum-v1-72b.Q3_K_L.gguf | Q3_K_L | 36.79GB |
| magnum-v1-72b.IQ4_XS.gguf | IQ4_XS | 37.4GB |
| magnum-v1-72b.Q4_0.gguf | Q4_0 | 38.4GB |
| magnum-v1-72b.IQ4_NL.gguf | IQ4_NL | 38.9GB |
| magnum-v1-72b.Q4_K_S.gguf | Q4_K_S | 40.88GB |
| magnum-v1-72b.Q4_K.gguf | Q4_K | 44.16GB |
| magnum-v1-72b.Q4_K_M.gguf | Q4_K_M | 44.16GB |
| magnum-v1-72b.Q4_1.gguf | Q4_1 | 42.56GB |
| magnum-v1-72b.Q5_0.gguf | Q5_0 | 46.72GB |
| magnum-v1-72b.Q5_K_S.gguf | Q5_K_S | 47.85GB |
| magnum-v1-72b.Q5_K.gguf | Q5_K | 50.71GB |
| magnum-v1-72b.Q5_K_M.gguf | Q5_K_M | 50.71GB |
| magnum-v1-72b.Q5_1.gguf | Q5_1 | 50.88GB |
| magnum-v1-72b.Q6_K.gguf | Q6_K | 10.2GB |
| magnum-v1-72b.Q8_0.gguf | Q8_0 | 71.96GB |
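The sketch below shows one way to fetch a single quant file from the Hub and load it with llama-cpp-python. The `repo_id` is an assumption based on this uploader's usual naming scheme; adjust it to the actual repository, and note that the largest quants may be sharded into multiple files.

```python
# Minimal sketch: download one quant and load it with llama-cpp-python.
# repo_id below is an assumption, not confirmed by this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf",  # assumed repo id
    filename="magnum-v1-72b.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise if you need longer prompts
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)
```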
Original model description:
---
language:
- en
- zh
license: other
tags:
- chat
base_model: Qwen/Qwen2-72B-Instruct
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: magnum-72b-v1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 76.06
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 57.65
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 35.27
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.79
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.62
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.64
      name: accuracy
    - type: acc
      value: 49.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1
      name: Open LLM Leaderboard
---
This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen-2 72B Instruct.
Prompting
The model has been instruct-tuned using the ChatML format. A typical input looks like this:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
Credits
This model has been a team effort, and credit goes to all members of Anthracite.
We'd also like to thank Kearm for sponsoring the compute needed to train this model.
Training
Training used 55 million tokens of high-quality RP data over 1.5 epochs. We used 8x AMD Instinct™ MI300X accelerators for full-parameter fine-tuning of the model.
Safety
...
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
| --- | --- |
| Avg. | 42.17 |
| IFEval (0-Shot) | 76.06 |
| BBH (3-Shot) | 57.65 |
| MATH Lvl 5 (4-Shot) | 35.27 |
| GPQA (0-shot) | 18.79 |
| MuSR (0-shot) | 15.62 |
| MMLU-PRO (5-shot) | 49.64 |