---

tags:
- vllm
- sparsity
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Llama-3.1-8B

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)


# QuantFactory/Sparse-Llama-3.1-8B-2of4-GGUF
This is a quantized version of [neuralmagic/Sparse-Llama-3.1-8B-2of4](https://huggingface.co/neuralmagic/Sparse-Llama-3.1-8B-2of4) created using llama.cpp.
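
The GGUF files in this repository can be run with llama.cpp or any of its bindings. Below is a minimal sketch using the llama-cpp-python bindings; the filename is a placeholder, so substitute whichever quantization variant you actually downloaded from this repository.

```python
from llama_cpp import Llama

# NOTE: the filename below is a placeholder; point it at the GGUF file
# (quantization variant) you downloaded from this repository.
llm = Llama(
    model_path="Sparse-Llama-3.1-8B-2of4.Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

output = llm("The capital of France is", max_tokens=32)
print(output["choices"][0]["text"])
```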

# Original Model Card


# Sparse-Llama-3.1-8B-2of4

## Model Overview
- **Model Architecture:** Llama-3.1-8B
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Sparsity:** 2:4
- **Release Date:** 11/20/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic

This is the 2:4 sparse version of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
On the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model—demonstrating a **98.37% accuracy recovery**. On the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model—representing a **97.3% accuracy recovery**.


### Model Optimizations

This model was obtained by pruning all linear operators within transformer blocks to the 2:4 sparsity pattern: in each group of four weights, two are retained while two are pruned. In addition to pruning, the sparse model was trained with knowledge distillation for 13B tokens to recover the accuracy loss incurred by pruning. For pruning, we utilized an optimized version of [SparseGPT](https://arxiv.org/abs/2301.00774) through [LLM-Compressor](https://github.com/vllm-project/llm-compressor), and for sparse training with knowledge distillation we utilized the [SquareHead approach](https://arxiv.org/abs/2310.06927).
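
For illustration only, the sketch below shows what the 2:4 pattern means at the weight level: in every contiguous group of four weights along the input dimension, two values are zeroed out. The simple magnitude rule used here is not the actual pruning criterion; SparseGPT selects which weights to prune with a loss-aware method.

```python
import torch

def apply_2of4_mask(weight: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude weights in every group of four (illustrative only)."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "2:4 sparsity groups weights along the input dimension"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Indices of the two smallest-magnitude weights in each group of four.
    _, prune_idx = groups.abs().topk(2, dim=-1, largest=False)
    mask = torch.ones_like(groups)
    mask.scatter_(-1, prune_idx, 0.0)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = apply_2of4_mask(w)
print((w_sparse == 0).float().mean().item())  # 0.5 -- exactly half of the weights are zero
```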


## Deployment with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend. vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
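
A minimal offline-inference sketch is shown below. Note that vLLM consumes the original sparse checkpoint rather than the GGUF files in this repository; the prompt and sampling parameters are arbitrary examples.

```python
from vllm import LLM, SamplingParams

model_id = "neuralmagic/Sparse-Llama-3.1-8B-2of4"

# Load the sparse checkpoint and generate a completion for a sample prompt.
llm = LLM(model=model_id)
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

prompts = ["Explain 2:4 structured sparsity in one paragraph."]
outputs = llm.generate(prompts, sampling_params)
print(outputs[0].outputs[0].text)
```

For OpenAI-compatible serving, the same checkpoint can be hosted with `vllm serve neuralmagic/Sparse-Llama-3.1-8B-2of4`.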


## Evaluation

This model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1) with the [vLLM](https://docs.vllm.ai/en/stable/) engine for faster inference. In addition to the OpenLLM benchmark, the model was evaluated on the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3). The evaluation results are summarized below.
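
For reference, a hedged sketch of how such an evaluation can be run with the lm-evaluation-harness Python API (v0.4+) and its vLLM backend is shown below; the task selection and few-shot setting are illustrative and may not match the exact configuration used to produce the reported scores.

```python
import lm_eval

# Illustrative configuration only; the reported scores use benchmark-specific
# few-shot counts (e.g. 25-shot ARC-C, 10-shot HellaSwag) rather than a single value.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=neuralmagic/Sparse-Llama-3.1-8B-2of4,dtype=auto",
    tasks=["arc_challenge", "hellaswag", "winogrande", "gsm8k"],
    num_fewshot=5,
)
print(results["results"])
```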


### Accuracy

#### Open LLM Leaderboard evaluation scores


<table>
    <tr>
        <td><strong>Benchmark</strong></td>
        <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
        <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
    </tr>
    <tr>
        <td>ARC-C (25-shot)</td>
        <td style="text-align: center">58.2</td>
        <td style="text-align: center">59.4</td>
    </tr>
    <tr>
        <td>MMLU (5-shot)</td>
        <td style="text-align: center">65.4</td>
        <td style="text-align: center">60.6</td>
    </tr>
    <tr>
        <td>HellaSwag (10-shot)</td>
        <td style="text-align: center">82.3</td>
        <td style="text-align: center">79.8</td>
    </tr>
    <tr>
        <td>WinoGrande (5-shot)</td>
        <td style="text-align: center">78.3</td>
        <td style="text-align: center">75.9</td>
    </tr>
    <tr>
        <td>GSM8K (5-shot)</td>
        <td style="text-align: center">50.7</td>
        <td style="text-align: center">56.3</td>
    </tr>
    <tr>
        <td>TruthfulQA (0-shot)</td>
        <td style="text-align: center">44.2</td>
        <td style="text-align: center">40.9</td>
    </tr>
    <tr>
        <td><strong>Average Score</strong></td>
        <td style="text-align: center"><strong>63.19</strong></td>
        <td style="text-align: center"><strong>62.16</strong></td>
    </tr>
    <tr>
        <td><strong>Accuracy Recovery (%)</strong></td>
        <td style="text-align: center"><strong>100</strong></td>
        <td style="text-align: center"><strong>98.37</strong></td>
    </tr>
</table>


#### Mosaic Eval Gauntlet evaluation scores

<table>
    <tr>
        <td><strong>Benchmark</strong></td>
        <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
        <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
    </tr>
    <tr>
        <td>World Knowledge</td>
        <td style="text-align: center">59.4</td>
        <td style="text-align: center">55.6</td>
    </tr>
    <tr>
        <td>Commonsense Reasoning</td>
        <td style="text-align: center">49.3</td>
        <td style="text-align: center">50.0</td>
    </tr>
    <tr>
        <td>Language Understanding</td>
        <td style="text-align: center">69.8</td>
        <td style="text-align: center">69.0</td>
    </tr>
    <tr>
        <td>Symbolic Problem Solving</td>
        <td style="text-align: center">40.0</td>
        <td style="text-align: center">37.1</td>
    </tr>
    <tr>
        <td>Reading Comprehension</td>
        <td style="text-align: center">58.2</td>
        <td style="text-align: center">57.5</td>
    </tr>
    <tr>
        <td><strong>Average Score</strong></td>
        <td style="text-align: center"><strong>55.34</strong></td>
        <td style="text-align: center"><strong>53.85</strong></td>
    </tr>
    <tr>
        <td><strong>Accuracy Recovery (%)</strong></td>
        <td style="text-align: center"><strong>100</strong></td>
        <td style="text-align: center"><strong>97.3</strong></td>
    </tr>
</table>