---

tags:
- vllm
- sparsity
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Llama-3.1-8B

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Sparse-Llama-3.1-8B-2of4-GGUF

This is a quantized version of [neuralmagic/Sparse-Llama-3.1-8B-2of4](https://huggingface.co/neuralmagic/Sparse-Llama-3.1-8B-2of4), created using llama.cpp.

# Original Model Card

# Sparse-Llama-3.1-8B-2of4

## Model Overview
- **Model Architecture:** Llama-3.1-8B
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Sparsity:** 2:4
- **Release Date:** 11/20/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic

This is the 2:4 sparse version of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
On the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model, demonstrating a **98.37% accuracy recovery**. On the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model, representing a **97.3% accuracy recovery**.

### Model Optimizations

This model was obtained by pruning all linear operators within the transformer blocks to the 2:4 sparsity pattern: in each group of four weights, two are retained and two are pruned. In addition to pruning, the sparse model was trained with knowledge distillation on 13B tokens to recover the accuracy loss incurred by pruning. For pruning, we use an optimized version of [SparseGPT](https://arxiv.org/abs/2301.00774) through [LLM-Compressor](https://github.com/vllm-project/llm-compressor), and for sparse training with knowledge distillation we use the [SquareHead approach](https://arxiv.org/abs/2310.06927).

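The 2:4 pattern described above can be verified mechanically: in every contiguous group of four weights, at most two are nonzero. A minimal sketch with NumPy (the function `satisfies_2of4` is illustrative, not part of any library):

```python
import numpy as np

def satisfies_2of4(weights: np.ndarray) -> bool:
    """Return True if every contiguous group of 4 weights has at most 2 nonzeros."""
    groups = weights.reshape(-1, 4)  # view the weights as groups of four
    return bool(np.all(np.count_nonzero(groups, axis=1) <= 2))

# Each group of four keeps exactly two weights (the 2:4 pattern):
sparse_row = np.array([[0.5, 0.0, -0.3, 0.0, 0.0, 1.2, 0.0, -0.7]])
dense_row = np.array([[0.5, 0.1, -0.3, 0.2, 0.4, 1.2, 0.6, -0.7]])
print(satisfies_2of4(sparse_row))  # True
print(satisfies_2of4(dense_row))   # False
```

This structured (rather than unstructured) pattern is what lets NVIDIA sparse tensor cores accelerate the pruned matrices.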
## Deployment with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend. vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

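As a sketch, an OpenAI-compatible vLLM server for this model can be queried with nothing but the standard library. This assumes a server was started (e.g. with `vllm serve neuralmagic/Sparse-Llama-3.1-8B-2of4`) and listens on the default port 8000; the prompt and sampling parameters are illustrative:

```python
import json
import urllib.request

# Request payload in the OpenAI completions format that vLLM's server accepts.
payload = {
    "model": "neuralmagic/Sparse-Llama-3.1-8B-2of4",
    "prompt": "Briefly explain 2:4 structured sparsity.",
    "max_tokens": 64,
    "temperature": 0.7,
}
request = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With a server running, send the request and print the generated text:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["text"])
```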
## Evaluation

This model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), using the [vLLM](https://docs.vllm.ai/en/stable/) engine for faster inference, and on the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3). The evaluation results are summarized below.

### Accuracy

#### Open LLM Leaderboard evaluation scores

<table>
<tr>
 <td><strong>Benchmark</strong></td>
 <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
 <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
</tr>
<tr>
 <td>ARC-C (25-shot)</td>
 <td style="text-align: center">58.2</td>
 <td style="text-align: center">59.4</td>
</tr>
<tr>
 <td>MMLU (5-shot)</td>
 <td style="text-align: center">65.4</td>
 <td style="text-align: center">60.6</td>
</tr>
<tr>
 <td>HellaSwag (10-shot)</td>
 <td style="text-align: center">82.3</td>
 <td style="text-align: center">79.8</td>
</tr>
<tr>
 <td>WinoGrande (5-shot)</td>
 <td style="text-align: center">78.3</td>
 <td style="text-align: center">75.9</td>
</tr>
<tr>
 <td>GSM8K (5-shot)</td>
 <td style="text-align: center">50.7</td>
 <td style="text-align: center">56.3</td>
</tr>
<tr>
 <td>TruthfulQA (0-shot)</td>
 <td style="text-align: center">44.2</td>
 <td style="text-align: center">40.9</td>
</tr>
<tr>
 <td><strong>Average Score</strong></td>
 <td style="text-align: center"><strong>63.19</strong></td>
 <td style="text-align: center"><strong>62.16</strong></td>
</tr>
<tr>
 <td><strong>Accuracy Recovery (%)</strong></td>
 <td style="text-align: center"><strong>100</strong></td>
 <td style="text-align: center"><strong>98.37</strong></td>
</tr>
</table>

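The accuracy-recovery figure follows directly from the per-task averages. Recomputing it from the rounded scores in the table (the card's own figures come from unrounded scores, so the last digit can differ by 0.01):

```python
# Per-task scores from the Open LLM Leaderboard table.
dense = [58.2, 65.4, 82.3, 78.3, 50.7, 44.2]
sparse = [59.4, 60.6, 79.8, 75.9, 56.3, 40.9]

dense_avg = sum(dense) / len(dense)
sparse_avg = sum(sparse) / len(sparse)
recovery = 100 * sparse_avg / dense_avg  # sparse average as a % of dense average

print(f"{dense_avg:.2f} {sparse_avg:.2f} {recovery:.2f}")  # 63.18 62.15 98.36
```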
#### Mosaic Eval Gauntlet evaluation scores

<table>
<tr>
 <td><strong>Benchmark</strong></td>
 <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
 <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
</tr>
<tr>
 <td>World Knowledge</td>
 <td style="text-align: center">59.4</td>
 <td style="text-align: center">55.6</td>
</tr>
<tr>
 <td>Commonsense Reasoning</td>
 <td style="text-align: center">49.3</td>
 <td style="text-align: center">50.0</td>
</tr>
<tr>
 <td>Language Understanding</td>
 <td style="text-align: center">69.8</td>
 <td style="text-align: center">69.0</td>
</tr>
<tr>
 <td>Symbolic Problem Solving</td>
 <td style="text-align: center">40.0</td>
 <td style="text-align: center">37.1</td>
</tr>
<tr>
 <td>Reading Comprehension</td>
 <td style="text-align: center">58.2</td>
 <td style="text-align: center">57.5</td>
</tr>
<tr>
 <td><strong>Average Score</strong></td>
 <td style="text-align: center"><strong>55.34</strong></td>
 <td style="text-align: center"><strong>53.85</strong></td>
</tr>
<tr>
 <td><strong>Accuracy Recovery (%)</strong></td>
 <td style="text-align: center"><strong>100</strong></td>
 <td style="text-align: center"><strong>97.3</strong></td>
</tr>
</table>