---
tags:
- vllm
- sparsity
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
---

# Get Started
Sparse-Llama-3.1 models use 2:4 semi-structured sparsity to deliver a 2x reduction in model size and compute.
Explore the [launch blog](https://neuralmagic.com/blog/24-sparse-llama-smaller-models-for-efficient-gpu-inference/) to learn more about Sparse-Llama-3.1 and its potential for efficient, scalable AI deployments.
You can also find all available models in our [Neural Magic HuggingFace collection](https://huggingface.co/collections/neuralmagic/sparse-llama-31-2of4-673f6e96ae74efa213cf1cff).

**Looking to build on top of sparse models?** Whether you aim to reduce deployment costs, improve inference performance, or create highly optimized versions for your enterprise needs, Sparse Llama provides the ideal foundation. These models offer state-of-the-art efficiency with 2:4 structured sparsity, enabling cost-effective scaling without sacrificing accuracy.
[Connect with us](https://neuralmagic.com/book-a-demo/) to explore how we can help integrate sparsity into your AI workflows.


# Sparse-Llama-3.1-8B-2of4

## Model Overview
- **Model Architecture:** Llama-3.1-8B
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Sparsity:** 2:4
- **Release Date:** 11/20/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic

This is the 2:4 sparse version of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
On the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model, demonstrating a **98.37% accuracy recovery**. On the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model, representing a **97.3% accuracy recovery**.


### Model Optimizations

This model was obtained by pruning all linear operators within transformer blocks to the 2:4 sparsity pattern: in each group of four weights, two are retained while two are pruned. In addition to pruning, the sparse model was trained with knowledge distillation for 13B tokens to recover the accuracy loss incurred by pruning. For pruning, we utilize an optimized version of [SparseGPT](https://arxiv.org/abs/2301.00774) through [LLM-Compressor](https://github.com/vllm-project/llm-compressor), and for sparse training with knowledge distillation we utilize the [SquareHead approach](https://arxiv.org/abs/2310.06927).
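
To make the 2:4 pattern concrete, here is a minimal sketch in plain PyTorch that keeps the two largest-magnitude weights in every contiguous group of four. It is illustrative only: the actual SparseGPT procedure also updates the surviving weights to compensate for the pruned ones, which simple magnitude pruning does not.

```python
import torch

def prune_2of4(weight: torch.Tensor) -> torch.Tensor:
    """Zero out the two smallest-magnitude weights in every group of four.

    Illustrative magnitude pruning only; SparseGPT additionally adjusts the
    remaining weights to minimize the error introduced by pruning.
    """
    groups = weight.reshape(-1, 4)                       # contiguous groups of four
    idx = groups.abs().argsort(dim=1)[:, :2]             # two smallest per group
    mask = torch.ones_like(groups).scatter(1, idx, 0.0)  # 0 = pruned, 1 = kept
    return (groups * mask).reshape(weight.shape)

w = torch.randn(4, 8)
print(prune_2of4(w))  # every aligned group of four has exactly two zeros
```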


## Deployment with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend. vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
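
As a starting point, the sketch below runs offline inference through vLLM's Python API. The checkpoint ID is assumed to be this repository's (`neuralmagic/Sparse-Llama-3.1-8B-2of4`), and the prompt and sampling settings are illustrative; recent vLLM releases also expose an OpenAI-compatible server (e.g. via `vllm serve`).

```python
from vllm import LLM, SamplingParams

# Load the sparse checkpoint; vLLM can exploit 2:4 sparsity on supported GPUs.
llm = LLM(model="neuralmagic/Sparse-Llama-3.1-8B-2of4")

# Illustrative sampling settings; tune these for your workload.
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["2:4 structured sparsity speeds up inference because"],
                       sampling_params)
print(outputs[0].outputs[0].text)
```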


## Evaluation

This model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1) with the [vLLM](https://docs.vllm.ai/en/stable/) engine for faster inference. In addition to the OpenLLM benchmark, the model was evaluated on the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3). The evaluation results are summarized below. Accuracy recovery is reported as the ratio of the sparse model's average score to the dense model's average (for example, 62.16 / 63.19 ≈ 98.37% on OpenLLM).
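
The exact harness configuration used for these numbers is not specified here; as a hedged sketch, an evaluation of this kind is commonly run with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on the vLLM backend. The task choice, few-shot count, and batch size below are assumptions matching the GSM8K row of the table that follows.

```python
import lm_eval

# One illustrative task from the OpenLLM v1 suite, run through vLLM.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=neuralmagic/Sparse-Llama-3.1-8B-2of4",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```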


### Accuracy

#### Open LLM Leaderboard evaluation scores

<table>
 <tr>
  <td><strong>Benchmark</strong></td>
  <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
  <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
 </tr>
 <tr>
  <td>ARC-C (25-shot)</td>
  <td style="text-align: center">58.2</td>
  <td style="text-align: center">59.4</td>
 </tr>
 <tr>
  <td>MMLU (5-shot)</td>
  <td style="text-align: center">65.4</td>
  <td style="text-align: center">60.6</td>
 </tr>
 <tr>
  <td>HellaSwag (10-shot)</td>
  <td style="text-align: center">82.3</td>
  <td style="text-align: center">79.8</td>
 </tr>
 <tr>
  <td>WinoGrande (5-shot)</td>
  <td style="text-align: center">78.3</td>
  <td style="text-align: center">75.9</td>
 </tr>
 <tr>
  <td>GSM8K (5-shot)</td>
  <td style="text-align: center">50.7</td>
  <td style="text-align: center">56.3</td>
 </tr>
 <tr>
  <td>TruthfulQA (0-shot)</td>
  <td style="text-align: center">44.2</td>
  <td style="text-align: center">40.9</td>
 </tr>
 <tr>
  <td><strong>Average Score</strong></td>
  <td style="text-align: center"><strong>63.19</strong></td>
  <td style="text-align: center"><strong>62.16</strong></td>
 </tr>
 <tr>
  <td><strong>Accuracy Recovery (%)</strong></td>
  <td style="text-align: center"><strong>100</strong></td>
  <td style="text-align: center"><strong>98.37</strong></td>
 </tr>
</table>


#### Mosaic Eval Gauntlet evaluation scores

<table>
 <tr>
  <td><strong>Benchmark</strong></td>
  <td style="text-align: center"><strong>Llama-3.1-8B</strong></td>
  <td style="text-align: center"><strong>Sparse-Llama-3.1-8B-2of4</strong></td>
 </tr>
 <tr>
  <td>World Knowledge</td>
  <td style="text-align: center">59.4</td>
  <td style="text-align: center">55.6</td>
 </tr>
 <tr>
  <td>Commonsense Reasoning</td>
  <td style="text-align: center">49.3</td>
  <td style="text-align: center">50.0</td>
 </tr>
 <tr>
  <td>Language Understanding</td>
  <td style="text-align: center">69.8</td>
  <td style="text-align: center">69.0</td>
 </tr>
 <tr>
  <td>Symbolic Problem Solving</td>
  <td style="text-align: center">40.0</td>
  <td style="text-align: center">37.1</td>
 </tr>
 <tr>
  <td>Reading Comprehension</td>
  <td style="text-align: center">58.2</td>
  <td style="text-align: center">57.5</td>
 </tr>
 <tr>
  <td><strong>Average Score</strong></td>
  <td style="text-align: center"><strong>55.34</strong></td>
  <td style="text-align: center"><strong>53.85</strong></td>
 </tr>
 <tr>
  <td><strong>Accuracy Recovery (%)</strong></td>
  <td style="text-align: center"><strong>100</strong></td>
  <td style="text-align: center"><strong>97.3</strong></td>
 </tr>
</table>