Raincleared committed
Commit 8907410
1 Parent(s): c5f94a9

Upload README.md with huggingface_hub

Files changed (1): README.md (+145, -0)
README.md CHANGED
---
language:
- en
library_name: transformers
license: llama2
---

# ProSparse-LLaMA-2-13B

- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- Fine-tuned by: [THUNLP](https://nlp.csai.tsinghua.edu.cn/) and [ModelBest](https://modelbest.cn)

### Introduction

The utilization of activation sparsity, namely the existence of considerable weakly-contributed elements among activation outputs, is a promising way to accelerate the inference of large language models (LLMs) ([Liu et al., 2023](https://proceedings.mlr.press/v202/liu23am/liu23am.pdf); [Song et al., 2023](https://arxiv.org/pdf/2312.12456.pdf)). Concretely, acceleration methods based on activation sparsity achieve higher inference speed through wiser resource allocation and computation policies that avoid wasting resources on these weakly-contributed parameters.

Adopting ReLU as the activation function is a straightforward way to achieve activation sparsity. However, most recent mainstream LLMs adopt activation functions without intrinsic sparsity (e.g., GELU and Swish). Some efforts ([Zhang et al., 2022](https://aclanthology.org/2022.findings-acl.71.pdf); [Mirzadeh et al., 2023](https://arxiv.org/pdf/2310.04564.pdf); [Zhang et al., 2024](https://arxiv.org/pdf/2402.03804.pdf)) introduce ReLU or its variants as the substitute activation function to help non-ReLU LLMs achieve activation sparsity and inference acceleration, but few can concurrently obtain both high sparsity and comparable task-specific performance.

In this work, we introduce a lossless sparsification method named "ProSparse" to push for higher activation sparsity without performance degradation. Applying ProSparse to the Swish-activated LLaMA2-7B and LLaMA2-13B yields ReLU-activated LLaMA2 models with high sparsity of 89.32% and 88.80% respectively, while their performance remains comparable to the original versions. Further inference acceleration experiments demonstrate the practical speedup brought by higher sparsity on both [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf) and our two sparse GPU [operators](https://github.com/Raincleared-Song/sparse_gpu_operator).
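
For a quick sense of what this activation sparsity means in practice, here is a minimal sketch that loads the model with `transformers` and measures the fraction of zeroed FFN activations on a single prompt. The repository id and the `model.model.layers[*].mlp.act_fn` module path are assumptions about a LLaMA-style implementation (adjust them to the actual checkpoint and code), and this is only an illustration rather than the evaluation protocol behind the numbers above.

```python
# Rough sparsity probe (illustrative only): hook the FFN activation function of
# each layer and count zero outputs. Repo id, trust_remote_code, and the
# `mlp.act_fn` attribute are assumptions about a LLaMA-style implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SparseLLM/prosparse-llama-2-13b"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

stats = {"zeros": 0, "total": 0}

def count_zeros(module, inputs, output):
    # After (FAT)ReLU, inactive neurons are exactly zero.
    stats["zeros"] += (output == 0).sum().item()
    stats["total"] += output.numel()

hooks = [layer.mlp.act_fn.register_forward_hook(count_zeros) for layer in model.model.layers]

prompt = "Activation sparsity helps LLM inference because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    model(**inputs)

print(f"Average FFN activation sparsity: {stats['zeros'] / stats['total']:.2%}")
for hook in hooks:
    hook.remove()
```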

### Training Dataset

We train the 13B model on about 134.22 billion tokens within 16,000 steps, using a mixture of the following two categories of data.

- Language modeling datasets:
  - StarCoder
  - Wikipedia
  - Pile
  - Other collected datasets
- Instruction tuning datasets:
  - UltraChat
  - P3 (multiple-choice QA)
  - PAQ
  - Unnatural Instructions
  - Flan
  - Super-Natural Instructions
  - Other collected datasets

Intuitively, training the model with even more tokens, or with data of wider coverage and higher quality, would yield better task-specific performance.

### ProSparse: Training Methodology

The training process of ProSparse consists of three steps (refer to Section 3.2 of the [paper](TODO) for more details):

1. **Activation Function Substitution**: We substitute the activation function of FFNs with ReLU and apply continual training.
2. **Progressive Sparsity Regularization**: We jointly optimize the model on the conventional next-token prediction loss and an \\(L_1\\) regularization loss. The regularization is applied to the sparse intermediate outputs of FFNs with a regularization factor that increases progressively in multiple stages. Specifically, the regularization factor \\(\lambda\\) is set to a small constant for the warmup stage and then increases along a smooth sine curve in each of the subsequent incremental stages. Each stage is accompanied by a certain number of training steps. In this way, the model has more time to adapt to the increasing regularization without radical activation shifts, thus alleviating performance degradation.
3. **Activation Threshold Shifting**: We finally replace ReLU with FATReLU ([Kurtz et al., 2020](https://proceedings.mlr.press/v119/kurtz20a/kurtz20a.pdf)), a ReLU variant with a positive threshold (see the sketch after this list). This prunes the non-zero yet weakly-contributed elements in activation outputs and further boosts sparsity.
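
For concreteness, here is a minimal PyTorch sketch of FATReLU; the threshold value is a placeholder for illustration, not the one used in the released checkpoints.

```python
# FATReLU (Kurtz et al., 2020): ReLU with a positive threshold. Activations at
# or below the threshold are zeroed out, which boosts sparsity further.
import torch

def fat_relu(x: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    # Placeholder threshold; the actual per-model value is chosen separately.
    return torch.where(x > threshold, x, torch.zeros_like(x))

x = torch.tensor([-0.5000, 0.0040, 0.0200, 1.3000])
print(fat_relu(x))  # tensor([0.0000, 0.0000, 0.0200, 1.3000])
```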

The 13B model is trained on 32 A100 GPUs. The learning rate (LR) is controlled by a cosine scheduler with a peak LR of \\(5e-5\\). The hyper-parameters for each stage (including the regularization factor \\(\lambda_i\\), the accumulated training steps \\(T_i\\), and the accumulated training tokens) are listed below:

| Stage Number \\(i\\) | \\(\lambda_i\\) | \\(T_i\\) (steps) | Accumulated Tokens (B) |
| :------------------: | :-------------: | :---------------: | :--------------------: |
| 0 | 0 | 5,500 | 46.14 |
| 1 | \\(5e-3\\) | 6,750 | 56.62 |
| 2 | \\(1e-2\\) | 10,750 | 90.18 |
| 3 | \\(1e-2\\) | 11,000 | 92.27 |
| 4 | \\(2e-2\\) | 15,000 | 125.83 |
| 5 | \\(2e-2\\) | 16,000 | 134.22 |
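
To make the schedule concrete, the sketch below interpolates \\(\lambda\\) within each stage along a half-sine ramp from the previous stage's value to that stage's target \\(\lambda_i\\). This is an illustrative reading of the "smooth sine curve" description above, not the exact training code.

```python
# Illustrative progressive regularization schedule: lambda ramps smoothly
# (half-sine) from the previous stage's value to each stage's target lambda_i
# over the stage's training steps. Values are taken from the table above.
import math

stages = [  # (target lambda_i, accumulated steps T_i)
    (0.0, 5_500), (5e-3, 6_750), (1e-2, 10_750),
    (1e-2, 11_000), (2e-2, 15_000), (2e-2, 16_000),
]

def reg_factor(step: int) -> float:
    prev_lambda, prev_step = 0.0, 0
    for target, end_step in stages:
        if step <= end_step:
            progress = (step - prev_step) / (end_step - prev_step)
            return prev_lambda + (target - prev_lambda) * math.sin(0.5 * math.pi * progress)
        prev_lambda, prev_step = target, end_step
    return stages[-1][0]

for step in (1_000, 6_000, 9_000, 12_000, 16_000):
    print(step, round(reg_factor(step), 4))
```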

### Evaluation Benchmarks

- **Code Generation**: We compute the average pass@1 scores on HumanEval (0-shot) and MBPP (3-shot).
- **Commonsense Reasoning**: We report the average 0-shot accuracies on PIQA, SIQA, HellaSwag, WinoGrande, and COPA.
- **Reading Comprehension**: We compute the average 0-shot accuracies on BoolQ, LAMBADA, and TyDi QA.
- **Other Popular Benchmarks**: We report the average accuracies on GSM8K (8-shot), MMLU (5-shot), Big Bench Hard (BBH) (3-shot), and AGI-Eval (0-shot).

### Evaluation Results

The evaluation results on the above benchmarks demonstrate the advantage of ProSparse, which is the only method to achieve both high sparsity and performance comparable to the original Swish-activated LLaMA2. Note that models under all settings are trained with the same number of tokens on the same mixed dataset. Refer to Section 4.2 of the [paper](TODO) for more details.

| Setting | Average<br>Sparsity (%) | Code<br>Generation | Commonsense<br>Reasoning | Reading<br>Comprehension | GSM8K | MMLU | BBH | AGI Eval | Average |
| :-----: | :---------------------: | :----------------: | :----------------------: | :----------------------: | :---: | :---: | :---: | :------: | :-----: |
| Original-7B | - | 16.37 | 69.59 | 61.87 | 12.96 | 44.45 | 32.96 | 27.53 | 37.96 |
| ReluLLaMA-7B | 66.98 | 15.85 | 69.64 | 70.54 | 5.84 | 38.64 | 35.07 | 27.73 | 37.62 |
| Vanilla ReLU-7B | 66.04 | 21.31 | 70.73 | 73.22 | 11.22 | 49.22 | 36.11 | 28.01 | 41.40 |
| Shifted ReLU-7B | 69.59 | 20.50 | 70.09 | 73.17 | 13.87 | 48.54 | 35.20 | 27.94 | 41.33 |
| Fixed \\(L_1\\)-7B | 91.46 | 18.85 | 66.01 | 55.39 | 2.27 | 32.28 | 31.40 | 26.48 | 33.24 |
| **ProSparse-7B**\* | 88.11 | 19.47 | 66.29 | 63.33 | 12.74 | 45.21 | 33.59 | 27.55 | 38.31 |
| **ProSparse-7B** | 89.32 | 19.42 | 66.27 | 63.50 | 12.13 | 45.48 | 34.99 | 27.46 | 38.46 |
| Original-13B | - | 20.19 | 72.58 | 71.55 | 22.21 | 54.69 | 37.89 | 29.33 | 44.06 |
| ReluLLaMA-13B | 71.56 | 20.19 | 70.44 | 73.29 | 18.50 | 50.58 | 37.97 | 28.22 | 42.74 |
| **ProSparse-13B**\* | 87.97 | 29.03 | 69.75 | 67.54 | 25.40 | 54.78 | 40.20 | 28.76 | 45.07 |
| **ProSparse-13B** | 88.80 | 28.42 | 69.76 | 66.91 | 26.31 | 54.35 | 39.90 | 28.67 | 44.90 |

**Notes**: "Original" refers to the original Swish-activated LLaMA2 versions. ReluLLaMA-7B and ReluLLaMA-13B are available at [7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) and [13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B) respectively. "ProSparse-7B\*" and "ProSparse-13B\*" denote the ProSparse versions without activation threshold shifting.

### Inference Acceleration Effects

First, we utilize [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf), a state-of-the-art acceleration framework leveraging activation sparsity. Since its inference speed and accuracy rely heavily on the performance of the activation predictors, we report the activation recall and predicted sparsity (i.e., two key metrics for evaluating an activation predictor) as well as the number of tokens generated per second by PowerInfer (with one A100 GPU and sufficient CPUs).
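
As a rough illustration of these two metrics (not part of PowerInfer itself), the sketch below computes them from boolean masks over FFN neurons: the true mask marks neurons that are actually activated, and the predicted mask marks neurons the predictor chooses to compute. The exact definitions used in our evaluation may differ in minor details.

```python
# Illustrative predictor metrics over one FFN layer. `true_active`: neurons with
# non-zero activations; `pred_active`: neurons the activation predictor keeps.
import torch

def predictor_metrics(true_active: torch.Tensor, pred_active: torch.Tensor):
    # Activation recall: share of truly activated neurons that the predictor keeps.
    recall = (true_active & pred_active).sum() / true_active.sum().clamp(min=1)
    # Predicted sparsity: share of neurons the predictor skips entirely.
    predicted_sparsity = 1.0 - pred_active.float().mean()
    return recall.item(), predicted_sparsity.item()

d_ff = 13824  # FFN width of the 13B model
true_active = torch.rand(d_ff) < 0.11                                   # ~89% true sparsity
pred_active = (true_active & (torch.rand(d_ff) < 0.95)) | (torch.rand(d_ff) < 0.05)
recall, predicted_sparsity = predictor_metrics(true_active, pred_active)
print(f"recall={recall:.2%}, predicted sparsity={predicted_sparsity:.2%}")
```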

Moreover, considering the potential inference inaccuracies caused by wrong predictions of the activation predictors, we implement two sparse GPU [operators](https://github.com/Raincleared-Song/sparse_gpu_operator) for faster yet accurate inference utilizing activation sparsity. They are responsible for speeding up two key steps in a gated FFN:

- Step (2) (`S2`): a fused operator of ReLU and \\(\mathbf{s} \odot (\mathbf{x} \mathbf{W}_1^T)\\);
- Step (3) (`S3`): a sparse matrix-vector multiplication operator for \\(\mathbf{x}_1 \mathbf{W}_2^T\\);

where \\(\mathbf{s}\\), \\(\mathbf{x}\\), \\(\mathbf{x}_1\\), and \\(\odot\\) denote the gating scores, the FFN input hidden states, the intermediate outputs, and the element-wise multiplication, respectively. \\(\mathbf{W}_1\\) and \\(\mathbf{W}_2\\) are FFN weight matrices.
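
For reference, here is a dense PyTorch sketch of the gated FFN above, annotated with the step each sparse operator accelerates. It only illustrates the math (placing ReLU on the gating scores, as in our ReLU-activated architecture); it is not the CUDA implementation in the linked repository.

```python
# Dense reference for the gated FFN, marking where `S2` and `S3` apply.
import torch

def gated_ffn_reference(x, W_gate, W_1, W_2):
    s = x @ W_gate.T                  # step (1): gating scores, computed densely
    x1 = torch.relu(s) * (x @ W_1.T)  # step (2), `S2`: fused ReLU + gating; the sparse
                                      # kernel skips columns of W_1 where relu(s) == 0
    return x1 @ W_2.T                 # step (3), `S3`: sparse mat-vec; weights of W_2
                                      # paired with zero entries of x1 are skipped

d_model, d_ff = 5120, 13824           # LLaMA-2-13B dimensions
x = torch.randn(1, d_model)
W_gate = torch.randn(d_ff, d_model)
W_1 = torch.randn(d_ff, d_model)
W_2 = torch.randn(d_model, d_ff)
print(gated_ffn_reference(x, W_gate, W_1, W_2).shape)  # torch.Size([1, 5120])
```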

The acceleration effects of LLMs with different levels of sparsity are displayed below. ProSparse, which reaches high sparsity without performance degradation, gains the most benefits among all the settings concerned. Refer to Section 4.3 of the [paper](TODO) for more details.

| Setting | Average<br>Sparsity (%) | Activation<br>Recall | Predicted<br>Sparsity | PowerInfer<br>Speed (tokens/s) | `S2`<br>Time (us) | `S2`<br>Speedup | `S3`<br>Time (us) | `S3`<br>Speedup |
| :-----: | :---------------------: | :------------------: | :-------------------: | :----------------------------: | :---------------: | :-------------: | :---------------: | :--------------: |
| ReluLLaMA-7B | 66.98 | 90.89 | 58.95 | 11.37 | 67.12 | 1.35 | 63.00 | 1.32 |
| Vanilla ReLU-7B | 66.04 | 87.72 | 72.57 | 12.04 | 67.85 | 1.33 | 63.28 | 1.31 |
| Fixed \\(L_1\\)-7B | 91.46 | 94.51 | 82.85 | 19.62 | 40.99 | 2.21 | 54.19 | 1.53 |
| **ProSparse-7B**\* | 88.11 | 93.46 | 75.24 | 16.30 | 46.66 | 1.94 | 55.56 | 1.49 |
| **ProSparse-7B** | 89.32 | 92.34 | 78.75 | - | 45.38 | 2.00 | 55.05 | 1.51 |
| ReluLLaMA-13B | 71.56 | 86.41 | 71.93 | 6.59 | 69.92 | 1.88 | 75.47 | 1.51 |
| **ProSparse-13B**\* | 87.97 | 91.02 | 77.93 | 8.67 | 55.29 | 2.38 | 67.50 | 1.68 |
| **ProSparse-13B** | 88.80 | 91.11 | 78.28 | - | 53.78 | 2.44 | 66.73 | 1.70 |

**Notes**: Fixed \\(L_1\\) suffers from severe performance degradation. ProSparse with activation threshold shifting is not supported by PowerInfer, hence the missing speed entries. "Time" means the average wall-clock time (us) cost by each step with our sparse GPU operators, and "Speedup" is the speedup ratio relative to the setting without the operators. The average time for steps (2) and (3) without sparse GPU operators is about **90.55 and 82.92 us for 7B, and 131.36 and 113.68 us for 13B**, respectively, under all levels of sparsity (e.g., the `S3` speedup of ProSparse-13B is 113.68 / 66.73 ≈ 1.70).

### License Disclaimer

This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.

### Limitations & Biases

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citation

Please kindly cite using the following BibTeX:

```bibtex
@article{song2024prosparse,
  title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
  author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
  year={2024},
}
```

#### Acknowledgments

The model card is modified from [ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B).