---
language:
- en
library_name: transformers
license: llama2
---

### Background

Sparse computation is increasingly recognized as an important direction for enhancing the computational efficiency of large language models (LLMs).

Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
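
To make the notion of activation sparsity concrete, the following is a minimal, self-contained sketch (illustrative only, not taken from the training or evaluation code): it reports the fraction of FFN intermediate activations whose magnitude is at or below a threshold. The tensor shapes and the 1e-2 threshold for the SwiGLU case are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def activation_sparsity(acts: torch.Tensor, threshold: float = 0.0) -> float:
    """Fraction of activation entries whose magnitude is <= threshold."""
    return (acts.abs() <= threshold).float().mean().item()

# Toy pre-activations with shape (num_tokens, ffn_dim); dimensions are illustrative.
pre_acts = torch.randn(16, 11008)

# ReLU zeroes out negative pre-activations exactly, so about half the entries are zero.
print(activation_sparsity(torch.relu(pre_acts)))

# SwiGLU outputs are rarely exactly zero, so a small magnitude threshold
# (an illustrative choice) is used instead.
gate, up = torch.randn(16, 11008), torch.randn(16, 11008)
print(activation_sparsity(F.silu(gate) * up, threshold=1e-2))
```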

This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.

To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, for more comprehensive experiments.
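
For reference, the four variants correspond to the feed-forward structures sketched below. This is a minimal PyTorch sketch following common conventions (the GLU variants use an extra gate projection); the hidden sizes, the bias-free projections, and the exact module layout are illustrative assumptions rather than the released model configurations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFN(nn.Module):
    """Non-gated feed-forward block: down(act(up(x))), used for ReLU and Squared ReLU."""
    def __init__(self, d_model: int, d_ff: int, act):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        return self.down(self.act(self.up(x)))

class GatedFFN(nn.Module):
    """Gated feed-forward block: down(act(gate(x)) * up(x)), used for SwiGLU and ReGLU."""
    def __init__(self, d_model: int, d_ff: int, act):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        return self.down(self.act(self.gate(x)) * self.up(x))

# Illustrative dimensions only.
relu_ffn   = FFN(4096, 11008, F.relu)
relu2_ffn  = FFN(4096, 11008, lambda h: F.relu(h) ** 2)  # Squared ReLU
swiglu_ffn = GatedFFN(4096, 11008, F.silu)               # SwiGLU (SiLU/Swish gate)
reglu_ffn  = GatedFFN(4096, 11008, F.relu)               # ReGLU
```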

### Dataset

We pretrain the model on 100 billion tokens, including:

* RefinedWeb
* SlimPajama

### Training Hyper-parameters

| Parameter    | Value            |
|--------------|------------------|
| Batch size   | 4M               |
| GPUs         | 64 x A100 (80GB) |
| LR scheduler | cosine           |
| LR           | 3e-4             |
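
As a rough illustration of how these settings map onto code, the snippet below wires up the peak LR (3e-4) and a cosine schedule in PyTorch; the choice of AdamW and the total step count are assumptions added only to complete the example.

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the actual LLM

# Peak LR (3e-4) and cosine decay come from the table; AdamW and T_max are assumptions.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=25_000)

# In the training loop, call scheduler.step() after each optimizer.step().
```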

### Citation

Please kindly cite using the following BibTeX:

```bibtex
@article{zhang2024relu2,
  title   = {ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
  author  = {Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
  journal = {arXiv preprint arXiv:2402.03804},
  year    = {2024},
}
```