## Intro

Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs). Among various approaches, mixture-of-experts (MoE) methods (exemplified by models such as Mixtral) show particular promise: MoE works by selectively activating different model components (experts), thereby optimizing resource usage.

Recent studies (Zhang et al., 2021; Liu et al., 2023; Mirzadeh et al., 2023) have shown that LLMs inherently exhibit properties that favor sparse computation when using the ReLU activation function. This insight opens new avenues for model efficiency, analogous to the selective activation of MoE: by dynamically selecting only the model parameters needed for a given computation, we can significantly improve efficiency, as the sketch below illustrates.
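
To make that concrete, here is a minimal sketch (with made-up tensor shapes, independent of our actual model) of why ReLU sparsity saves compute: columns of a down-projection matrix that multiply exactly-zero activations can simply be skipped.

```python
import torch

inter = torch.relu(torch.randn(4096))        # sparse intermediate activations
W_down = torch.randn(1024, 4096)             # hypothetical down-projection weight

active = inter.nonzero(as_tuple=True)[0]     # indices of nonzero neurons
dense = W_down @ inter                       # full matmul
sparse = W_down[:, active] @ inter[active]   # only the active columns
print(torch.allclose(dense, sparse, atol=1e-4), f"{len(active) / 4096:.0%} active")
```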

However, ReLU-based models are still not widely adopted in the LLM field. To inspire more research on inference efficiency, we introduce XXX, our Mistral-level ReLU-based LLM.

## Model Architecture

ReGLU-based LLMs exhibit limited sparsity; ReLULLaMA, for example, reaches only about 70% sparsity. To push sparsity further, we add a ReLU component after the GLU, so our FFN network works as follows:

```python
import torch.nn as nn
from transformers.activations import ACT2FN


class XXXMLP(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.intermediate_size = config.intermediate_size
        self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
        self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
        self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
        self.act_fn = ACT2FN[config.hidden_act]

    def forward(self, x):
        # The activation is applied to both the gate and the up projection,
        # so the element-wise product is zero wherever either side is zero.
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.act_fn(self.up_proj(x)))
```
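
As a quick sanity check, a sketch like the following (using a hypothetical toy config; the field names mirror those read by the class above) instantiates the MLP with `hidden_act="relu"` and measures how sparse the intermediate product is:

```python
import torch
from types import SimpleNamespace

# Toy config for illustration only; a real checkpoint supplies these values.
config = SimpleNamespace(hidden_size=64, intermediate_size=256, hidden_act="relu")
mlp = XXXMLP(config)

x = torch.randn(2, 8, config.hidden_size)
with torch.no_grad():
    inter = mlp.act_fn(mlp.gate_proj(x)) * mlp.act_fn(mlp.up_proj(x))
sparsity = (inter == 0).float().mean().item()
print(f"intermediate sparsity: {sparsity:.2%}")  # roughly 75% for random input
```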

## Training Details

In this subsection, we introduce the details of training our model, including the data used and the hyper-parameters.

We initialized the model with Mistral's weights, modified the FFN to the ReGLU+ReLU structure described above, and then continued pre-training on 200B tokens, divided into two phases:

First phase: for the training corpus, we followed the data mix ratio and sources of the StableLM-3B model, conducting further pre-training on 150B tokens. (link)

The following table shows the hyper-parameters we used in this phase:

| Hyper-parameter       | Value       |
|-----------------------|-------------|
| GPUs                  | 64 80G-A100 |
| Learning Rate Control | Cosine      |
| Peak Learning Rate    | 5e-5        |
| Batch Size            | 4M tokens   |
| Weight Decay          | 0.1         |

Second phase: we further adjusted the training corpus ratio, incorporating more domain-specific datasets (math, coding), and continued training for another 50B tokens.

| Hyper-parameter       | Value       |
|-----------------------|-------------|
| GPUs                  | 64 80G-A100 |
| Learning Rate Control | Cosine      |
| Peak Learning Rate    | 5e-6        |
| Batch Size            | 4M tokens   |
| Weight Decay          | 0.01        |
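
Both phases use cosine learning-rate control; as a rough sketch (the warmup and step counts below are placeholders, not our actual settings), such a schedule can be built with the standard transformers utility:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the real model
# Peak LR and weight decay here match the phase-one table above.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)

# Hypothetical step counts for illustration.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=10_000
)
for _ in range(10_000):
    optimizer.step()
    scheduler.step()  # decays the LR along a cosine curve after warmup
```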

## Performance Evaluation Results

Our evaluation is based on the lm-evaluation-harness and OpenCompass frameworks; a sketch of a harness invocation follows the table below. The evaluation details are as follows:

- Huggingface LLM Leaderboard tasks.
- Commonsense: we report the average of PIQA, SIQA, ARC-Easy, ARC-Challenge, and CommonsenseQA.
- Other popular benchmarks: we report the average accuracies on Big Bench Hard (BBH) (3-shot), HumanEval, MBPP, and MATH.

| Model   | MMLU   | Winogrande | TruthfulQA | Hellaswag | GSM8K  | Arc-C  |
|---------|--------|------------|------------|-----------|--------|--------|
| Ours    | 0.6389 | 0.7593     | 0.4406     | 0.8217    | 0.5315 | 0.6195 |
| Mistral | 0.6265 | 0.7924     | 0.4262     | 0.8332    | 0.4018 | 0.6143 |
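
As a rough illustration (the model path is a placeholder, and task names and flags may differ across harness versions), an lm-evaluation-harness run for these tasks could look like:

```python
# Assumes lm-evaluation-harness >= 0.4, which exposes a Python API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/our-model,dtype=bfloat16",  # placeholder path
    tasks=["mmlu", "winogrande", "truthfulqa", "hellaswag", "gsm8k", "arc_challenge"],
    batch_size=8,
)
print(results["results"])
```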

## Speed Evaluation Results

We utilize PowerInfer, a state-of-the-art acceleration framework that leverages activation sparsity. Here we show the inference speed compared with llama.cpp/transformers.
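
For reference, a minimal sketch of how the transformers baseline throughput can be measured (the model path and generation length are placeholders; PowerInfer and llama.cpp have their own CLIs, which we do not reproduce here):

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "path/to/our-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto",
    trust_remote_code=True,  # may be required for custom architectures
)

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
elapsed = time.perf_counter() - start
new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```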