Feature Extraction
Transformers
Safetensors
English
bamboo
custom_code
hodlen committed
Commit d19c610 (1 parent: bda84dd)

Fix hyperlinks in README.md

Files changed (1): README.md (+3, -3)
README.md CHANGED
@@ -1,14 +1,14 @@
 ## Introduction
 
-Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs). Among various approaches, mixture-of-experts (MoE) methods (exemplified by models such as [Mixtral]([mistralai/Mixtral-8x7B-v0.1 · Hugging Face](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1))) show particular promise. MoE works by selectively activating different model components (experts), thereby optimizing resource usage.
+Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs). Among various approaches, mixture-of-experts (MoE) methods (exemplified by models such as [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)) show particular promise. MoE works by selectively activating different model components (experts), thereby optimizing resource usage.
 
 Recent studies ([Zhang et al., 2021](https://arxiv.org/abs/2110.01786); [Liu et al., 2023](https://openreview.net/pdf?id=wIPIhHd00i); [Mirzadeh et al., 2023](https://arxiv.org/abs/2310.04564)) reveal that LLMs inherently exhibit properties conducive to sparse computation when employing the ReLU activation function. This insight opens up new avenues for model efficiency, akin to MoE's selective activation: by dynamically choosing which model parameters participate in a computation, we can substantially boost efficiency.
 
-However, the widespread adoption of ReLU-based models in the LLM field remains limited. Here we introduce a new 7B ReLU-based LLM, Bamboo, which boasts nearly 85% sparsity and performance on par with [Mistral]([mistralai/Mistral-7B-v0.1 · Hugging Face](https://huggingface.co/mistralai/Mistral-7B-v0.1)).
+However, the widespread adoption of ReLU-based models in the LLM field remains limited. Here we introduce a new 7B ReLU-based LLM, Bamboo, which boasts nearly 85% sparsity and performance on par with [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1).
 
 ## Model Architecture
 
-ReGLU-based LLMs have limited sparsity; for example, [ReLULLaMA]([SparseLLM/ReluLLaMA-7B · Hugging Face](https://huggingface.co/SparseLLM/ReluLLaMA-7B)) reaches only about 67% sparsity. To push the model's sparsity further, we add a ReLU component after the GLU, so our FFN works as follows:
+ReGLU-based LLMs have limited sparsity; for example, [ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) reaches only about 67% sparsity. To push the model's sparsity further, we add a ReLU component after the GLU, so our FFN works as follows:
 
 ```Python
 class BambooMLP(nn.Module):
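
The hunk above cuts off the `BambooMLP` snippet after its first line. For reference only, here is a minimal, hypothetical sketch of the FFN the README describes, i.e. a ReGLU-style gated MLP with an extra ReLU applied to the GLU product. The constructor signature, projection names (`gate_proj`, `up_proj`, `down_proj`), and example dimensions follow the common Llama/Mistral convention and are assumptions, not the repository's actual implementation.

```Python
# Hypothetical sketch of the described FFN; not the repository's exact code.
import torch
import torch.nn as nn


class BambooMLP(nn.Module):
    """ReGLU FFN with an extra ReLU applied after the GLU product (assumed layout)."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.act_fn = nn.ReLU()  # ReGLU gate activation, assumed per the README

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Plain ReGLU would be: down_proj(relu(gate_proj(x)) * up_proj(x)).
        # The extra relu on the product below is the "ReLU after GLU" step the
        # README describes for pushing activation sparsity beyond plain ReGLU.
        hidden = torch.relu(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
        return self.down_proj(hidden)


# Example usage with illustrative 7B-scale dimensions (assumed, not from the repo)
if __name__ == "__main__":
    mlp = BambooMLP(hidden_size=4096, intermediate_size=11008)
    out = mlp(torch.randn(2, 16, 4096))
    print(out.shape)  # torch.Size([2, 16, 4096])
```

The extra `torch.relu` on the gating product zeroes out any remaining negative activations, which is the mechanism the README credits for raising sparsity from roughly 67% (plain ReGLU) toward the reported ~85%.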