Sheared-LLaMA-1.3B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). You can load it with the `transformers` library:

```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
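
Below is a minimal generation sketch, assuming the matching tokenizer is hosted under the same repo id; the prompt and generation settings are illustrative, not recommended defaults:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")

# Tokenize a prompt and generate a short continuation.
inputs = tokenizer("Structured pruning is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```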

**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)

---

### Downstream Tasks

We evaluate on an extensive set of downstream tasks, including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing open-source models of comparable size, despite being pre-trained on far fewer tokens.

| Model | # Pre-training Tokens | Average Performance |
| --- | --- | --- |
| LLaMA2-7B | 2T | 64.6 |

**1.3B**

| Model | # Pre-training Tokens | Average Performance |
| --- | --- | --- |
| OPT-1.3B | 300B | 48.2 |
| Pythia-1.4B | 300B | 48.9 |
| Sheared-LLaMA-1.3B | 50B | 51.0 |

**3B**

| Model | # Pre-training Tokens | Average Performance |
| --- | --- | --- |
| OPT-2.7B | 300B | 51.4 |
| Pythia-2.8B | 300B | 52.5 |
| INCITE-Base-3B | 800B | 54.7 |
| Open-LLaMA-3B-v1 | 1T | 55.1 |
| Open-LLaMA-3B-v2 | 1T | 55.7 |
| Sheared-LLaMA-2.7B | 50B | 56.7 |

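The "Average Performance" column condenses the per-task scores into one number. As a hypothetical illustration only (the exact task set and any weighting are defined in the paper; the task names and scores below are placeholders, not reported results), an unweighted macro-average can be computed like this:

```
# Placeholder per-task scores for one model -- NOT the paper's numbers.
task_scores = {
    "reasoning": 52.3,
    "reading_comprehension": 49.8,
    "language_modeling": 50.1,
    "knowledge_intensive": 50.9,
}

# Unweighted macro-average over tasks, reported to one decimal
# place as in the tables above.
average = sum(task_scores.values()) / len(task_scores)
print(f"Average Performance: {average:.1f}")
```
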
### Bibtex

```
@article{xia2023sheared,
  title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06694},
  year={2023}
}
```