princeton-nlp committed on
Commit 5617df2
1 Parent(s): bd50356

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED

---
license: apache-2.0
---

Paper: https://arxiv.org/pdf/2310.06694.pdf
Code: https://github.com/princeton-nlp/LLM-Shearing

License: Must comply with the license of Pythia, since this model is derived from Pythia.

Sheared-Pythia-160m is a model pruned and further pre-trained from [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m). We dynamically load data from different domains of the Pile dataset to prune the model and to continue pre-training it. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model.
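
As a rough picture of what "dynamically load data" means here, the toy sketch below keeps per-domain sampling weights and upweights domains whose training loss lags behind a reference. The domain subset, the exponential update rule, and the learning rate are illustrative assumptions, not the paper's exact dynamic batch loading algorithm; see the paper and the LLM-Shearing repo for the real procedure.

```
import math
import random

# Toy sketch (assumed, simplified): per-domain sampling weights over a few
# Pile domains, starting uniform.
domains = ["Pile-CC", "Wikipedia", "GitHub", "ArXiv"]
weights = {d: 1.0 / len(domains) for d in domains}

def update_weights(current_loss, reference_loss, lr=1.0):
    # Exponentiated-gradient-style update on the loss gap: domains whose
    # current loss exceeds the reference loss get upweighted, then renormalize.
    scores = {d: weights[d] * math.exp(lr * (current_loss[d] - reference_loss[d]))
              for d in domains}
    total = sum(scores.values())
    for d in domains:
        weights[d] = scores[d] / total

def sample_domain():
    # Draw the domain for the next training batch from the current weights.
    return random.choices(domains, weights=[weights[d] for d in domains])[0]
```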

This model can be loaded with the HuggingFace Transformers library via

model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
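
A minimal end-to-end sketch is below; the prompt text and generation settings are illustrative assumptions, not recommendations from the original card.

```
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Load the pruned model and its tokenizer (inherited from the Pythia family).
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")

# Illustrative prompt with greedy decoding; these settings are assumptions.
inputs = tokenizer("The Pile is a large, diverse dataset", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```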

The model's overall performance is better than that of EleutherAI/pythia-160m.

## Bibtex

```
@article{xia2023sheared,
  title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06694},
  year={2023}
}
```