princeton-nlp committed
Commit b4f554d · Parent(s): 1634702
Update README.md
README.md
CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 license: apache-2.0
 ---
 
-Sheared-LLaMA-2.7B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data). We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with Hugging Face Transformers via
+Sheared-LLaMA-2.7B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains in the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data). We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with Hugging Face Transformers via
 
 ```
 model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
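
For reference, a minimal runnable version of the README's loading snippet is sketched below. The diff only shows the `from_pretrained` call; the import, the tokenizer, and the generation step are assumptions added here to make the example self-contained.

```python
# Minimal sketch: load Sheared-LLaMA-2.7B with Hugging Face Transformers.
# Only the from_pretrained line appears in the diff; the rest is assumed
# standard Transformers usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")

# Quick smoke test: generate a short continuation.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```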