princeton-nlp Felladrin committed
Commit ba9a6a9
Parent: 2ba06d5

Fix the model identifier (#2)


- Fix the model identifier (7456015db2046f054595917a7e6801e6ce214639)


Co-authored-by: Victor Nogueira <Felladrin@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ License: Must comply with license of Pythia since it's a model derived from Pyth
 Sheared-Pythia-160m is a model pruned and further pre-trained from [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m). We dynamically load data from different domains in the Pile dataset to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with HuggingFace via
 
 ```
-model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-140m")
+model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
 ```
 
 The model's overall performance is better than EleutherAI/pythia-160m.
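
With this fix, the README's snippet points at the correct identifier. For reference, a minimal runnable sketch of that load call, assuming the `transformers` library is installed (the prompt and generation settings below are illustrative, not from the README):

```
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Load the model and its tokenizer under the identifier corrected by this commit.
model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-Pythia-160m")

# Illustrative smoke test: generate a short continuation.
inputs = tokenizer("The Pile is a diverse dataset that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```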