
Paper: https://arxiv.org/pdf/2310.06694.pdf
Code: https://github.com/princeton-nlp/LLM-Shearing
Models: Sheared-LLaMA-1.3B, Sheared-LLaMA-2.7B
Pruned Models without Continued Pre-training: Sheared-LLaMA-1.3B-Pruned, Sheared-LLaMA-2.7B-Pruned
Instruction-tuned Models: Sheared-LLaMA-1.3B-ShareGPT, Sheared-LLaMA-2.7B-ShareGPT

License: Must comply with the Llama 2 license, since this model is derived from Llama 2.

Sheared-LLaMA-1.3B-Pruned is the model pruned from meta-llama/Llama-2-7b-hf without continued pre-training. We used roughly 0.4B tokens to perform the pruning experiment. This model could be a good starting point for studying

  • effective data mixtures for continued pre-training
  • comparisons to other pruning techniques
  • how pruning affects the knowledge and reasoning capabilities of LLMs (through extensive evaluations)
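
The checkpoint should load with the standard Hugging Face transformers API; the snippet below is an illustrative sketch, not an official example. Since this checkpoint has not undergone continued pre-training, expect generation quality to be degraded relative to the final Sheared-LLaMA-1.3B.

```python
# Minimal loading sketch (assumes a standard Llama-style causal LM checkpoint;
# adjust dtype/device placement for your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "princeton-nlp/Sheared-LLaMA-1.3B-Pruned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Run a short generation to sanity-check the pruned model.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```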