---
license: llama2
language:
  - en
---

# ProSparse-LLaMA-2-7B-GGUF

This model is the downstream distribution of SparseLLM/ProSparse-LLaMA-2-7B in PowerInfer GGUF format, consisting of both the LLM weights and the activation predictor weights.

Note: `prosparse-llama-2-7b-clip15.gguf` is a GGUF variant of the same model with different activation predictors, which are trained on data that retains only the top 15% of activation values. Compared with `prosparse-llama-2-7b.gguf`, this variant achieves higher predicted sparsity and faster inference, but suffers from relatively lower activation recall.
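To illustrate what "reserving top 15% activation values" means, below is a minimal NumPy sketch of per-row top-k thresholding on activation magnitudes. This is an illustrative assumption about the clipping scheme, not the actual ProSparse training code; the function name and the per-row selection are hypothetical.

```python
import numpy as np

def top_fraction_mask(activations: np.ndarray, keep_ratio: float = 0.15) -> np.ndarray:
    """Return a boolean mask marking the top `keep_ratio` fraction of
    activation values (by magnitude) in each row; the remaining values
    would be treated as inactive when training a sparsity predictor."""
    k = max(1, int(activations.shape[-1] * keep_ratio))
    # k-th largest magnitude in each row serves as the clipping threshold
    thresholds = np.partition(np.abs(activations), -k, axis=-1)[..., -k]
    return np.abs(activations) >= thresholds[..., None]

# Example: 4 rows of 100 activations; each row keeps its top 15 values.
acts = np.random.randn(4, 100).astype(np.float32)
mask = top_fraction_mask(acts, keep_ratio=0.15)
```

A predictor trained against such a mask sees fewer positives than one trained on all nonzero activations, which is consistent with the higher predicted sparsity and lower recall described above.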

## Citation

Please cite this work using the following BibTeX:

```bibtex
@article{song2024prosparse,
  title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
  author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
  year={2024},
  journal={arXiv preprint arXiv:2402.13516},
  url={https://arxiv.org/pdf/2402.13516.pdf}
}
```