---
license: llama2
language:
- en
---

# ProSparse-LLaMA-2-13B-GGUF

- Original model: [SparseLLM/ProSparse-LLaMA-2-13B](https://huggingface.co/SparseLLM/prosparse-llama-2-13b)
- Converted & distributed by: [THUNLP](https://nlp.csai.tsinghua.edu.cn/), [ModelBest](https://modelbest.cn), and [PowerInfer](https://huggingface.co/PowerInfer)

This model is the downstream distribution of [SparseLLM/ProSparse-LLaMA-2-13B](https://huggingface.co/SparseLLM/prosparse-llama-2-13b) in PowerInfer GGUF format, consisting of the LLM weights and the predictor weights.

### Citation

Please kindly cite using the following BibTeX:

```bibtex
@article{song2024prosparse,
  title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
  author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
  year={2024},
  journal={arXiv preprint arXiv:2402.13516},
  url={https://arxiv.org/pdf/2402.13516.pdf}
}
```
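
As a minimal sketch of how one might fetch both the GGUF model weights and the predictor weights from this repository, the snippet below uses `huggingface_hub.snapshot_download`. The repo ID `PowerInfer/ProSparse-LLaMA-2-13B-GGUF` is an assumption inferred from the model name above, not confirmed by this card; adjust it (and the local directory) as needed.

```python
# Hypothetical download sketch; the repo ID is assumed from the model name above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="PowerInfer/ProSparse-LLaMA-2-13B-GGUF",  # assumed repository ID
    local_dir="prosparse-llama-2-13b-gguf",           # destination for GGUF and predictor files
)
print(f"Files downloaded to: {local_dir}")
```

The downloaded directory can then be passed to a PowerInfer build in the usual way; see the [PowerInfer](https://huggingface.co/PowerInfer) page for the authoritative inference instructions.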