page_content='BLOOM 0.455 0.8 0.448 0.79 0.617 0.704 0.677\nDejavu-BLOOM 0.448 0.8 0.44 0.787 0.606 0.710 0.675\n[Figure 8. Union contextual sparsity with larger batch size; x-axis: transformer layer (0-96), y-axis: union contextual sparsity, curves for batch sizes 2, 4, 8, 16, 32.]\nbecause the predictor has validation accuracy over 99% in the\nshallow layers and drops to around 93% in the ending layers.\nContextual sparsity on attention blocks: In this section,\nwe study the sparse predictor for the Attention block on OPT-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='175B and leave the MLP block as dense computation. Table 4\ndisplays the test accuracy on zero-shot tasks and perplexity on\nthe language modeling datasets. In summary, the Attention\nsparse predictor introduces no accuracy loss at around 50%\nsparsity. During the training of the Attention sparse predictor,\nwe observe different trends compared to the MLP sparse\npredictor. The validation accuracy is around 93% in the\nmiddle layers and near 99% in the shallow and deep layers.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='Contextual Sparsity on Smaller Models: Our main exper-\niments focus on OPT-175B. Here, we verify DEJAVU's effec-\ntiveness on a smaller model, specifically OPT-66B. In Table 5,\nwe summarize the accuracy on zero-shot tasks at 50% sparsity.\nSimilar to DEJAVU-OPT-175B, we notice no accuracy loss.\nContextual Sparsity on Other Models: We expand\nthe evaluation to another model family. In Table 6, we\nsummarize the accuracy at attention sparsity 50% and MLP' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='sparsity 30%. Similar to the OPT family, we notice no accuracy\nloss. The lower sparsity level in the MLP is due to the difference\nin activation function.\nNon-Contextual Sparsity: As we mentioned in Section 1,\none could predict sparsity without contextual information.\nFor non-contextual sparsity, we rely on the original\nTable 7. DEJAVU-OPT-175B with 4-bit quantization.\nCB COPA OpenBookQA PIQA RTE Winogrande Lambada\nOPT-175B 0.352 0.86 0.446 0.809 0.602 0.726 0.758' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='Dejavu-OPT-175B 0.402 0.85 0.450 0.802 0.592 0.726 0.753\nOPT-175B + W4A16 0.356 0.85 0.44 0.806 0.574 0.714 0.757\nDejavu-OPT-175B + W4A16 0.365 0.86 0.452 0.805 0.592 0.726 0.754\nembedding at the input layer. At every block, we first pass\nthe original embedding to record a subset of parameters\nyielding a large norm. In the second pass, the embedding\nat every layer only uses the recorded subset. As shown in\nFigure 1, non-contextual prediction is not sufficient and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='leads to accuracy losses even at 50% sparsity. This result\nverifies our design choices of relying on the activation at\nevery layer as input to make contextual sparsity predictions.\nCompatibility with Quantization: Quantization is another\npromising direction for efficient language models. We inves-\ntigate the possibility of combining contextual sparsity with\nquantization techniques. For DEJAVU -OPT-175B, we set\nthe entire model sparsity at 75%. For quantization, we apply' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='4-bit quantization on model weights (W4A16). As shown\nin Table 7, the combination of quantization and DEJAVU\nalmost always achieves better accuracy than DEJAVU or\nquantization alone. This suggests that the approximation\nerrors from these two directions do not get compounded.\n6 Conclusion\nOur main goal is to make LLM inference efficient so that\ntheir powerful in-context learning abilities can be used\nin more application domains. We observe that contextual' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='sparsity can be accurately predicted with lightweight\nlearning-based algorithms. This motivated us to design\nDEJAVU that uses asynchronous lookahead predictors and\nhardware-efficient sparsity to speed up LLM inference in\nwall-clock time. Our encouraging empirical results validate\nthat contextual sparsity can reduce inference latency by\nover 2×compared to the state-of-the-art FasterTransformer\nwithout model quality drops. Our method is a step towards' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='making LLMs more accessible to the general community,\nwhich could unlock exciting new AI applications.\nAcknowledgements\nWe would like to thank Ryan Spring, Laurel Orr, Guangxuan\nXiao, Eric Han, Xun Huang, Daniel Y . Fu, Benjamin Spector,\nRuan Silva, Diana Liskovich, and the anonymous reviewers\nfor helpful discussions and feedback. We acknowledge the\ngenerous support by Together Computer, which enabled the\nnecessary partial computations in this work.\n9' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nReferences\nWinogrande: An adversarial winograd schema challenge\nat scale. 2019.\nAllen-Zhu, Z. and Li, Y . What can resnet learn efficiently,\ngoing beyond kernels? Advances in Neural Information\nProcessing Systems , 32, 2019.\nAlman, J. and Song, Z. Fast attention requires bounded\nentries. arXiv preprint arXiv:2302.13214 , 2023.\nAlman, J., Liang, J., Song, Z., Zhang, R., and Zhuo, D. By-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='pass exponential time preprocessing: Fast neural network\ntraining via weight-data correlation preprocessing. arXiv\npreprint arXiv:2211.14227 , 2022.\nAlon, N., Matias, Y ., and Szegedy, M. The space complexity\nof approximating the frequency moments. In Proceedings\nof the twenty-eighth annual ACM symposium on Theory\nof computing , pp. 20–29, 1996.\nAminabadi, R. Y ., Rajbhandari, S., Awan, A. A., Li, C.,\nLi, D., Zheng, E., Ruwase, O., Smith, S., Zhang, M.,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='Rasley, J., et al. Deepspeed-inference: Enabling efficient\ninference of transformer models at unprecedented scale.\nIn2022 SC22: International Conference for High Per-\nformance Computing, Networking, Storage and Analysis\n(SC), pp. 646–660. IEEE Computer Society, 2022.\nAndoni, A. and Razenshteyn, I. Optimal data-dependent\nhashing for approximate near neighbors. In Proceedings\nof the forty-seventh annual ACM symposium on Theory\nof computing (STOC) , pp. 793–801, 2015.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='Andoni, A., Indyk, P., Nguyen, H. L., and Razenshteyn, I.\nBeyond locality-sensitive hashing. In Proceedings of the\ntwenty-fifth annual ACM-SIAM symposium on Discrete\nalgorithms , pp. 1018–1028. SIAM, 2014.\nAndoni, A., Indyk, P., Laarhoven, T., Razenshteyn, I., and\nSchmidt, L. Practical and optimal lsh for angular distance.\nInAdvances in Neural Information Processing Systems\n(NIPS) , pp. 1225–1233. Curran Associates, 2015.\nAndoni, A., Laarhoven, T., Razenshteyn, I., and Waingarten,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='E. Optimal hashing-based time-space trade-offs for\napproximate near neighbors. In Proceedings of the\nTwenty-Eighth Annual ACM-SIAM Symposium on\nDiscrete Algorithms (SODA) , pp. 47–66. SIAM, 2017.\nAndoni, A., Indyk, P., and Razenshteyn, I. Approximate\nnearest neighbor search in high dimensions. arXiv\npreprint arXiv:1806.09823 , 7, 2018.\nArya, S. and Mount, D. M. Approximate nearest neighbor\nqueries in fixed dimensions. In SODA , volume 93, pp.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='271–280. Citeseer, 1993.Balduzzi, D., Frean, M., Leary, L., Lewis, J., Ma, K. W.-D.,\nand McWilliams, B. The shattered gradients problem:\nIf resnets are the answer, then what is the question? In\nInternational Conference on Machine Learning , pp.\n342–350. PMLR, 2017.\nBansal, H., Gopalakrishnan, K., Dingliwal, S., Bodapati, S.,\nKirchhoff, K., and Roth, D. Rethinking the role of scale for\nin-context learning: An interpretability-based case study' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='at 66 billion scale. arXiv preprint arXiv:2212.09095 , 2022.\nBaum, L. E. and Petrie, T. Statistical inference for proba-\nbilistic functions of finite state markov chains. The annals\nof mathematical statistics , 37(6):1554–1563, 1966.\nBello, I., Fedus, W., Du, X., Cubuk, E. D., Srinivas, A., Lin,\nT.-Y ., Shlens, J., and Zoph, B. Revisiting resnets: Im-\nproved training and scaling strategies. Advances in Neural\nInformation Processing Systems , 34:22614–22627, 2021.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='Bengio, Y ., Ducharme, R., Vincent, P., and Jauvin, C. A\nneural probabilistic language model. Journal of machine\nlearning research (JMLR) , 3(Feb):1137–1155, 2003.\nBisk, Y ., Zellers, R., Bras, R. L., Gao, J., and Choi, Y .\nPiqa: Reasoning about physical commonsense in natural\nlanguage. In Thirty-Fourth AAAI Conference on Artificial\nIntelligence , 2020.\nBlack, S., Biderman, S., Hallahan, E., Anthony, Q., Gao,\nL., Golding, L., He, H., Leahy, C., McDonell, K., Phang,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L.,\nTow, J., Wang, B., and Weinbach, S. GPT-NeoX-20B:\nAn open-source autoregressive language model. In\nProceedings of the ACL Workshop on Challenges &\nPerspectives in Creating Large Language Models , 2022.\nURLhttps://arxiv.org/abs/2204.06745 .\nBommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora,\nS., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A.,\nBrunskill, E., et al. On the opportunities and risks of foun-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='dation models. arXiv preprint arXiv:2108.07258 , 2021.\nBoutsidis, C., Woodruff, D. P., and Zhong, P. Optimal\nprincipal component analysis in distributed and streaming\nmodels. In STOC’16—Proceedings of the 48th Annual\nACM SIGACT Symposium on Theory of Computing , 2016.\nBoytsov, L., Novak, D., Malkov, Y ., and Nyberg, E. Off the\nbeaten path: Let’s replace term-based retrieval with k-nn\nsearch. In Proceedings of the 25th ACM international on\nconference on information and knowledge management' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='(CIKM) , pp. 1099–1108, 2016.\nBrand, J. v. d., Peng, B., Song, Z., and Weinstein, O. Training\n(overparametrized) neural networks in near-linear time.\nInITCS , 2021.\n10' metadata={'source': 'pdfs/paper_3.pdf', 'page': 9} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nBrand, J. v. d., Song, Z., and Zhou, T. Algorithm and\nhardness for dynamic attention maintenance in large\nlanguage models. arXiv preprint arXiv:2304.02207 , 2023.\nBrown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,\nDhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,\nAskell, A., et al. Language models are few-shot learners.\nAdvances in neural information processing systems , 33:\n1877–1901, 2020.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Chan, S. C., Santoro, A., Lampinen, A. K., Wang, J. X.,\nSingh, A. K., Richemond, P. H., McClelland, J., and\nHill, F. Data distributional properties drive emergent\nin-context learning in transformers. In Advances in Neural\nInformation Processing Systems , 2022.\nChang, W.-C., Yu, F. X., Chang, Y .-W., Yang, Y ., and Kumar,\nS. Pre-training tasks for embedding-based large-scale\nretrieval. arXiv preprint arXiv:2002.03932 , 2020.\nCharikar, M., Chen, K., and Farach-Colton, M. Finding' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='frequent items in data streams. In International Collo-\nquium on Automata, Languages, and Programming , pp.\n693–703. Springer, 2002.\nChen, B., Xu, Y ., and Shrivastava, A. Fast and accurate\nstochastic gradient estimation. Advances in Neural\nInformation Processing Systems , 32, 2019.\nChen, B., Medini, T., Farwell, J., Tai, C., Shrivastava, A., et al.\nSlide: In defense of smart algorithms over hardware accel-\neration for large-scale deep learning systems. Proceedings' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='of Machine Learning and Systems , 2:291–306, 2020a.\nChen, B., Dao, T., Winsor, E., Song, Z., Rudra, A., and Ré,\nC. Scatterbrain: Unifying sparse and low-rank attention.\nAdvances in Neural Information Processing Systems , 34:\n17413–17426, 2021a.\nChen, B., Liu, Z., Peng, B., Xu, Z., Li, J. L., Dao, T., Song,\nZ., Shrivastava, A., and Re, C. Mongoose: A learnable lsh\nframework for efficient neural network training. In Inter-\nnational Conference on Learning Representations , 2021b.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Chen, H., Chillotti, I., Dong, Y ., Poburinnaya, O., Razen-\nshteyn, I., and Riazi, M. S. {SANNS }: Scaling up\nsecure approximate k-nearest neighbors search. In 29th\n{USENIX }Security Symposium ( {USENIX }Security 20) ,\npp. 2111–2128, 2020b.\nChen, L. On the hardness of approximate and exact (bichro-\nmatic) maximum inner product. In 33rd Computational\nComplexity Conference (CCC) , 2018.\nCho, J. H. and Hariharan, B. On the efficacy of knowledge' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='distillation. In Proceedings of the IEEE/CVF international\nconference on computer vision , pp. 4794–4802, 2019.Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra,\nG., Roberts, A., Barham, P., Chung, H. W., Sutton, C.,\nGehrmann, S., et al. PaLM: Scaling language modeling\nwith pathways. arXiv preprint arXiv:2204.02311 , 2022.\nClarkson, K. L. and Woodruff, D. P. Low-rank approximation\nand regression in input sparsity time. In STOC , 2013.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Cohen, M. B. Nearly tight oblivious subspace embeddings\nby trace inequalities. In Proceedings of the twenty-seventh\nannual ACM-SIAM symposium on Discrete algorithms ,\npp. 278–287. SIAM, 2016.\nCook, S. CUDA Programming: A Developer’s Guide to\nParallel Computing with GPUs . Morgan Kaufmann\nPublishers Inc., San Francisco, CA, USA, 1st edition,\n2012. ISBN 9780124159334.\nCox, M. and Cox, T. Multidimensional scaling, 315–347.\nHandbook of data visualization. Springer, Berlin,\nGermany , 2008.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Dao, T., Fu, D. Y ., Ermon, S., Rudra, A., and Ré, C.\nFlashattention: Fast and memory-efficient exact attention\nwith io-awareness. In Advances in Neural Information\nProcessing Systems , 2022.\nDatar, M., Immorlica, N., Indyk, P., and Mirrokni, V . S.\nLocality-sensitive hashing scheme based on p-stable distri-\nbutions. In Proceedings of the twentieth annual symposium\non Computational geometry (SoCG) , pp. 253–262, 2004.\nde Marneffe, M.-C., Simons, M., and Tonhauser, J. The' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='commitmentbank: Investigating projection in naturally\noccurring discourse. 2019.\nDeng, Y ., Li, Z., and Song, Z. Attention scheme inspired soft-\nmax regression. arXiv preprint arXiv:2304.10411 , 2023a.\nDeng, Y ., Mahadevan, S., and Song, Z. Randomized and\ndeterministic attention sparsification algorithms for\nover-parameterized feature dimension. arxiv preprint:\narxiv 2304.03426 , 2023b.\nDerpanis, K. G. Mean shift clustering. Lecture Notes , 32:\n1–4, 2005.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Dettmers, T., Lewis, M., Belkada, Y., and Zettlemoyer, L.\nLLM.int8(): 8-bit matrix multiplication for transformers\nat scale. arXiv preprint arXiv:2208.07339, 2022.\nDong, S., Lee, Y. T., and Ye, G. A nearly-linear time\nalgorithm for linear programs with small treewidth: A\nmultiscale representation of robust central path. In\nProceedings of the 53rd Annual ACM SIGACT Symposium\non Theory of Computing, pp. 1784–1797, 2021.\nDong, Y., Indyk, P., Razenshteyn, I., and Wagner, T. Learning' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='space partitions for nearest neighbor search. In Interna-\ntional Conference on Learning Representations , 2019.\n11' metadata={'source': 'pdfs/paper_3.pdf', 'page': 10} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nFang, J., Yu, Y ., Zhao, C., and Zhou, J. Turbotransformers:\nan efficient gpu serving system for transformer models.\nInProceedings of the 26th ACM SIGPLAN Symposium\non Principles and Practice of Parallel Programming , pp.\n389–402, 2021.\nFrankle, J. and Carbin, M. The lottery ticket hypothesis:\nFinding sparse, trainable neural networks. arXiv preprint\narXiv:1803.03635 , 2018.\nFrantar, E. and Alistarh, D. Massive language models' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='can be accurately pruned in one-shot. arXiv preprint\narXiv:2301.00774 , 2023.\nFrantar, E., Ashkboos, S., Hoefler, T., and Alistarh, D. Gptq:\nAccurate post-training quantization for generative pre-\ntrained transformers. arXiv preprint arXiv:2210.17323 ,\n2022.\nFrei, S., Cao, Y ., and Gu, Q. Algorithm-dependent gen-\neralization bounds for overparameterized deep residual\nnetworks. Advances in neural information processing\nsystems , 32, 2019.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='Gao, L., Tow, J., Biderman, S., Black, S., DiPofi, A., Foster,\nC., Golding, L., Hsu, J., McDonell, K., Muennighoff, N.,\nPhang, J., Reynolds, L., Tang, E., Thite, A., Wang, B.,\nWang, K., and Zou, A. A framework for few-shot lan-\nguage model evaluation, September 2021. URL https:\n//doi.org/10.5281/zenodo.5371628 .\nGao, Y ., Mahadevan, S., and Song, Z. An over-parameterized\nexponential regression. arXiv preprint arXiv:2303.16504 ,\n2023a.\nGao, Y ., Song, Z., and Yang, X. Differentially private' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='attention computation. arXiv preprint arXiv:2305.04701 ,\n2023b.\nGiampiccolo, D., Magnini, B., Dagan, I., and Dolan, B. The\nthird PASCAL recognizing textual entailment challenge.\nInProceedings of the ACL-PASCAL Workshop on Textual\nEntailment and Paraphrasing , pp. 1–9, Prague, June\n2007. Association for Computational Linguistics. URL\nhttps://aclanthology.org/W07-1401 .\nGionis, A., Indyk, P., Motwani, R., et al. Similarity search\nin high dimensions via hashing. In Vldb , volume 99, pp.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='518–529, 1999.\nGordon, A., Kozareva, Z., and Roemmele, M. SemEval-2012\ntask 7: Choice of plausible alternatives: An evaluation\nof commonsense causal reasoning. In *SEM 2012: The\nFirst Joint Conference on Lexical and Computational Se-\nmantics – Volume 1: Proceedings of the main conference\nand the shared task, and Volume 2: Proceedings of the\nSixth International Workshop on Semantic Evaluation\n(SemEval 2012) , pp. 394–398, Montréal, Canada, 7-8June 2012. Association for Computational Linguistics.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='URLhttps://aclanthology.org/S12-1052 .\nGu, Y . and Song, Z. A faster small treewidth sdp solver.\narXiv preprint arXiv:2211.06033 , 2022.\nGu, Y ., Song, Z., Yin, J., and Zhang, L. Low rank matrix\ncompletion via robust alternating minimization in nearly\nlinear time. arXiv preprint arXiv:2302.11068 , 2023.\nHall, R. and Attenberg, J. Fast and accurate maximum inner\nproduct recommendations on map-reduce. In Proceedings\nof the 24th International Conference on World Wide Web' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='(WWW) , pp. 1263–1268, 2015.\nHan, S., Mao, H., and Dally, W. J. Deep compression:\nCompressing deep neural networks with pruning, trained\nquantization and huffman coding. arXiv preprint\narXiv:1510.00149 , 2015.\nHarris, M. How to access global memory efficiently in\nCUDA C/C++ kernels. NVIDIA, Jan , 2013.\nHe, K., Zhang, X., Ren, S., and Sun, J. Deep residual\nlearning for image recognition. In Proceedings of\nthe IEEE conference on computer vision and pattern\nrecognition , pp. 770–778, 2016.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='He, Y ., Liu, P., Wang, Z., Hu, Z., and Yang, Y . Filter pruning\nvia geometric median for deep convolutional neural\nnetworks acceleration. In Proceedings of the IEEE/CVF\nconference on computer vision and pattern recognition ,\npp. 4340–4349, 2019.\nHinton, G., Vinyals, O., Dean, J., et al. Distilling\nthe knowledge in a neural network. arXiv preprint\narXiv:1503.02531 , 2(7), 2015.\nHoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N., and Peste,\nA. Sparsity in deep learning: Pruning and growth for' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='efficient inference and training in neural networks. J.\nMach. Learn. Res. , 22(241):1–124, 2021.\nHooker, S. The hardware lottery. Communications of the\nACM , 64(12):58–65, 2021.\nHu, H., Song, Z., Weinstein, O., and Zhuo, D. Training\noverparametrized neural networks in sublinear time.\narXiv preprint arXiv:2208.04508 , 2022.\nIndyk, P. and Motwani, R. Approximate nearest neighbors:\ntowards removing the curse of dimensionality. In\nProceedings of the thirtieth annual ACM symposium on' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='Theory of computing (STOC) , pp. 604–613, 1998a.\nIndyk, P. and Motwani, R. Approximate nearest neighbors:\ntowards removing the curse of dimensionality. In\nProceedings of the thirtieth annual ACM symposium on\nTheory of computing , pp. 604–613, 1998b.\n12' metadata={'source': 'pdfs/paper_3.pdf', 'page': 11} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nIndyk, P. and Wagner, T. Approximate nearest neighbors\nin limited space. In Conference On Learning Theory , pp.\n2012–2036. PMLR, 2018.\nIvanov, A., Dryden, N., Ben-Nun, T., Li, S., and Hoefler,\nT. Data movement is all you need: A case study on\noptimizing transformers. Proceedings of Machine\nLearning and Systems , 3:711–732, 2021.\nJacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='A., Adam, H., and Kalenichenko, D. Quantization and\ntraining of neural networks for efficient integer-arithmetic-\nonly inference. In Proceedings of the IEEE conference on\ncomputer vision and pattern recognition , pp. 2704–2713,\n2018.\nJiang, S., Song, Z., Weinstein, O., and Zhang, H. A faster\nalgorithm for solving general lps. In Proceedings of the\n53rd Annual ACM SIGACT Symposium on Theory of\nComputing , pp. 823–832, 2021.\nJohnson, J., Douze, M., and Jégou, H. Billion-scale' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='similarity search with GPUs. IEEE Transactions on Big\nData , 7(3):535–547, 2019.\nJohnson, W. B. and Lindenstrauss, J. Extensions of\nlipschitz mappings into a hilbert space. Contemporary\nmathematics , 26(189-206):1, 1984.\nKitaev, N., Kaiser, Ł., and Levskaya, A. Reformer: The\nefficient transformer. In ICLR , 2020.\nKurtz, M., Kopinsky, J., Gelashvili, R., Matveev, A., Carr, J.,\nGoin, M., Leiserson, W., Moore, S., Shavit, N., and Alis-\ntarh, D. Inducing and exploiting activation sparsity for fast' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='inference on deep neural networks. In III, H. D. and Singh,\nA. (eds.), Proceedings of the 37th International Confer-\nence on Machine Learning , volume 119 of Proceedings\nof Machine Learning Research , pp. 5533–5543. PMLR,\n13–18 Jul 2020. URL https://proceedings.mlr.\npress/v119/kurtz20a.html .\nLaurent, B. and Massart, P. Adaptive estimation of a\nquadratic functional by model selection. Annals of\nStatistics , pp. 1302–1338, 2000.\nLeCun, Y ., Denker, J., and Solla, S. Optimal brain damage.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Advances in neural information processing systems , 2,\n1989.\nLee, M., He, X., Yih, W.-t., Gao, J., Deng, L., and Smolensky,\nP. Reasoning in vector space: An exploratory study of\nquestion answering. In ICLR , 2016.\nLee, N., Ajanthan, T., and Torr, P. H. Snip: Single-shot\nnetwork pruning based on connection sensitivity. arXiv\npreprint arXiv:1810.02340 , 2018.Lee, Y . T., Song, Z., and Zhang, Q. Solving empirical risk\nminimization in the current matrix multiplication time. In' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Conference on Learning Theory , pp. 2140–2157. PMLR,\n2019.\nLi, P., Li, X., and Zhang, C.-H. Re-randomized densification\nfor one permutation hashing and bin-wise consistent\nweighted sampling. Advances in Neural Information\nProcessing Systems , 32, 2019.\nLi, S., Song, Z., Xia, Y ., Yu, T., and Zhou, T. The closeness\nof in-context learning and weight shifting for softmax\nregression. arXiv preprint , 2023a.\nLi, X. and Li, P. C-MinHash: Improving minwise hashing' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='with circulant permutation. In Chaudhuri, K., Jegelka,\nS., Song, L., Szepesvari, C., Niu, G., and Sabato, S.\n(eds.), Proceedings of the 39th International Conference\non Machine Learning , volume 162 of Proceedings of\nMachine Learning Research , pp. 12857–12887. PMLR,\n17–23 Jul 2022. URL https://proceedings.mlr.\npress/v162/li22m.html .\nLi, Z., You, C., Bhojanapalli, S., Li, D., Rawat, A. S.,\nReddi, S. J., Ye, K., Chern, F., Yu, F., Guo, R., and\nKumar, S. Large models are parsimonious learners:' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Activation sparsity in trained transformers, 2022. URL\nhttps://arxiv.org/abs/2210.06313 .\nLi, Z., Song, Z., and Zhou, T. Solving regularized exp, cosh\nand sinh regression problems. arXiv preprint, 2303.15725 ,\n2023b.\nLiang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D.,\nYasunaga, M., Zhang, Y ., Narayanan, D., Wu, Y ., Kumar,\nA., et al. Holistic evaluation of language models. arXiv\npreprint arXiv:2211.09110 , 2022.\nLiu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Rethinking the value of network pruning. arXiv preprint\narXiv:1810.05270 , 2018.\nLiu, Z., Xu, Z., Ji, A., Zhang, J., Li, J., Chen, B., and\nShrivastava, A. Halos: Hashing large output space for\ncheap inference. Proceedings of Machine Learning and\nSystems , 4:110–125, 2022.\nLu, Y ., Dhillon, P., Foster, D. P., and Ungar, L. Faster ridge\nregression via the subsampled randomized hadamard\ntransform. In Advances in neural information processing\nsystems (NIPS) , pp. 369–377, 2013.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Malkov, Y ., Ponomarenko, A., Logvinov, A., and Krylov,\nV . Approximate nearest neighbor algorithm based on\nnavigable small world graphs. Information Systems , 45:\n61–68, 2014.\n13' metadata={'source': 'pdfs/paper_3.pdf', 'page': 12} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nMalkov, Y . A. and Yashunin, D. A. Efficient and robust ap-\nproximate nearest neighbor search using hierarchical nav-\nigable small world graphs. IEEE transactions on pattern\nanalysis and machine intelligence , 42(4):824–836, 2018.\nMeng, X. and Mahoney, M. W. Low-distortion subspace\nembeddings in input-sparsity time and applications to\nrobust linear regression. In Proceedings of the forty-fifth' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='annual ACM symposium on Theory of computing , pp.\n91–100, 2013.\nMerity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer\nsentinel mixture models, 2016.\nMichel, P., Levy, O., and Neubig, G. Are sixteen heads\nreally better than one? Advances in neural information\nprocessing systems , 32, 2019.\nMihaylov, T., Clark, P., Khot, T., and Sabharwal, A. Can a\nsuit of armor conduct electricity? a new dataset for open\nbook question answering. In EMNLP , 2018.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M.,\nHajishirzi, H., and Zettlemoyer, L. Rethinking the role\nof demonstrations: What makes in-context learning work?\narXiv preprint arXiv:2202.12837 , 2022.\nMolchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J.\nPruning convolutional neural networks for resource ef-\nficient inference. arXiv preprint arXiv:1611.06440 , 2016.\nNagel, M., Baalen, M. v., Blankevoort, T., and Welling,\nM. Data-free quantization through weight equalization' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='and bias correction. In Proceedings of the IEEE/CVF\nInternational Conference on Computer Vision , pp.\n1325–1334, 2019.\nNelson, J. and Nguyên, H. L. Osnap: Faster numerical linear\nalgebra algorithms via sparser subspace embeddings.\nIn2013 ieee 54th annual symposium on foundations of\ncomputer science , pp. 117–126. IEEE, 2013.\nNeyshabur, B. and Srebro, N. On symmetric and asymmetric\nlshs for inner product search. In International Conference\non Machine Learning (ICML) , pp. 1926–1934. PMLR,\n2015.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='NVIDIA. Fastertransformer. https://github.com/\nNVIDIA/FasterTransformer .\nNVIDIA. Gpu performance background user’s\nguide, 2022. URL https://docs.nvidia.\ncom/deeplearning/performance/\ndl-performance-gpu-background/index.\nhtml .\nPark, G., Park, B., Kwon, S. J., Kim, B., Lee, Y ., and Lee,\nD. nuqmm: Quantized matmul for efficient inference of\nlarge-scale generative language models. arXiv preprint\narXiv:2206.09557 , 2022.Pope, R., Douglas, S., Chowdhery, A., Devlin, J., Bradbury,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='J., Levskaya, A., Heek, J., Xiao, K., Agrawal, S., and\nDean, J. Efficiently scaling transformer inference. arXiv\npreprint arXiv:2211.05102 , 2022.\nQin, L., Song, Z., and Wang, Y . Fast submodular function\nmaximization. CoRR , abs/2305.08367, 2023a.\nQin, L., Song, Z., Zhang, L., and Zhuo, D. An online and\nunified algorithm for projection matrix vector multipli-\ncation with application to empirical risk minimization. In\nAISTATS , 2023b.\nRadford, A., Wu, J., Child, R., Luan, D., Amodei, D., and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='Sutskever, I. Language models are unsupervised multitask\nlearners. 2019.\nRaffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S.,\nMatena, M., Zhou, Y ., Li, W., and Liu, P. J. Exploring\nthe limits of transfer learning with a unified text-to-text\ntransformer. arXiv e-prints , 2019.\nRazenshteyn, I., Song, Z., and Woodruff, D. P. Weighted\nlow rank approximations with provable guarantees. In\nProceedings of the forty-eighth annual ACM symposium\non Theory of Computing , pp. 250–263, 2016.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='Sarlos, T. Improved approximation algorithms for large\nmatrices via random projections. In 2006 47th annual\nIEEE symposium on foundations of computer science\n(FOCS) , pp. 143–152. IEEE, 2006.\nSeo, M., Lee, J., Kwiatkowski, T., Parikh, A. P., Farhadi,\nA., and Hajishirzi, H. Real-time open-domain question\nanswering with dense-sparse phrase index. In ACL, pp.\n4430–4441, 2019.\nShrivastava, A., Song, Z., and Xu, Z. Sublinear least-squares\nvalue iteration via locality sensitive hashing. arXiv' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='preprint arXiv:2105.08285 , 2021.\nSmith, J. E. A study of branch prediction strategies. In\n25 years of the international symposia on Computer\narchitecture (selected papers) , pp. 202–215, 1998.\nSohler, C. and Woodruff, D. P. Subspace embeddings\nfor the l1-norm with applications. In Proceedings of\nthe forty-third annual ACM symposium on Theory of\ncomputing , pp. 755–764, 2011.\nSong, Z. and Ye, M. Efficient asynchronize stochas-\ntic gradient algorithm with structured data. CoRR ,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='abs/2305.08001, 2023.\nSong, Z. and Yu, Z. Oblivious sketching-based central path\nmethod for linear programming. In International Confer-\nence on Machine Learning , pp. 9835–9847. PMLR, 2021.\n14' metadata={'source': 'pdfs/paper_3.pdf', 'page': 13} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nSong, Z., Woodruff, D. P., and Zhong, P. Low rank approx-\nimation with entrywise l1-norm error. In Proceedings of\nthe 49th Annual ACM SIGACT Symposium on Theory of\nComputing , pp. 688–701, 2017.\nSong, Z., Woodruff, D. P., and Zhong, P. Relative error\ntensor low rank approximation. In Proceedings of the\nThirtieth Annual ACM-SIAM Symposium on Discrete\nAlgorithms (SODA) , pp. 2772–2789. SIAM, 2019.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='Song, Z., Zhang, L., and Zhang, R. Training multi-layer\nover-parametrized neural network in subquadratic time.\narXiv preprint arXiv:2112.07628 , 2021.\nSong, Z., Wang, W., and Yin, C. Fast and efficient\nmatching algorithm with deadline instances. CoRR ,\nabs/2305.08353, 2023a.\nSong, Z., Yang, X., Yang, Y ., and Zhang, L. Sketching meets\ndifferential privacy: fast algorithm for dynamic kronecker\nprojection maintenance. In International Conference on\nMachine Learning (ICML) , 2023b.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='Tang, R., Lu, Y ., Liu, L., Mou, L., Vechtomova, O., and Lin,\nJ. Distilling task-specific knowledge from bert into simple\nneural networks. arXiv preprint arXiv:1903.12136 , 2019.\nTillet, P., Kung, H.-T., and Cox, D. Triton: an interme-\ndiate language and compiler for tiled neural network\ncomputations. In Proceedings of the 3rd ACM SIGPLAN\nInternational Workshop on Machine Learning and\nProgramming Languages , pp. 10–19, 2019.\nTouvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='A., and Jégou, H. Training data-efficient image trans-\nformers & distillation through attention. In International\nConference on Machine Learning , pp. 10347–10357.\nPMLR, 2021.\nVeit, A., Wilber, M. J., and Belongie, S. Residual networks\nbehave like ensembles of relatively shallow networks. Ad-\nvances in neural information processing systems , 29, 2016.\nViterbi, A. Error bounds for convolutional codes and an\nasymptotically optimum decoding algorithm. IEEE trans-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='actions on Information Theory , 13(2):260–269, 1967.\nWang, B. and Komatsuzaki, A. GPT-J-6B: A\n6 billion parameter autoregressive language\nmodel. https://github.com/kingoflolz/\nmesh-transformer-jax , May 2021.\nWang, R. and Woodruff, D. P. Tight bounds for lp oblivious\nsubspace embeddings. 2018.\nWang, X., Xiong, Y ., Wei, Y ., Wang, M., and Li, L. Lightseq:\nA high performance inference library for transformers.\nInProceedings of the 2021 Conference of the North' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='American Chapter of the Association for ComputationalLinguistics: Human Language Technologies: Industry\nPapers , pp. 113–120, 2021.\nWoodruff, D. P. Sketching as a tool for numerical linear\nalgebra. Foundations and Trends ®in Theoretical\nComputer Science , 10(1–2):1–157, 2014.\nXiao, G., Lin, J., Seznec, M., Demouth, J., and Han, S.\nSmoothquant: Accurate and efficient post-training\nquantization for large language models. arXiv preprint\narXiv:2211.10438 , 2022.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='Xie, S. M., Raghunathan, A., Liang, P., and Ma, T.\nAn explanation of in-context learning as implicit\nbayesian inference. In International Conference\non Learning Representations , 2022. URL https:\n//openreview.net/forum?id=RdJVFCHjUMI .\nXue, H.-J., Dai, X., Zhang, J., Huang, S., and Chen, J. Deep\nmatrix factorization models for recommender systems.\nInIJCAI , pp. 3203–3209, 2017.\nYao, Z., Aminabadi, R. Y ., Zhang, M., Wu, X., Li, C., and\nHe, Y . Zeroquant: Efficient and affordable post-training' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='quantization for large-scale transformers. arXiv preprint\narXiv:2206.01861 , 2022.\nYu, G.-I., Jeong, J. S., Kim, G.-W., Kim, S., and Chun, B.-G.\nOrca: A distributed serving system for {Transformer-\nBased}generative models. In 16th USENIX Symposium\non Operating Systems Design and Implementation (OSDI\n22), pp. 521–538, 2022.\nZandieh, A., Han, I., Daliri, M., and Karbasi, A. Kdeformer:\nAccelerating transformers via kernel density estimation.\narXiv preprint arXiv:2302.02451 , 2023.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='Zhang, L. Speeding up optimizations via data structures:\nFaster search, sample and maintenance. Master’s thesis,\nCarnegie Mellon University, 2022.\nZhang, M., Wang, W., Liu, X., Gao, J., and He, Y . Navi-\ngating with graph representations for fast and scalable\ndecoding of neural language models. Advances in neural\ninformation processing systems , 31, 2018.\nZhao, R., Hu, Y ., Dotzel, J., De Sa, C., and Zhang, Z.\nImproving neural network quantization without retraining' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='using outlier channel splitting. In International conference\non machine learning , pp. 7543–7552. PMLR, 2019.\n15' metadata={'source': 'pdfs/paper_3.pdf', 'page': 14} |
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nContents: In Section A, we present an extended discussion on LLM inference and related works. In Section B, we provide\nmore observation plots for slowly changing activation and further observation on the possibility of sparsifying LLMs via layer\nskipping. In Section C, we provide experiment details. In Section D, we demonstrate implementation details. In Section E,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='we provide detailed benchmarks regarding our implementation. In Section F, we define some basic notations and definitions.\nIn Section G, we define subspace embedding and show the norm preserving. In Section H, we introduce distances, angles, and\ninner product. In Section I, we provide the distance between different functions. In Section J, we provide the Near-neighbor\nSearch data structure. In Section K, we discuss self-attention as a clustering algorithm in depth.\nA Related Work' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='Generative LLM inference. Taking OPT-175B as an example and assuming 6 A100 80GB PCIe GPUs, based on the hardware\nspecifications we compare the two main phases of LLM inference, namely prompting and token generation, in Table 1, and\nthe two major components, namely the Multi-Head-Attention block and the MLP block, in Table 2. In practice, the token generation\nphase usually dominates the end-to-end latency due to IO latency. Generating only two tokens takes about the same latency as' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='prompting. Further, during token generation, the MLP block is 2× more expensive in both FLOPs and IO access. The hardware\nis often underutilized because memory reads and writes, rather than tensor core computation, are the bottleneck on modern hardware.\nGiven the rapid development of LLMs, a number of systems specialized for LLM inference have emerged, such as\nFaster Transformer (NVIDIA), Orca (Yu et al., 2022), LightSeq (Wang et al., 2021), PaLM inference (Pope et al., 2022),' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='TurboTransformers (Fang et al., 2021), and Deepspeed-Inference (Aminabadi et al., 2022). In practice, the token generation\nphase usually dominates the end-to-end inference time. Although the state-of-the-art systems introduce some helpful system\noptimizations for speedup, there is a lack of careful algorithm and system co-design to unleash the full potential of hardware\nefficiency during the LLM inference computation.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='Near-neighbor Search for Efficient Deep Neural Networks. Near-neighbor Search is a well-studied problem with wide\napplications in recommendation system (Xue et al., 2017; Hall & Attenberg, 2015), question answering (Boytsov et al., 2016;\nSeo et al., 2019; Chang et al., 2020) and natural language processing (Bengio et al., 2003; Lee et al., 2016). There has been a\nline of work using Near-neighbor Search techniques such as Locality-sensitive hashing (Gionis et al., 1999) and Graph-based' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='indexing (Malkov et al., 2014) for efficient deep neural network training or inference (Zhang et al., 2018; Chen et al., 2019;\n2020a; Kitaev et al., 2020; Chen et al., 2021b;a; Liu et al., 2022).\nQuantization, pruning, distillation for LLM inference. Various system relaxations have been studied for decades for\nmodel inference in machine learning. For example, quantization (Han et al., 2015; Jacob et al., 2018; Nagel et al., 2019; Zhao' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='et al., 2019), pruning (Molchanov et al., 2016; Liu et al., 2018; He et al., 2019; Hoefler et al., 2021), and distillation (Hinton\net al., 2015; Cho & Hariharan, 2019; Tang et al., 2019; Touvron et al., 2021) have been applied to speed up the inference of\nmachine learning models. Active research has recently attempted to apply such techniques to LLM inference. For example,\nZeroQuant (Yao et al., 2022) and nuQmm (Park et al., 2022) implement customized CUDA kernels to support tensor-wise' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='or group-wise quantization for LLM inference; LLM.int8 (Dettmers et al., 2022) adopts a mixed INT8/FP16 computation to\ndiminish the influence of activation outliers; SmoothQuant (Xiao et al., 2022) enables efficient 8-bit weight and activation for\nLLM inference; GPTQ (Frantar et al., 2022) adopts a one-shot weight quantization method based on approximate second-order\ninformation for accuracy and efficiency; SparseGPT (Frantar & Alistarh, 2023) introduces an approximate sparse regression' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='solver to enable sparsity in LLM inference; Bansal et al. (2022) report that a small set of attention heads can perform the\nprimitive induction operations associated with in-context learning, and use this property to prune LLMs for acceleration.\nResidual connections in neural networks. Residual connections show great advantages for neural network generalization:\nthey provide additional paths for activations to reach the later parts of the neural network by skipping some layers (He et al.,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='2016). Networks with residual connections can be viewed as ensembles of multiple shallow neural networks (Veit\net al., 2016). A large body of active research has discussed the effectiveness of residual connections (Balduzzi et al., 2017; Bello\net al., 2021; Allen-Zhu & Li, 2019; Frei et al., 2019). However, as far as we know, no prior work leverages\nthe property of residual connections to improve the efficiency of LLM inference.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='B Additional Observations on Slowly Changing Activations\nFirst, we present more plots on the cosine similarity between representations. Figure 9 plots the cosine similarity between\nactivations across layers for the OPT family. It is evident that the similarity is high for the larger models.\nThere are two residual connections inside a transformer layer, one around the attention block, and the other one around the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='MLP block. The residual connection can be written as X+F(X), where F is either the Multi-Head Attention block or the two-layer\nMLP. Figure 10 plots the cosine similarity between X and X+F(X), which is close to 1.0, and the cosine similarity between' metadata={'source': 'pdfs/paper_3.pdf', 'page': 15} |
page_content='X and F(X), which is close to 0.0. This happens because ∥X∥ is significantly greater than ∥F(X)∥, shown in purple.\n[Figure 9. Cosine similarity between layer l and layer l+1 for various models; panels (a)-(f): OPT-1.3B, OPT-6.7B, OPT-13B, OPT-30B, OPT-66B, OPT-175B; x-axis: transformer layer, y-axis: cosine similarity, curves for n = 1, 2, 4, 8.]' metadata={'source': 'pdfs/paper_3.pdf', 'page': 16} |
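To make the plotted quantities concrete, here is a minimal PyTorch sketch (not the paper's code) that computes the two cosine similarities and the norms for a single layer, assuming the block input X and block output F(X) have already been collected as tensors:

```python
# Minimal sketch: given the block input X and block output F(X) for one token,
# measure the quantities shown in Figures 9 and 10.
# Assumes X and F_X are torch tensors of shape (hidden_dim,) collected per layer.
import torch
import torch.nn.functional as F

def residual_stats(X: torch.Tensor, F_X: torch.Tensor) -> dict:
    out = X + F_X  # residual connection X + F(X)
    return {
        "cos(X, X+F(X))": F.cosine_similarity(X, out, dim=0).item(),
        "cos(X, F(X))":   F.cosine_similarity(X, F_X, dim=0).item(),
        "||X||":          X.norm().item(),
        "||F(X)||":       F_X.norm().item(),
    }

# Because ||X|| >> ||F(X)|| in most layers, X + F(X) stays nearly parallel to X,
# so cos(X, X+F(X)) is close to 1 even though cos(X, F(X)) is close to 0.
```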
page_content='In the first layer, ∥F(X)∥ is larger, which explains the low cosine similarity. The magnitude of the L2 norm differs across\nmodels; however, we observe a similar trend for models of different sizes. There is a normalization layer before F(X),\nand layer normalization scales ∥X∥ to a consistent magnitude across layers (e.g., 85 for OPT-30B, 110 for OPT-175B),\nbut it does not necessarily scale ∥X∥ down.\nC Additional Experiment Detail\nC.1 Large Batch Size' metadata={'source': 'pdfs/paper_3.pdf', 'page': 16} |
page_content='To help understand where the speed-up comes from when the batch size is greater than 1, we present the Union Contextual Sparsity\n(the fraction of neurons/heads that are not used by any of the inputs in the batch) for different batch sizes for the MLP and Attention\nblocks, respectively, in Figure 11. Union Contextual Sparsity is calculated as 1.0 minus the ratio of the union of activated MLP neurons or\nAttention heads across the batch to the total number of neurons or heads. The union operation is essential to realize a fast sparse GEMM.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 16} |
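As a minimal sketch of this calculation (the boolean-mask interface is an assumption for illustration, not DEJAVU's actual code), the union sparsity for one layer could be computed as:

```python
# Compute Union Contextual Sparsity for one layer from a per-example activation mask.
import torch

def union_contextual_sparsity(active_mask: torch.Tensor) -> float:
    """active_mask: (batch_size, num_units) bool tensor; True means a neuron/head
    is predicted to be activated for that input."""
    union = active_mask.any(dim=0)            # a unit is kept if ANY input in the batch needs it
    return 1.0 - union.float().mean().item()  # fraction of units no input touches

# Only the neurons/heads in `union` need to be loaded and multiplied, which is what
# makes a single batched sparse GEMM possible for the whole batch.
```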
page_content='Surprisingly, the number of MLP neurons/Attention heads that DEJAVU activates does not grow linearly with the batch size.\nThis suggests a power-law distribution rather than a uniform distribution of parameter access across input examples. Further,\na larger batch size can easily lead to out-of-memory in long-sequence settings due to limited GPU memory, the large\nmodel size, and the stored KV cache. For example, the total GPU memory of 8 A100 80GB GPUs is 640GB. Model parameters' metadata={'source': 'pdfs/paper_3.pdf', 'page': 16} |
page_content='are around 350GB for OPT-175B. The KV cache for batch size 32 with a sequence longer than 1920 tokens has already\nfilled up the GPU memory (for OPT-175B, the FP16 keys and values across all 96 layers take roughly 4.7 MB per token, so 32 × 1920 tokens alone require close to 290GB).\nC.2 Near Neighbor Classifier\nIn the DEJAVU framework, any near-neighbor search method under the inner product metric would be sufficient to predict\na sparsity pattern. "Training the predictor" refers to reducing the cost of on-the-fly prediction, rather than training the model itself.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 16} |
page_content='[Figure 10 panels (a)-(f): OPT-1.3B, OPT-6.7B, OPT-13B, OPT-30B, OPT-66B, OPT-175B; each panel plots, per transformer layer, the cosine similarities Cos(X, X+F(X)) and Cos(X, F(X)) at the Attention and MLP residuals, alongside the L2 norms ||X||, ||F(X)||, and ||LN(X)||.]' metadata={'source': 'pdfs/paper_3.pdf', 'page': 17} |
page_content='Figure 10. Cosine similarity between X and F(X), and the cosine similarity between X and X′ in orange color. L2 norm of X and F(X)\nand X after layer normalization in purple on the right. Except at the first layer, ∥X∥ is significantly higher than ∥F(X)∥. ∥F(X)∥ is\nhigher at the first layer, which corresponds to the low cosine similarity at the first layer.\nFor example, in our exploration stage mentioned in Section 4.1, we adopt HNSW, a state-of-the-art near-neighbor search method,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 17} |
page_content='to predict the MLP sparsity pattern, and we can see from the following table that there is no drop in perplexity at a 90% sparsity\nratio. However, due to the high dimensionality of the embeddings and HNSW’s reliance on the CPU, HNSW takes 10 ms to identify\nthe sparsity pattern, which is longer than the MLP computation itself.\nIn our paper, we choose a neural network classifier as our near-neighbor search method to take advantage of the fast matrix' metadata={'source': 'pdfs/paper_3.pdf', 'page': 17} |
page_content='multiplication on GPU. Training such classifiers to predict sparsity patterns is not only cheaper in terms of training cost\nbut also conceptually different from training the model itself.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 17} |
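For illustration, the following is a minimal sketch of such a lightweight predictor; the layer sizes, low-rank structure, and training loop here are assumptions for the example, not the paper's exact architecture:

```python
# Illustrative sketch: a small MLP that maps a hidden state to per-neuron
# "will be activated" scores, trained against the ground-truth sparsity mask
# collected from dense forward passes.
import torch
import torch.nn as nn

hidden_dim, num_neurons, rank = 12288, 49152, 1024  # sizes are assumptions

predictor = nn.Sequential(
    nn.Linear(hidden_dim, rank, bias=False),   # low-rank projection keeps the predictor cheap
    nn.Linear(rank, num_neurons, bias=False),  # one logit per MLP neuron
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(x: torch.Tensor, active_mask: torch.Tensor) -> float:
    """x: (batch, hidden_dim) block inputs; active_mask: (batch, num_neurons) 0/1
    labels marking neurons with non-zero activation in the dense run."""
    logits = predictor(x)
    loss = loss_fn(logits, active_mask.float())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# At inference time, thresholding (or taking the top-k of) predictor(x) selects the
# neurons to load, replacing a CPU-bound HNSW lookup with one small GPU matrix multiply.
```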
page_content='[Figure 11. Union contextual sparsity with larger batch size; panel (a) MLP and panel (b) Attention, plotted per transformer layer for batch sizes 2, 4, 8, 16, 32.]\nOPT-1.3B OPT-1.3B + HNSW\nHellaswag (accuracy) 0.4154 0.4314\nC4 (perplexity) 14.2 14.4' metadata={'source': 'pdfs/paper_3.pdf', 'page': 18} |
page_content='Table 8. Sparsify from the Depth: Skipping or parallelizing entire transformer blocks may not lead to a catastrophic drop in accuracy at test time.\nModel COPA Hellaswag Lambada OpenBookQA PIQA Winogrande\nOPT-175B 0.8600 0.7814 0.7584 0.4460 0.8096 0.7261\n- Parallel 2 0.8300 0.7737 0.7762 0.4520 0.8030 0.7096\n- Parallel 4 0.5200 0.2519 0 0.2720 0.5092 0.4870\n- Skip 2/8 0.8000 0.7112 0.6387 0.4220 0.7840 0.6630\n- Skip 2/4 0.6900 0.4409 0.0240 0.3400 0.6882 0.5383' metadata={'source': 'pdfs/paper_3.pdf', 'page': 18} |
page_content='Bloom 0.8000 0.7460 0.6771 0.4480 0.7949 0.7040\n- Parallel 2 0.8100 0.7404 0.6992 0.4360 0.7813 0.7048\n- Parallel 4 0.6200 0.3176 0.1325 0.2720 0.5593 0.5217\n- Skip 2/8 0.7900 0.6829 0.5936 0.4120 0.7699 0.6614\n- Skip 2/4 0.6600 0.5538 0.3023 0.3580 0.7046 0.5549\nC.3 Future Possibility: Skipping Layer\nDeja Vu currently sparsifies from the perspective of model width. Here, we explore the possibility of sparsification from' metadata={'source': 'pdfs/paper_3.pdf', 'page': 18} |