arxiv:2208.07339

LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

Published on Aug 15, 2022
Authors: Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer

Abstract

Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cuts the memory needed for inference in half while retaining full-precision performance. With our method, a 175B-parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication to quantize most of the features. For the emergent outliers, however, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while more than 99.9% of values are still multiplied in 8-bit. Using LLM.int8(), we show empirically that it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software.
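A minimal PyTorch sketch of the two-part procedure the abstract describes (vector-wise quantization plus a mixed-precision decomposition for outlier feature dimensions). This is an illustration only: the function name, shapes, and usage below are assumptions, the int8 arithmetic is emulated in float, and the paper's actual CUDA kernels are released separately (bitsandbytes).

```python
import torch

def llm_int8_matmul(X: torch.Tensor, W: torch.Tensor, threshold: float = 6.0) -> torch.Tensor:
    """Sketch of X @ W with int8 arithmetic for regular feature dimensions and
    higher precision for the rare outlier dimensions (hypothetical helper)."""
    # 1) Outlier feature dimensions: hidden-state columns with any |value| >= threshold.
    outlier_cols = (X.abs() >= threshold).any(dim=0)
    regular_cols = ~outlier_cols

    # 2) Outlier part: these few dimensions stay in 16-bit in the real kernels;
    #    computed in full precision here for simplicity.
    out_outlier = X[:, outlier_cols] @ W[outlier_cols, :]

    # 3) Regular part: vector-wise quantization with one scale per row of X and
    #    per column of W, i.e. a separate normalization constant per inner product.
    Xr, Wr = X[:, regular_cols], W[regular_cols, :]
    cx = Xr.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0  # (n, 1) row scales
    cw = Wr.abs().amax(dim=0, keepdim=True).clamp(min=1e-8) / 127.0  # (1, m) column scales
    Xq = torch.clamp((Xr / cx).round(), -127, 127)  # emulated int8 values
    Wq = torch.clamp((Wr / cw).round(), -127, 127)
    # Real kernels accumulate in int32; emulated here in float, then denormalized
    # by the outer product of the row and column scales.
    out_regular = (Xq @ Wq) * (cx * cw)

    return out_regular + out_outlier

# Illustrative check against the full-precision result.
X = torch.randn(8, 512)
X[:, 3] *= 30.0                                       # inject one outlier feature dimension
W = torch.randn(512, 256)
print((llm_int8_matmul(X, W) - X @ W).abs().mean())   # small quantization error
```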

Community

Introduces the Int8 quantization procedure/pipeline for LLMs (without performance degradation): LLM.int8() combines vector-wise quantization with separate normalization constants and 16-bit matrix multiplication for the outliers. Reviews absmax and zero-point quantization. Proposes a mixed-precision decomposition that splits the inputs and weights into 8-bit and 16-bit matrix-multiplication terms (with normalization and denormalization). Tests performance degradation of OPT language models on the C4 (Common Crawl) corpus. Emergent large-magnitude features are outliers, detected through an empirical test; they emerge around 6.7B parameters, emerge smoothly across all layers, their median magnitude increases once they occur in all layers, and they scale with perplexity (they also have an asymmetric distribution). GLM-130B is related (16-bit ops and 8-bit storage). Ran OPT-175B and OPT-66B on RTX 3090s (8 and 4 GPUs respectively). The appendix has FAQs, memory-usage comparisons, benchmarking, and even training results (Int8 training of attention degraded performance). From the University of Washington, Meta, and Hugging Face.
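For the absmax and zero-point schemes the note mentions, here is a short sketch of one common formulation (not the paper's exact equations; function names are illustrative):

```python
import torch

def absmax_quantize(x: torch.Tensor):
    """Symmetric 8-bit quantization: scale by 127 / max(|x|)."""
    scale = 127.0 / x.abs().max().clamp(min=1e-8)
    xq = torch.clamp((x * scale).round(), -127, 127).to(torch.int8)
    return xq, scale  # dequantize with xq.float() / scale

def zeropoint_quantize(x: torch.Tensor):
    """Asymmetric 8-bit quantization: a zero point shifts the range so that
    [-128, 127] covers [min(x), max(x)]; useful for skewed distributions."""
    x_min, x_max = x.min(), x.max()
    scale = 255.0 / (x_max - x_min).clamp(min=1e-8)
    zero_point = (-128 - x_min * scale).round()
    xq = torch.clamp((x * scale + zero_point).round(), -128, 127).to(torch.int8)
    return xq, scale, zero_point  # dequantize with (xq.float() - zero_point) / scale
```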

Links: PapersWithCode, HuggingFace Blog, HF Transformers, GitHub
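Following the HF Transformers link, loading a checkpoint with LLM.int8() enabled typically looks like the sketch below (exact keyword arguments vary across transformers versions, and the OPT checkpoint name is just an example):

```python
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # example checkpoint; any supported causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # let accelerate place layers on available GPUs
    load_in_8bit=True,   # run linear layers with LLM.int8() via bitsandbytes
)
inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```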

Models citing this paper 4

Datasets citing this paper 0

Spaces citing this paper 1

Collections including this paper 10