
Fast Matrix Multiplications for Lookup Table-Quantized LLMs

Published on Jul 15 · Submitted by radi-cho on Jul 22

Abstract

The deployment of large language models (LLMs) is often constrained by memory bandwidth, where the primary bottleneck is the cost of transferring model parameters from the GPU's global memory to its registers. When coupled with custom kernels that fuse the dequantization and matmul operations, weight-only quantization can thus enable faster inference by reducing the amount of memory movement. However, developing high-performance kernels for weight-quantized LLMs presents substantial challenges, especially when the weights are compressed to non-evenly-divisible bit widths (e.g., 3 bits) with non-uniform lookup table (LUT) quantization. This paper describes FLUTE, a flexible lookup table engine for LUT-quantized LLMs, which uses offline restructuring of the quantized weight matrix to minimize bit manipulations associated with unpacking, and vectorization and duplication of the lookup table to mitigate shared memory bandwidth constraints. At batch sizes < 32 and a quantization group size of 128 (typical in LLM inference), the FLUTE kernel can be 2-4x faster than existing GEMM kernels. As an application of FLUTE, we explore a simple extension to lookup table-based NormalFloat quantization and apply it to quantize LLaMA3 to various configurations, achieving competitive quantization performance against strong baselines and an end-to-end throughput increase of 1.5 to 2 times.
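
To make the fused operation concrete, here is a minimal, unfused NumPy reference of what lookup-table dequantization followed by a matmul computes; the 4-bit codes, the group size, and the toy codebook are illustrative assumptions, and this sketch is not the FLUTE kernel itself.

```python
import numpy as np

def lut_dequant_matmul(x, codes, lut, scales, group_size=128):
    """Unfused reference for the computation a fused dequant+matmul kernel performs.

    codes:  (K, N) integer indices into the lookup table (0..15 for 4-bit codes)
    lut:    (2**bits,) dequantization values, e.g. an NF4-style codebook
    scales: (K // group_size, N) per-group scale factors
    """
    K, N = codes.shape
    w = lut[codes]                                  # table lookup for every code
    w = w.reshape(K // group_size, group_size, N)   # split rows into quantization groups
    w = (w * scales[:, None, :]).reshape(K, N)      # apply the per-group scales
    return x @ w                                    # plain matmul on the dequantized weights

# Toy usage; a random sorted codebook stands in for a real NF4 table.
rng = np.random.default_rng(0)
K, N, group_size = 256, 64, 128
lut = np.sort(rng.uniform(-1.0, 1.0, 16)).astype(np.float32)
codes = rng.integers(0, 16, size=(K, N))
scales = rng.uniform(0.5, 1.5, size=(K // group_size, N)).astype(np.float32)
x = rng.standard_normal((8, K)).astype(np.float32)
print(lut_dequant_matmul(x, codes, lut, scales, group_size).shape)  # (8, 64)
```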

Community

Paper author · Paper submitter

Introducing FLUTE, a CUDA kernel for LLM inference with weights quantized non-uniformly via a lookup table. It accelerates QLoRA's NormalFloat (NF) format out of the box, and more.

As an application, we extended NF4 and are releasing quantized models for LLaMA-3 (8B/70B) and Gemma-2 (9B/27B).
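
For context, a NormalFloat codebook is just a small lookup table derived from quantiles of a standard normal distribution. The sketch below shows one simple way to build such a table; QLoRA's actual NF4 construction differs in details (e.g., it guarantees an exact zero code), and the extension used here is described in the paper, so treat this as an illustration only.

```python
import numpy as np
from scipy.stats import norm

def normal_float_codebook(bits=4):
    """Build a NormalFloat-style lookup table from quantiles of N(0, 1),
    rescaled to [-1, 1]. Illustrative only; the real NF4 construction
    differs in details such as handling of the zero code."""
    n = 2 ** bits
    probs = (np.arange(n) + 0.5) / n   # evenly spaced probabilities in (0, 1)
    q = norm.ppf(probs)                # corresponding normal quantiles
    return (q / np.abs(q).max()).astype(np.float32)

print(normal_float_codebook(4))  # 16 values in [-1, 1], denser near 0 where weights concentrate
```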

Highlights:

  • Supports arbitrary mappings between quantized and dequantized values via a lookup table (INT4, FP4, NF4, etc.).
  • Up to 2-3x faster than dense GEMM kernels.
  • 1.3x to 2.6x end-to-end latency improvement (vLLM).
  • Batch inference, 4- and 3-bit weights, and various group sizes (a toy 3-bit packing sketch follows this list).
  • vLLM integration and 10+ pre-quantized models off-the-shelf.
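
The "non-evenly-divisible bit widths" challenge from the abstract is easy to see with a toy example: 3-bit codes do not tile a 32-bit word cleanly, so packing and unpacking them requires extra bit manipulation (which FLUTE moves offline by restructuring the weight matrix). The NumPy sketch below packs ten 3-bit codes per 32-bit word, wasting 2 bits per word; it only illustrates the bit-fiddling involved and is not FLUTE's actual layout.

```python
import numpy as np

def pack_3bit(codes):
    """Pack 3-bit codes (values 0..7) into uint32 words, 10 codes per word.
    Two bits per word are wasted, one of several trade-offs that make
    non-power-of-two bit widths awkward."""
    codes = np.asarray(codes, dtype=np.uint32)
    pad = (-len(codes)) % 10
    codes = np.concatenate([codes, np.zeros(pad, dtype=np.uint32)]).reshape(-1, 10)
    words = np.zeros(len(codes), dtype=np.uint32)
    for i in range(10):
        words |= codes[:, i] << np.uint32(3 * i)   # place each code in its 3-bit slot
    return words

def unpack_3bit(words, n):
    """Invert pack_3bit: shift and mask each 3-bit field back out."""
    out = np.empty((len(words), 10), dtype=np.uint32)
    for i in range(10):
        out[:, i] = (words >> np.uint32(3 * i)) & np.uint32(0b111)
    return out.reshape(-1)[:n]

codes = np.random.default_rng(0).integers(0, 8, size=100)
assert np.array_equal(unpack_3bit(pack_3bit(codes), 100), codes)
```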

And, for those who care:

  • Almost entirely written in CUTLASS 3 (i.e., CuTe).
  • Uses TensorCore, Async Copy, and Stream-K.
  • Tailored to Ampere GPUs.

Paper: https://arxiv.org/abs/2407.10960
Code: https://github.com/HanGuo97/flute

@HanGuo97 @exists_forall @radi_cho @jrk @ericxing @yoonrkim
