arxiv:2410.02367

SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration

Published on Oct 3
· Submitted by akhaliq on Oct 4
#2 Paper of the day
Abstract

The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When handling large sequence lengths, attention becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layer. In response, we first analyze the feasibility of quantization in attention in detail. Following that, we propose SageAttention, a highly efficient and accurate quantization method for attention. Our approach achieves about 2.1x and 2.7x the OPS (operations per second) of FlashAttention2 and xformers, respectively. SageAttention also achieves better accuracy than FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for large language models, image generation, and video generation.
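For intuition about what 8-bit attention quantization involves, here is a minimal sketch of the QK^T part only: quantize Q and K to INT8 with symmetric per-row scales, multiply the low-precision values, and dequantize the scores. This is an illustration, not the paper's kernel; the function names, shapes, and per-row scheme are assumptions, and a real implementation runs the matmul on INT8 tensor cores with an INT32 accumulator rather than emulating it in float32.

```python
import torch

def quantize_int8(x, dim=-1):
    """Symmetric per-row INT8 quantization: returns the int8 tensor and per-row scales."""
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.round(x / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def int8_qk_scores(Q, K):
    """Q @ K^T computed from INT8 inputs, then dequantized with the per-row scales.
    The product is emulated in float32 here (exact for values bounded by 127);
    a real kernel would use INT8 tensor cores with an INT32 accumulator."""
    q_i8, q_scale = quantize_int8(Q)   # [N, d] int8, [N, 1] float scales
    k_i8, k_scale = quantize_int8(K)
    s = q_i8.float() @ k_i8.float().T
    return s * q_scale * k_scale.T     # entry (i, j) is rescaled by q_scale[i] * k_scale[j]

# quick accuracy check against the full-precision scores
Q, K = torch.randn(128, 64), torch.randn(128, 64)
err = (int8_qk_scores(Q, K) - Q @ K.T).abs().max()
print(f"max abs error of INT8 QK^T: {err:.4f}")
```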

Community

Paper submitter
Paper author

Two points regarding speed need to be emphasized:

  1. Based on the NVIDIA whitepaper [1], the throughput of FP8 Matmul is 330 TFLOPS, whereas INT8 Matmul reaches 660 TOPS, twice as fast.

  2. Also, as detailed in [1], utilizing an FP16 accumulator for FP16 Matmul achieves 330 TFLOPS, double the speed of using an FP32 accumulator (165 TFLOPS).

[1] https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf
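To see how these two figures bound the achievable speedup, here is a back-of-the-envelope calculation. It assumes (as the two points above suggest) that the QK^T matmul runs in INT8 and the PV matmul runs in FP16 with an FP16 accumulator, against a baseline that does both matmuls in FP16 with FP32 accumulation; this is an illustration of the peak-rate arithmetic, not a measurement from the paper.

```python
# Dense tensor-core rates from the Ada whitepaper [1]; real kernels reach only a
# fraction of these peaks, so the result is an upper bound on the matmul speedup.
TFLOPS_FP16_FP32ACC = 165   # FP16 Matmul, FP32 accumulator
TFLOPS_FP16_FP16ACC = 330   # FP16 Matmul, FP16 accumulator (point 2 above)
TOPS_INT8           = 660   # INT8 Matmul (point 1 above)

# Attention performs two matmuls of equal FLOP count per block: S = Q K^T and O = P V.
baseline  = 1 / TFLOPS_FP16_FP32ACC + 1 / TFLOPS_FP16_FP32ACC   # both in FP16 / FP32-acc
quantized = 1 / TOPS_INT8 + 1 / TFLOPS_FP16_FP16ACC             # INT8 QK^T, FP16/FP16-acc PV
print(f"ideal speedup on the two matmuls: {baseline / quantized:.2f}x")  # ~2.67x
```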


I think there may be a typo on page 3 in "3.1 FLASHATTENTION". Specifically, it says "proposes to tile Q, K, and V from the token dimension into blocks {Qi}, {Ki}, {Vi} with block size of b_q, b_kv, b_kv, respectively." Shouldn't it be b_q, b_k, b_v instead of b_q, b_kv, b_kv?

