70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float
Abstract
Large Language Models (LLMs) have grown rapidly in size, creating significant challenges for efficient deployment on resource-constrained hardware. In this paper, we introduce Dynamic-Length Float (DFloat11), a lossless compression framework that reduces LLM size by 30% while preserving outputs that are bit-for-bit identical to the original model. DFloat11 is motivated by the low entropy in the BFloat16 weight representation of LLMs, which reveals significant inefficiency in the existing storage format. By applying entropy coding, DFloat11 assigns dynamic-length encodings to weights based on frequency, achieving near information-optimal compression without any loss of precision. To facilitate efficient inference with dynamic-length encodings, we develop a custom GPU kernel for fast online decompression. Our design incorporates the following: (i) decomposition of memory-intensive lookup tables (LUTs) into compact LUTs that fit in GPU SRAM, (ii) a two-phase kernel for coordinating thread read/write positions using lightweight auxiliary variables, and (iii) transformer-block-level decompression to minimize latency. Experiments on recent models, including Llama-3.1, Qwen-2.5, and Gemma-3, validate our hypothesis that DFloat11 achieves around 30% model size reduction while preserving bit-for-bit exact outputs. Compared to the alternative of offloading parts of an uncompressed model to the CPU to meet memory constraints, DFloat11 achieves 1.9-38.8x higher throughput in token generation. With a fixed GPU memory budget, DFloat11 enables 5.3-13.17x longer context lengths than uncompressed models. Notably, our method enables lossless inference of Llama-3.1-405B, an 810GB model, on a single node equipped with 8x80GB GPUs. Our code and models are available at https://github.com/LeanModels/DFloat11.
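The low-entropy observation is easy to check empirically. The sketch below is a minimal CPU-side illustration in plain Python, not the DFloat11 GPU kernel, and all function names are hypothetical. One straightforward way to exploit the skew, close in spirit to the paper's frequency-based dynamic-length encoding, is to Huffman-code only the 8-bit exponent field of each BFloat16 weight while keeping the sign and mantissa bits raw; the reported average bits per weight then shows how close the format gets to the roughly 11 bits per weight implied by a 30% reduction from 16 bits.

```python
# Minimal sketch (not the DFloat11 implementation): Huffman-code the skewed
# 8-bit exponent field of BFloat16 weights, keep sign + 7 mantissa bits raw,
# and report the average bits per weight after entropy coding.
import heapq
from collections import Counter

import numpy as np


def build_huffman_code(freqs):
    """Return {symbol: bitstring} for a frequency table."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    code = {s: "" for s in freqs}
    if len(code) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(code)): "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            code[s] = "0" + code[s]
        for s in syms2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return code


def encode_bfloat16_tensor(weights_bf16_bits: np.ndarray):
    """Split each 16-bit weight into (sign, exponent, mantissa), Huffman-code
    only the exponent, and return (code table, average bits per weight)."""
    exponent = (weights_bf16_bits >> 7) & 0xFF
    code = build_huffman_code(Counter(exponent.tolist()))
    coded_bits = sum(len(code[e]) for e in exponent.tolist())
    raw_bits = 8 * exponent.size  # 1 sign bit + 7 mantissa bits stay uncompressed
    return code, (coded_bits + raw_bits) / exponent.size
```

Called on the raw 16-bit patterns of a real LLM weight matrix (viewed as an unsigned 16-bit array), the exponent field's low entropy is what pulls the average well below 16 bits; decoding the prefix code and re-concatenating the three fields reproduces the original bit patterns exactly, which is what makes the scheme lossless.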
Community
DFloat11 compresses LLMs by 30% with zero accuracy loss, enabling faster, longer, and more memory-efficient inference on GPUs.
We introduce DFloat11, a lossless compression framework that reduces the size of BFloat16-based LLMs by ~30% while keeping their outputs bit-for-bit identical. By exploiting the low entropy in BFloat16 weights, we apply Huffman-style dynamic-length encoding for near-optimal storage efficiency. To support fast inference, we design a custom GPU kernel that performs online decompression with minimal latency. Our method enables massive models like Llama-3.1-405B (810GB) to run entirely on a single 8×80GB GPU node, without CPU offloading or any accuracy degradation. DFloat11 offers a practical drop-in solution for memory-efficient LLM inference.
- GitHub: https://github.com/LeanModels/DFloat11
- Hugging Face: https://huggingface.co/DFloat11
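On the decompression side, decoding a prefix code is usually done with a lookup table indexed by the next few bits of the stream. The sketch below is again plain Python with hypothetical names, not the DFloat11 GPU kernel: it builds a single flat table of 2^max_len entries, uses it to recover the exponents, and repacks sign, exponent, and mantissa into the original uint16 bit patterns. A single table of that size is exactly the memory-intensive LUT the paper decomposes into compact, SRAM-resident sub-tables, and coordinating where each parallel thread reads and writes is what its two-phase kernel handles.

```python
# Illustrative LUT-based prefix decoder for the encoder sketched above
# (CPU-only; the actual DFloat11 kernel decodes in parallel on the GPU
# with compact sub-tables). All names here are hypothetical.
import numpy as np


def build_decode_lut(code: dict):
    """Flat table indexed by the next max_len bits -> (symbol, code length).
    It has 2**max_len entries, which is why DFloat11 splits it up."""
    max_len = max(len(bits) for bits in code.values())
    lut = [None] * (1 << max_len)
    for sym, bits in code.items():
        base = int(bits, 2) << (max_len - len(bits))
        for filler in range(1 << (max_len - len(bits))):
            lut[base | filler] = (sym, len(bits))
    return lut, max_len


def decode_weights(bitstream: str, code: dict, sign, mantissa) -> np.ndarray:
    """Decode the exponents from the Huffman bitstream and repack the
    original BFloat16 bits: (sign << 15) | (exponent << 7) | mantissa."""
    lut, max_len = build_decode_lut(code)
    exponents, pos = [], 0
    padded = bitstream + "0" * max_len  # pad so the final window is full-width
    while len(exponents) < len(sign):
        sym, length = lut[int(padded[pos:pos + max_len], 2)]
        exponents.append(sym)
        pos += length
    exp = np.asarray(exponents, dtype=np.uint16)
    return ((np.asarray(sign, dtype=np.uint16) << 15)
            | (exp << 7)
            | np.asarray(mantissa, dtype=np.uint16))
```

Because the decoded exponents and the untouched sign/mantissa bits are simply re-concatenated, the output array equals the original weights bit for bit, which is the property the paper's "lossless" claim rests on.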
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Reimagining Memory Access for LLM Inference: Compression-Aware Memory Controller Design (2025)
- TerEffic: Highly Efficient Ternary LLM Inference on FPGA (2025)
- PIPO: Pipelined Offloading for Efficient Inference on Consumer Devices (2025)
- SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs (2025)
- HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading (2025)
- Mind the Memory Gap: Unveiling GPU Bottlenecks in Large-Batch LLM Inference (2025)
- MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Quantization (2025)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend