Sparse Finetuning for Inference Acceleration of Large Language Models • Paper • 2310.06927 • Published Oct 10, 2023
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression • Paper • 2306.03078 • Published Jun 5, 2023
Extreme Compression of Large Language Models via Additive Quantization • Paper • 2401.06118 • Published Jan 11, 2024
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization • Paper • 2308.02060 • Published Aug 3, 2023
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression • Paper • 2405.14852 • Published May 23, 2024
Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis • Paper • 2412.01819 • Published Dec 2024
AQLM+PV Collection • Official AQLM quantizations for "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression" (https://arxiv.org/abs/2405.14852) • 25 items