I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders Paper • 2503.18878 • Published Mar 2025 • 112
When Less is Enough: Adaptive Token Reduction for Efficient Image Representation Paper • 2503.16660 • Published Mar 2025 • 71
One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation Paper • 2503.13358 • Published Mar 2025 • 93
RWKV-7 "Goose" with Expressive Dynamic State Evolution Paper • 2503.14456 • Published Mar 2025 • 136
EuroBERT: Scaling Multilingual Encoders for European Languages Paper • 2503.05500 • Published Mar 7 • 76
SynthDetoxM Collection Data and models from NAACL 2025 paper "SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators" by Moskovskiy et al. • 4 items • Updated Mar 6 • 2
When an LLM is apprehensive about its answers -- and when its uncertainty is justified Paper • 2503.01688 • Published Mar 3 • 20
GHOST 2.0: generative high-fidelity one shot transfer of heads Paper • 2502.18417 • Published Feb 25 • 66
LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context Memory of Transformers Paper • 2502.15007 • Published Feb 20 • 171
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? Paper • 2502.14502 • Published Feb 20 • 89
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity Paper • 2502.13063 • Published Feb 18 • 69
SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators Paper • 2502.06394 • Published Feb 10 • 90