Paper: Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning (arXiv 2502.06781) • Published 13 days ago
Article: Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel • May 2, 2022
Paper: Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing (arXiv 2502.04411) • Published 18 days ago
Article: Token Merging for fast LLM inference: Background and first trials with Mistral • By samchain • Apr 30, 2024
Paper: Should We Really Edit Language Models? On the Evaluation of Edited Language Models (arXiv 2410.18785) • Published Oct 24, 2024
Paper: DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (arXiv 2410.10819) • Published Oct 14, 2024
Paper: LPZero: Language Model Zero-cost Proxy Search from Zero (arXiv 2410.04808) • Published Oct 7, 2024
Paper: PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs (arXiv 2410.05265) • Published Oct 7, 2024
Article: LLM Data Engineering 3 – Data Collection Magic: Acquiring Top Training Data • By JessyTsu1 • Jun 4, 2024
Collection: 🪐 SmolLM • A series of smol LLMs (135M, 360M, and 1.7B), released as base and Instruct models along with the training corpus and some WebGPU demos • 12 items • Updated 4 days ago
Paper: Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models (arXiv 2406.02924) • Published Jun 5, 2024
Article: Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA • May 24, 2023
Paper: Shortened LLaMA: A Simple Depth Pruning for Large Language Models (arXiv 2402.02834) • Published Feb 5, 2024
Collection: Model Merging • Model merging is a very popular technique for LLMs; this chronological list of papers in the space will help you get started • 30 items • Updated Jun 12, 2024