Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens Paper • 2411.17691 • Published Nov 26, 2024
ModernBERT Collection Bringing BERT into modernity via both architecture changes and scaling • 3 items
Balancing Continuous Pre-Training and Instruction Fine-Tuning: Optimizing Instruction-Following in LLMs Paper • 2410.10739 • Published Oct 14, 2024
Instruction Following without Instruction Tuning Paper • 2409.14254 • Published Sep 21, 2024
A Comprehensive Evaluation of Quantized Instruction-Tuned Large Language Models: An Experimental Analysis up to 405B Paper • 2409.11055 • Published Sep 17, 2024
Transforming LLMs into Cross-modal and Cross-lingual Retrieval Systems Paper • 2404.01616 • Published Apr 2, 2024
LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference Paper • 2407.14057 • Published Jul 19, 2024
In-Context Editing: Learning Knowledge from Self-Induced Distributions Paper • 2406.11194 • Published Jun 17, 2024