Collections
Collections including paper arxiv:2402.17764
- Visual In-Context Prompting
  Paper • 2311.13601 • Published • 14
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework
  Paper • 2308.08155 • Published • 2
- LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models
  Paper • 2303.02927 • Published • 3
- The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4
  Paper • 2311.07361 • Published • 11
- MART: Improving LLM Safety with Multi-round Automatic Red-Teaming
  Paper • 2311.07689 • Published • 7
- DiLoCo: Distributed Low-Communication Training of Language Models
  Paper • 2311.08105 • Published • 13
- SparQ Attention: Bandwidth-Efficient LLM Inference
  Paper • 2312.04985 • Published • 35
- Aligning Large Language Models with Counterfactual DPO
  Paper • 2401.09566 • Published • 2
- Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model
  Paper • 2310.09520 • Published • 10
- When can transformers reason with abstract symbols?
  Paper • 2310.09753 • Published • 2
- Improving Large Language Model Fine-tuning for Solving Math Problems
  Paper • 2310.10047 • Published • 5
- LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing
  Paper • 2311.00571 • Published • 39
- Large Language Models for Compiler Optimization
  Paper • 2309.07062 • Published • 22
- Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Paper • 2310.17157 • Published • 8
- FP8-LM: Training FP8 Large Language Models
  Paper • 2310.18313 • Published • 30
- Atom: Low-bit Quantization for Efficient and Accurate LLM Serving
  Paper • 2310.19102 • Published • 7
- Kosmos-2.5: A Multimodal Literate Model
  Paper • 2309.11419 • Published • 49
- Nougat: Neural Optical Understanding for Academic Documents
  Paper • 2308.13418 • Published • 33
- Prometheus: Inducing Fine-grained Evaluation Capability in Language Models
  Paper • 2310.08491 • Published • 50
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 571
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
  Paper • 2309.14509 • Published • 16
- LLM Augmented LLMs: Expanding Capabilities through Composition
  Paper • 2401.02412 • Published • 35
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
  Paper • 2401.06066 • Published • 36
- Tuning Language Models by Proxy
  Paper • 2401.08565 • Published • 19
- Chain-of-Verification Reduces Hallucination in Large Language Models
  Paper • 2309.11495 • Published • 37
- Adapting Large Language Models via Reading Comprehension
  Paper • 2309.09530 • Published • 72
- CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages
  Paper • 2309.09400 • Published • 77
- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 81