Collections
Collections including paper arxiv:2404.03411
- Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems
  Paper • 1705.04146 • Published • 1
- Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks?
  Paper • 2404.03411 • Published • 8
- No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
  Paper • 2404.04125 • Published • 27
- Hydragen: High-Throughput LLM Inference with Shared Prefixes
  Paper • 2402.05099 • Published • 17

- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 50
- Beyond Language Models: Byte Models are Digital World Simulators
  Paper • 2402.19155 • Published • 46
- StarCoder 2 and The Stack v2: The Next Generation
  Paper • 2402.19173 • Published • 126
- Simple linear attention language models balance the recall-throughput tradeoff
  Paper • 2402.18668 • Published • 18

- Adapting Large Language Models via Reading Comprehension
  Paper • 2309.09530 • Published • 71
- An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
  Paper • 2309.09958 • Published • 18
- Noise-Aware Training of Layout-Aware Language Models
  Paper • 2404.00488 • Published • 6
- Streaming Dense Video Captioning
  Paper • 2404.01297 • Published • 10