- Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
  Paper • 2312.09390 • Published • 32
- OneLLM: One Framework to Align All Modalities with Language
  Paper • 2312.03700 • Published • 20
- Generative Multimodal Models are In-Context Learners
  Paper • 2312.13286 • Published • 34
- The LLM Surgeon
  Paper • 2312.17244 • Published • 9
Collections
Collections including paper arxiv:2404.02258
- OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
  Paper • 2402.10176 • Published • 34
- Beyond Language Models: Byte Models are Digital World Simulators
  Paper • 2402.19155 • Published • 49
- Matryoshka Representation Learning
  Paper • 2205.13147 • Published • 9
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation
  Paper • 2403.04692 • Published • 40
- Adapting Large Language Models via Reading Comprehension
  Paper • 2309.09530 • Published • 77
- An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
  Paper • 2309.09958 • Published • 18
- Noise-Aware Training of Layout-Aware Language Models
  Paper • 2404.00488 • Published • 7
- Streaming Dense Video Captioning
  Paper • 2404.01297 • Published • 11
- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 75
- Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
  Paper • 2404.02258 • Published • 104
- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 124
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 251