Collections
Collections including paper arxiv:2404.14619
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 239
- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 122
- Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations
  Paper • 2402.07023 • Published • 3
- NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment
  Paper • 2405.01481 • Published • 20

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 80
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 13
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 24
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 21

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 76
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 50
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 28
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 55

- Attention Is All You Need
  Paper • 1706.03762 • Published • 36
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 11
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 10

- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 62
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
  Paper • 2404.07143 • Published • 93
- Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies
  Paper • 2404.08197 • Published • 26
- Pre-training Small Base LMs with Fewer Tokens
  Paper • 2404.08634 • Published • 32

- Jamba: A Hybrid Transformer-Mamba Language Model
  Paper • 2403.19887 • Published • 99
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 31
- ViTAR: Vision Transformer with Any Resolution
  Paper • 2403.18361 • Published • 48
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
  Paper • 2403.18814 • Published • 40