- Attention Is All You Need
  Paper • 1706.03762 • Published • 39
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 13
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 12

Collections including paper arxiv:2406.11931

- Bootstrapping Language Models with DPO Implicit Rewards
  Paper • 2406.09760 • Published • 37
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
  Paper • 2406.11931 • Published • 54
- Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs
  Paper • 2406.14544 • Published • 33
- Instruction Pre-Training: Language Models are Supervised Multitask Learners
  Paper • 2406.14491 • Published • 76

- DataComp-LM: In search of the next generation of training sets for language models
  Paper • 2406.11794 • Published • 39
- Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs
  Paper • 2406.10209 • Published • 8
- Transformers Can Do Arithmetic with the Right Embeddings
  Paper • 2405.17399 • Published • 50
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
  Paper • 2406.11931 • Published • 54

- ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
  Paper • 2406.04325 • Published • 69
- MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs
  Paper • 2406.11833 • Published • 61
- Depth Anything V2
  Paper • 2406.09414 • Published • 88
- Instruction Pre-Training: Language Models are Supervised Multitask Learners
  Paper • 2406.14491 • Published • 76

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 75
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 51
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 29
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 56

- MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation
  Paper • 2404.11565 • Published • 12
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models
  Paper • 2406.06563 • Published • 17
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
  Paper • 2406.11931 • Published • 54