Collections including paper arxiv:2401.04088

- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 38
- Qwen Technical Report
  Paper • 2309.16609 • Published • 30
- GPT-4 Technical Report
  Paper • 2303.08774 • Published • 3
- Gemini: A Family of Highly Capable Multimodal Models
  Paper • 2312.11805 • Published • 44

- Multilingual Instruction Tuning With Just a Pinch of Multilinguality
  Paper • 2401.01854 • Published • 9
- LLaMA Beyond English: An Empirical Study on Language Capability Transfer
  Paper • 2401.01055 • Published • 51
- LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
  Paper • 2401.01325 • Published • 25
- Improving Text Embeddings with Large Language Models
  Paper • 2401.00368 • Published • 77

- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
  Paper • 2312.16862 • Published • 28
- Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action
  Paper • 2312.17172 • Published • 26
- Towards Truly Zero-shot Compositional Visual Reasoning with LLMs as Programmers
  Paper • 2401.01974 • Published • 4
- From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
  Paper • 2401.01885 • Published • 26

- Attention Is All You Need
  Paper • 1706.03762 • Published • 39
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 12
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 12

- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 56
- PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
  Paper • 2312.12456 • Published • 40
- Cached Transformers: Improving Transformers with Differentiable Memory Cache
  Paper • 2312.12742 • Published • 11
- Mini-GPTs: Efficient Large Language Models through Contextual Pruning
  Paper • 2312.12682 • Published • 7

- Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
  Paper • 2312.09390 • Published • 32
- OneLLM: One Framework to Align All Modalities with Language
  Paper • 2312.03700 • Published • 20
- Generative Multimodal Models are In-Context Learners
  Paper • 2312.13286 • Published • 32
- The LLM Surgeon
  Paper • 2312.17244 • Published • 9

- ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent
  Paper • 2312.10003 • Published • 32
- Catwalk: A Unified Language Model Evaluation Framework for Many Datasets
  Paper • 2312.10253 • Published • 7
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
  Paper • 2401.04081 • Published • 68
- Mixtral of Experts
  Paper • 2401.04088 • Published • 154