- LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
  Paper • 2306.17107 • Published • 11
- On the Hidden Mystery of OCR in Large Multimodal Models
  Paper • 2305.07895 • Published
- Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities
  Paper • 2308.12966 • Published • 7
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 49
Collections including paper arxiv:2408.16357
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
  Paper • 2312.16862 • Published • 30
- Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action
  Paper • 2312.17172 • Published • 27
- Towards Truly Zero-shot Compositional Visual Reasoning with LLMs as Programmers
  Paper • 2401.01974 • Published • 5
- From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
  Paper • 2401.01885 • Published • 27
- To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning
  Paper • 2311.07574 • Published • 14
- CSGO: Content-Style Composition in Text-to-Image Generation
  Paper • 2408.16766 • Published • 17
- Law of Vision Representation in MLLMs
  Paper • 2408.16357 • Published • 92
- CogVLM2: Visual Language Models for Image and Video Understanding
  Paper • 2408.16500 • Published • 56