Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models • arXiv:2404.07973 • Published Apr 2024
Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs • arXiv:2404.05719 • Published Apr 2024
GLIPv2: Unifying Localization and Vision-Language Understanding • arXiv:2206.05836 • Published Jun 12, 2022
How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts • arXiv:2402.13220 • Published Feb 20, 2024
From Scarcity to Efficiency: Improving CLIP Training via Visual-enriched Captions • arXiv:2310.07699 • Published Oct 11, 2023
Ferret: Refer and Ground Anything Anywhere at Any Granularity • arXiv:2310.07704 • Published Oct 11, 2023
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training • arXiv:2403.09611 • Published Mar 14, 2024
Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action • arXiv:2312.17172 • Published Dec 28, 2023
Aligning Large Multimodal Models with Factually Augmented RLHF • arXiv:2309.14525 • Published Sep 25, 2023