ViP-LLaVA Collection ViP-LLaVA is a novel approach that enables large multimodal models to understand arbitrary visual prompts. • 3 items • Updated Mar 25
LLaVa-NeXT Collection LLaVa-NeXT (also known as LLaVa-1.6) improves upon the 1.5 series by incorporating higher image resolutions and more reasoning/OCR datasets. • 8 items • Updated Jul 19
LLaVa-1.5 Collection LLaVa-1.5 is a series of vision-language models (VLMs) trained on a variety of visual instruction datasets. • 3 items • Updated Mar 18
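The models in these collections can be used directly through the Hugging Face transformers library. Below is a minimal sketch of running a LLaVa-1.5 checkpoint; the model id "llava-hf/llava-1.5-7b-hf", the example image URL, and the "USER: ... ASSISTANT:" prompt template are assumptions about which checkpoint the collection contains and how it was instruction-tuned, not details stated in the listing above.

```python
# Minimal sketch: image question answering with a LLaVa-1.5 checkpoint.
# Assumes the collection contains the "llava-hf/llava-1.5-7b-hf" model.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Fetch an example image (URL is an arbitrary placeholder).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The <image> token marks where the visual features are inserted;
# the USER/ASSISTANT template is the assumed LLaVa-1.5 chat format.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```

The other collections follow the same pattern with their own model classes (e.g. LlavaNextForConditionalGeneration for LLaVa-NeXT), differing mainly in the checkpoint id and chat template.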