Collections
Collections including paper arxiv:2402.03766
---

- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training (Paper • 2403.09611 • Published • 119)
- Evolutionary Optimization of Model Merging Recipes (Paper • 2403.13187 • Published • 45)
- MobileVLM V2: Faster and Stronger Baseline for Vision Language Model (Paper • 2402.03766 • Published • 9)
- LLM Agent Operating System (Paper • 2403.16971 • Published • 62)
---

- VisionLLaMA: A Unified LLaMA Interface for Vision Tasks (Paper • 2403.00522 • Published • 40)
- MobileVLM V2: Faster and Stronger Baseline for Vision Language Model (Paper • 2402.03766 • Published • 9)
- MobileVLM: A Fast, Reproducible and Strong Vision Language Assistant for Mobile Devices (Paper • 2312.16886 • Published • 18)
- Lenna: Language Enhanced Reasoning Detection Assistant (Paper • 2312.02433 • Published • 2)
---

- Textbooks Are All You Need (Paper • 2306.11644 • Published • 138)
- LLaVA-φ: Efficient Multi-Modal Assistant with Small Language Model (Paper • 2401.02330 • Published • 11)
- Textbooks Are All You Need II: phi-1.5 technical report (Paper • 2309.05463 • Published • 84)
- Visual Instruction Tuning (Paper • 2304.08485 • Published • 8)
---

- Self-Rewarding Language Models (Paper • 2401.10020 • Published • 135)
- ReFT: Reasoning with Reinforced Fine-Tuning (Paper • 2401.08967 • Published • 26)
- Tuning Language Models by Proxy (Paper • 2401.08565 • Published • 19)
- TrustLLM: Trustworthiness in Large Language Models (Paper • 2401.05561 • Published • 62)
---

- Extending Context Window of Large Language Models via Semantic Compression (Paper • 2312.09571 • Published • 12)
- LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents (Paper • 2311.05437 • Published • 40)
- LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models (Paper • 2312.02949 • Published • 8)
- TinyLLaVA: A Framework of Small-scale Large Multimodal Models (Paper • 2402.14289 • Published • 16)