Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance (arXiv:2402.08680, published Feb 13)
How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts (arXiv:2402.13220, published Feb 20)
FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback (arXiv:2404.05046, published Apr 7)