💬🔥Releasing idefics2-8b-chatty, the chat-optimized version of Idefics2!
It is a highly efficient (8B parameters), state-of-the-art VLM that has been red-teamed, and it comes with a few surprises:
- 📖 A paper dissecting many of the experimental insights we learned while building Idefics2
- 🏎️ TGI integration for blazing-fast inference (you can already run it locally with < 24GB of GPU memory)
- 🏆 Ranked 2nd in its category (< 10B, open weights) on the awesome Open VLM Leaderboard, and now featured in the incredible Vision Arena