Victor Sanh (VictorSanh)

Can't wait to see multimodal Llama 3!

We released a resource that might come in handy: The Cauldron 🍯

The Cauldron is a massive, manually curated collection of 50 vision-language datasets for instruction fine-tuning: 3.6M images and 30.3M query/answer pairs.

It covers a large variety of downstream uses: visual question answering on natural images, OCR, document/chart/figure/table understanding, textbook/academic questions, reasoning, captioning, spotting differences between two images, and screenshot-to-code.

HuggingFaceM4/the_cauldron
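
A minimal sketch of loading one subset with the datasets library (the config name "vqav2" and the field names in the comments are assumptions; each of the 50 datasets is exposed as its own config):

```python
from datasets import load_dataset

# Load one subset of The Cauldron; "vqav2" is an assumed config name,
# each of the 50 curated datasets has its own config.
ds = load_dataset("HuggingFaceM4/the_cauldron", "vqav2", split="train")

# Each example pairs images with query/answer turns; the field names
# ("images", "texts") are assumptions, check the dataset card for the schema.
example = ds[0]
print(example["images"])
print(example["texts"])
```
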
New open multimodal model in town: Idefics2!

💪 Strong 8B-parameter model: often on par with open 30B counterparts.
🔓 Open license: Apache 2.0.
🚀 Strong improvement over Idefics1: +12 points on VQAv2, +30 points on TextVQA, with 10x fewer parameters.
📚 Better data: boosting OCR capabilities with 6TB of documents to transcribe, and improving QA capabilities on charts/figures/diagrams.
🕵️‍♀️ Transparent training data: inspect and build upon all the data (tens of TB) we trained on.
🔲 More natural image processing: incorporating strategies to treat images in their native resolution and native aspect ratio.
📸 High-resolution images: image resolutions up to 980 x 980, with strategies that allow trading compute for performance.
😎 2 checkpoints: releasing both the base checkpoint and the instruction fine-tuned checkpoint. Chat version to come.

Resources: HuggingFaceM4/idefics2-661d1971b7c50831dd3ce0fe
Blogpost: https://huggingface.co/blog/idefics2
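
A minimal sketch of querying the instruction fine-tuned checkpoint with transformers (the checkpoint name "HuggingFaceM4/idefics2-8b", the local image path, and the question text are assumptions; see the collection above for the released checkpoints):

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

# "HuggingFaceM4/idefics2-8b" is an assumed checkpoint name; check the
# collection linked above for the released base and instruct checkpoints.
checkpoint = "HuggingFaceM4/idefics2-8b"
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

image = Image.open("chart.png")  # any local image, e.g. a chart to read

# Chat-style prompt interleaving an image placeholder with a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What does this chart show?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```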