vladbogo posted an update Mar 9
A recent paper titled "Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters" proposes using fine-tuned Multimodal Language Models (MLMs) as high-quality filters for image-text data.

Key points:
* Defines multiple metrics that assess image-text quality from different perspectives, such as object detail coverage, caption text quality, and semantic understanding.
* Leverages GPT-4 and GPT-4V to construct high-quality instruction data for fine-tuning open-source MLMs as effective data filters.
* Fine-tuned MLM filters produce more precise quality scores, yielding better-filtered data and improved downstream performance for models pre-trained on it.

Congrats to the authors for their work!

Paper: Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters (2403.02677)
Code: https://github.com/Victorwz/MLM_Filter
Dataset: weizhiwang/mlm_filter_instructions
Model: weizhiwang/mlm-filter-llava-13b-gpt4v
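As a rough illustration (not taken from the paper or its repo), here is how one might score a single image-text pair with the released checkpoint, assuming it loads with the standard transformers LLaVA classes; the prompt wording and the 0-100 scale are placeholders, and the MLM_Filter repo's own inference scripts are the authoritative way to run the filter.

```python
# Minimal sketch: score an image-text pair with an MLM-based filter.
# Assumes the checkpoint is compatible with transformers' LLaVA integration;
# the exact instruction format comes from the paper's instruction data.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "weizhiwang/mlm-filter-llava-13b-gpt4v"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example image-text pair (COCO image used in many HF docs examples).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
caption = "Two cats lying on a pink couch."

# Hypothetical scoring prompt; replace with the paper's instruction template.
prompt = (
    "USER: <image>\n"
    f"Text: {caption}\n"
    "Rate how well the text matches the image on a scale of 0 to 100. "
    "Reply with the score only.\nASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=10)
score = processor.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(score)  # keep the pair only if the score clears a chosen threshold
```

In a filtering pipeline, this score would be computed for every candidate pair and pairs below a threshold dropped before pre-training.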