---
license: apache-2.0
---

<br>
<br>

# MLM-Filter-13b Model Card

## Model details

**Model type:**
MLM-Filter-13b is an open-source multimodal large language model (MLLM) trained to assess the quality of image-text paired data. Given an image-text pair, it generates scores for four quality metrics: Image Text Matching, Object Detail Fulfillment, Caption Text Quality, and Semantic Understanding.

**Model date:**
MLM-Filter-13b was trained in December 2023.

**Paper or resources for more information:**
https://mlm-filter.github.io/

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/Victorwz/MLM_Filter/issues

## Intended use

**Primary intended uses:**
MLM-Filter can be used as a drop-in replacement for CLIPScore in the following tasks:

1. Score image-text data in large-scale pre-training datasets and then filter high-quality subsets based on the scores (for training MLLMs or VLMs, consider using the Image Text Matching score and the Object Detail Fulfillment score jointly; see the sketch after this list);

2. Evaluate the image-text alignment of image-to-text or text-to-image generation models;

3. Any application that needs to measure image-text alignment.
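
To make the first use case concrete, below is a minimal filtering sketch. It assumes the per-pair scores have already been generated with MLM-Filter and saved as JSON lines; the field names (`itm_score`, `odf_score`), the 0-100 score scale, and the thresholds are illustrative assumptions, not part of the released tooling.

```python
import json

# Illustrative thresholds on an assumed 0-100 score scale; tune on a held-out subset.
ITM_THRESHOLD = 85  # Image Text Matching
ODF_THRESHOLD = 85  # Object Detail Fulfillment

def filter_high_quality(scored_path: str, output_path: str) -> int:
    """Keep only image-text pairs whose ITM and ODF scores both clear the thresholds."""
    kept = 0
    with open(scored_path) as src, open(output_path, "w") as dst:
        for line in src:
            # Hypothetical record layout: {"image": ..., "caption": ..., "itm_score": ..., "odf_score": ...}
            sample = json.loads(line)
            if sample["itm_score"] >= ITM_THRESHOLD and sample["odf_score"] >= ODF_THRESHOLD:
                dst.write(line)
                kept += 1
    return kept

kept = filter_high_quality("scored_pairs.jsonl", "filtered_pairs.jsonl")
print(f"kept {kept} high-quality pairs")
```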

## Training dataset

- 46k instructions sampled from the LLaVA-1.5 665k instruction tuning data.
- 4k instructions on image-text data quality assessment tasks covering the four quality metrics.

## Usage Sample

Please follow the instructions at https://github.com/Victorwz/MLM_Filter.
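
For orientation only, the sketch below shows the general shape of a scoring call and how a numeric score could be parsed from the model's answer. The `run_llava_inference` helper is a stand-in for the inference script in the repository, and the prompt wording, model path, and 0-100 scale are assumptions rather than the exact released prompts.

```python
import re

def run_llava_inference(model_path: str, image_path: str, prompt: str) -> str:
    """Stand-in for the MLM_Filter repo's LLaVA-style inference entry point.

    Replace this stub with the actual scoring script from
    https://github.com/Victorwz/MLM_Filter before real use.
    """
    return "90"  # canned response so the parsing below can be exercised

# Illustrative prompt asking for one of the four quality metrics.
caption = "A brown dog catching a frisbee on a grassy field."
prompt = (
    "Rate from 0 to 100 how well the following text matches the image, "
    f"and respond with the score only.\nText: {caption}"
)

raw_output = run_llava_inference("path/to/MLM-Filter-13b", "example.jpg", prompt)

# Parse the first integer in the free-form answer as the quality score.
match = re.search(r"\d+", raw_output)
itm_score = int(match.group()) if match else None
print("Image Text Matching score:", itm_score)
```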