arxiv:2309.16671

Demystifying CLIP Data

Published on Sep 28, 2023
· Featured in Daily Papers on Sep 29, 2023
Abstract

Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and, in our pursuit of making it open to the community, introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining the same training budget, attains 72.4%. Our observations hold across various model sizes, exemplified by ViT-H achieving 80.5%, without any bells and whistles. Curation code and the training data distribution over metadata are made available at https://github.com/facebookresearch/MetaCLIP.
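
As a concrete illustration of the curation recipe described above, here is a minimal Python sketch of the first step: sub-string matching of captions against metadata entries. This is not the official implementation (that lives in the linked repository); the four-entry metadata list and the helper name `match_entries` are toy stand-ins for the roughly 500k entries derived from CLIP's concepts.

```python
# Toy sketch of MetaCLIP-style curation, step 1: sub-string matching.
# A caption is kept only if it contains at least one metadata entry,
# which filters noisy text without any manual cleaning rules.

metadata = ["photo", "dog", "golden retriever", "paris"]  # toy stand-in for ~500k entries

def match_entries(caption: str, entries) -> list[str]:
    """Return every metadata entry that occurs as a substring of the caption."""
    lower = caption.lower()
    return [e for e in entries if e in lower]

pool = [
    ("img1.jpg", "A golden retriever playing fetch"),
    ("img2.jpg", "xz81-17 sale sale sale"),            # noisy text: no match, dropped
    ("img3.jpg", "Photo of the Eiffel Tower in Paris"),
]

curated = [(url, txt, match_entries(txt, metadata)) for url, txt in pool]
curated = [(url, txt, m) for url, txt, m in curated if m]
print(curated)  # img2.jpg is filtered out
```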

Community

Here is an ML-generated summary

Objective
The paper aims to reveal CLIP's data curation approach and present a transparent algorithm called MetaCLIP to curate high-quality image-text data from raw web data.

Insights

  • Metadata plays a central role in mitigating noise and preserving signal.
  • Balancing the distribution over metadata entries is key to maximizing diversity and keeping the data task-agnostic (see the sketch after this list).
  • Sub-string matching acts as an implicit filter that removes noise without manual rules.
  • The curation algorithm adapts easily to new data sources without external filters.
  • MetaCLIP outperforms CLIP's data, demonstrating the effectiveness of the curation approach.
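
To make the balancing point concrete, here is a hedged sketch of the per-entry sub-sampling the paper describes: entries matched more than t times are down-sampled, while tail entries always pass. The threshold, data, and function names here are illustrative (the paper reports t = 20,000 for the 400M set); consult the repository for the actual algorithm.

```python
# Toy sketch of MetaCLIP-style curation, step 2: balancing over metadata.
import random
from collections import Counter

t = 2  # toy per-entry cap (the paper reports t = 20,000 for the 400M set)

# (url, caption, matched metadata entries) triples, e.g. from sub-string matching
curated = [
    ("img1.jpg", "A golden retriever playing fetch", ["dog", "golden retriever"]),
    ("img3.jpg", "Photo of the Eiffel Tower in Paris", ["photo", "paris"]),
    ("img4.jpg", "Photo of a dog", ["photo", "dog"]),
    ("img5.jpg", "Photo of another dog", ["photo", "dog"]),
    ("img6.jpg", "Photo of a third dog", ["photo", "dog"]),
]

entry_count = Counter(e for _, _, ms in curated for e in ms)
entry_prob = {e: min(1.0, t / entry_count[e]) for e in entry_count}

def keep(matched) -> bool:
    # Head entries (count > t) are down-sampled; tail entries always pass.
    return any(random.random() < entry_prob[e] for e in matched)

balanced = [(url, txt) for url, txt, ms in curated if keep(ms)]
print(balanced)
```

Flattening the head of the distribution this way is what the summary means by "balancing": frequent concepts stop dominating the training signal, while rare concepts are kept in full.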

Results
MetaCLIP applied to CommonCrawl with 400M image-text pairs outperforms CLIP's WIT400M dataset on multiple benchmarks, achieving 70.8% top-1 accuracy in zero-shot ImageNet classification with ViT-B/16, compared to CLIP's 68.3%.
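
For context, the sketch below shows the zero-shot classification protocol behind these numbers, using the Hugging Face transformers CLIP classes: embed one prompt per class, embed the image, and pick the class with the highest similarity. The checkpoint id facebook/metaclip-b16-400m and the three class names are assumptions for illustration; the reported accuracy uses all 1,000 ImageNet classes and prompt ensembling.

```python
# Hedged sketch of CLIP-style zero-shot classification (checkpoint id assumed).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("facebook/metaclip-b16-400m")
processor = CLIPProcessor.from_pretrained("facebook/metaclip-b16-400m")

classes = ["golden retriever", "tabby cat", "airliner"]  # stand-in for 1,000 ImageNet classes
prompts = [f"a photo of a {c}" for c in classes]
image = Image.open("dog.jpg")  # placeholder image path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
probs = out.logits_per_image.softmax(dim=-1)  # image-text similarity -> class probabilities
print(classes[probs.argmax().item()])
```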

Models citing this paper: 7
Datasets citing this paper: 0
Spaces citing this paper: 8
Collections including this paper: 7