---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
dataset_info:
  features:
    - name: url.txt
      dtype: string
    - name: syn.json
      struct:
        - name: syn_text
          list:
            dtype: string
    - name: paug.json
      struct:
        - name: param_aug
          dtype: string
    - name: npz
      struct:
        - name: image_emb
          list:
            list: float32
        - name: text_emb
          list:
            list: float32
    - name: json
      struct:
        - name: uid
          dtype: string
        - name: sha256
          dtype: string
task_categories:
  - text-to-image
  - image-to-text
language:
  - en
---

# Dataset Card for DataCompDR-1B

This dataset contains synthetic captions, embeddings, and metadata for DataCompDR-1B. The metadata has been generated using pretrained image-text models on DataComp-1B. For details on how to use the metadata, please visit our GitHub repository.

## Dataset Details

### Dataset Description

DataCompDR is an image-text dataset and an enhancement to the DataComp dataset. We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy. In particular, we create DataCompDR-1B and DataCompDR-12M by reinforcing DataComp-1B (BestPool filtering) and a uniform 12.8M-sample subset of it, respectively. Dataset reinforcement is a one-time generation process whose cost is amortized over multiple architectures and extensive ablations. We generate 5 synthetic captions per image using the coca_ViT-L-14 model in OpenCLIP, and apply strong random image augmentations (10 per image for DataCompDR-1B and 30 for DataCompDR-12M). We compute embeddings of an ensemble of two strong teachers (ViT-L-14 with pretrained weights datacomp_xl_s13b_b90k and openai in OpenCLIP) on the augmented images as well as on the real and synthetic captions. Each stored embedding is a 1536-D concatenation of two 768-D teacher embeddings. One seen sample for DataCompDR is a triplet of one randomly augmented image, the ground-truth caption, and one randomly picked synthetic caption.

- Curated by: Original data by DataComp and metadata by Apple.
- License: We distribute our metadata under our license. The original image URL-text samples and metadata were released by DataComp under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
- Repository: [ml-mobileclip GitHub](https://github.com/apple/ml-mobileclip)
- Paper: MobileCLIP paper
- Demo: Coming Soon
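
To make the "seen sample" described above concrete, here is a minimal sketch (not the official ml-mobileclip training code) of how one reinforced sample could be assembled from the stored fields. It assumes `image_emb` has shape `[num_augmentations, 1536]`, `text_emb` has shape `[1 + num_synthetic_captions, 1536]` with the ground-truth caption embedding first, and `pick_seen_sample` is a hypothetical helper name.

```python
# Minimal sketch of assembling one "seen sample" from stored reinforcements.
# Assumptions (not verified against the official code): image_emb is
# [num_augmentations, 1536]; text_emb is [1 + num_synthetic, 1536] with the
# ground-truth caption embedding in row 0; pick_seen_sample is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def pick_seen_sample(image_emb: np.ndarray, text_emb: np.ndarray, syn_text: list) -> dict:
    aug_idx = int(rng.integers(len(image_emb)))  # one randomly augmented image
    syn_idx = int(rng.integers(len(syn_text)))   # one randomly picked synthetic caption
    return {
        "image_teacher_emb": image_emb[aug_idx],        # 1536-D = concat of two 768-D teachers
        "gt_text_teacher_emb": text_emb[0],             # ground-truth caption embedding
        "syn_text_teacher_emb": text_emb[1 + syn_idx],  # matching synthetic caption embedding
        "syn_caption": syn_text[syn_idx],
        "aug_index": aug_idx,  # index into param_aug to replay the stored augmentation
    }
```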

## Uses

Training with DataCompDR shows significant learning-efficiency improvements over standard CLIP training. For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16-based CLIP from scratch on DataCompDR-12M. Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2 of the paper) while using only a fraction of the training compute budget of previous works. Using DataCompDR, we demonstrate 10x-1000x improvements in learning efficiency compared to DataComp.

## Dataset Structure

- `<uid>.url.txt`: Image URL (string)
- `<uid>.syn.json`:
  - `syn_text`: List of synthetic captions (list[string])
- `<uid>.paug.json`:
  - `param_aug`: List of augmentation parameters (list[list[Union[int,float]]])
- `<uid>.npz`:
  - `image_emb`: List of image embeddings for multiple image augmentations (list[list[float]])
  - `text_emb`: List of text embeddings for ground-truth/synthetic captions (list[list[float]])
- `<uid>.json`:
  - `uid`: UID of image-text sample in DataComp (string)
  - `sha256`: SHA256 hash of the image (string)
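
The per-sample files listed above are packed into WebDataset tar shards. Below is a minimal reading sketch using the webdataset library; the shard filename is a placeholder to replace with actual downloaded shard paths, and the keys follow the structure listed above.

```python
# Minimal sketch for iterating over one DataCompDR shard with webdataset.
# "00000000.tar" is a placeholder shard path, not an actual filename.
import io
import json

import numpy as np
import webdataset as wds

dataset = wds.WebDataset("00000000.tar")  # raw samples: dicts of bytes keyed by extension

for sample in dataset:
    meta = json.loads(sample["json"])                         # {"uid": ..., "sha256": ...}
    url = sample["url.txt"].decode("utf-8")                   # original image URL
    syn_text = json.loads(sample["syn.json"])["syn_text"]     # synthetic captions
    param_aug = json.loads(sample["paug.json"])["param_aug"]  # augmentation parameters
    npz = np.load(io.BytesIO(sample["npz"]))
    image_emb = np.asarray(npz["image_emb"])  # one row per stored image augmentation
    text_emb = np.asarray(npz["text_emb"])    # rows for ground-truth and synthetic captions
    print(meta["uid"], image_emb.shape, text_emb.shape, len(syn_text))
    break
```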

## Citation

MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training. (CVPR 2024) Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.

@InProceedings{mobileclip2024,
  author = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2024},
}