|
--- |
|
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
dataset_info:
  features:
  - name: url.txt
    dtype: string
  - name: syn.json
    struct:
    - name: syn_text
      list:
        dtype: string
  - name: paug.json
    struct:
    - name: param_aug
      dtype: string
  - name: npz
    struct:
    - name: image_emb
      list:
        list: float32
    - name: text_emb
      list:
        list: float32
  - name: json
    struct:
    - name: uid
      dtype: string
    - name: sha256
      dtype: string
task_categories:
- text-to-image
- image-to-text
language:
- en
|
--- |
|
|
|
# Dataset Card for DataCompDR-1B |
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
|
This dataset contains synthetic captions, embeddings, and metadata for DataCompDR-1B. |
|
The metadata has been generated using pretrained image-text models on [DataComp-1B](https://huggingface.co/datasets/mlfoundations/datacomp_1b). |
|
For details on how to use the metadata, please visit our [GitHub repository](https://github.com/apple/ml-mobileclip).
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
|
DataCompDR is an image-text dataset and an enhancement to the DataComp dataset. |
|
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy. |
|
In particular, we create DataCompDR-1B and DataCompDR-12M by reinforcing DataComp-1B (BestPool filtering) and a uniform 12.8M-sample subset of it, respectively.
|
The reinforcement is a one-time generation process whose cost is amortized over multiple architectures and extensive ablations.
|
We generate 5 synthetic captions per image using the `coca_ViT-L-14` model in OpenCLIP, and apply strong random image augmentations (10 augmentations per image for DataCompDR-1B and 30 for DataCompDR-12M).
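
As a rough illustration of the caption-generation step (a sketch, not the exact DataCompDR pipeline), synthetic captions can be sampled from `coca_ViT-L-14` via `open_clip`; the pretrained checkpoint tag and sampling settings below are assumptions:

```python
# Hedged sketch: sampling synthetic captions with OpenCLIP's CoCa model.
# The pretrained tag and decoding settings are assumptions, not necessarily
# those used to build DataCompDR.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    model_name="coca_ViT-L-14",
    pretrained="mscoco_finetuned_laion2B-s13B-b90k",  # assumed checkpoint tag
)
model.eval()

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

captions = []
with torch.no_grad():
    for _ in range(5):  # DataCompDR stores 5 synthetic captions per image
        # Nucleus sampling so repeated calls give diverse captions.
        tokens = model.generate(image, generation_type="top_p", top_p=0.9)
        text = open_clip.decode(tokens[0])
        captions.append(
            text.split("<end_of_text>")[0].replace("<start_of_text>", "").strip()
        )

print(captions)
```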
|
We compute embeddings from an ensemble of two strong teachers (`ViT-L-14` with pretrained weights `datacomp_xl_s13b_b90k` and `openai` in OpenCLIP) on the augmented images as well as on the real and synthetic captions.
|
Embeddings are 1536-D concatenations of 2x768-D vectors. |
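
A minimal sketch of this ensembling, assuming both teachers share the standard `ViT-L-14` preprocessing and that each teacher's embedding is L2-normalized before concatenation (details worth checking against the repository):

```python
# Hedged sketch: concatenating embeddings from two OpenCLIP ViT-L-14 teachers
# into the 1536-D (2 x 768-D) format described above.
import torch
import open_clip
from PIL import Image

TEACHER_TAGS = ["datacomp_xl_s13b_b90k", "openai"]

teachers, preprocess = [], None
for tag in TEACHER_TAGS:
    model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained=tag)
    model.eval()
    teachers.append(model)

tokenizer = open_clip.get_tokenizer("ViT-L-14")

def ensemble_embed(image_path: str, caption: str):
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    tokens = tokenizer([caption])
    img_parts, txt_parts = [], []
    with torch.no_grad():
        for model in teachers:
            img = model.encode_image(image)
            txt = model.encode_text(tokens)
            # Per-teacher L2 normalization before concatenation is an assumption.
            img_parts.append(img / img.norm(dim=-1, keepdim=True))
            txt_parts.append(txt / txt.norm(dim=-1, keepdim=True))
    # Each returned tensor is 1536-D: the two 768-D teacher embeddings concatenated.
    return torch.cat(img_parts, dim=-1), torch.cat(txt_parts, dim=-1)
```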
|
A single seen sample in DataCompDR is a triplet of one randomly augmented image, the ground-truth caption, and one randomly picked synthetic caption.
|
|
|
- **Curated by:** Original data by [DataComp](https://www.datacomp.ai/) and metadata by Apple. |
|
- **License:** We distribute our metadata under our [license](https://github.com/apple/ml-mobileclip/blob/main/LICENSE). The original image url-text samples and metadata were released by [DataComp](https://www.datacomp.ai/) under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
|
- **Repository:** [ml-mobileclip GitHub](https://github.com/apple/ml-mobileclip) |
|
- **Paper:** [MobileCLIP paper](https://arxiv.org/abs/2311.17049) |
|
- **Demo:** Coming Soon |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
|
|
Training with DataCompDR shows a significant improvement in learning efficiency compared to standard CLIP training.
|
For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16-based CLIP from scratch on DataCompDR-12M.
|
Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2 of the paper) while still using a fraction of the training compute budget compared to previous works.
|
Using DataCompDR, we demonstrate a 10x-1000x improvement in learning efficiency compared to DataComp.
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
``` |
|
- <uid>.url.txt: Image URL (string)
- <uid>.syn.json:
  - syn_text: List of synthetic captions (list[string])
- <uid>.paug.json:
  - param_aug: List of augmentation parameters (list[list[Union[int,float]]])
- <uid>.npz:
  - image_emb: List of image embeddings for multiple image augmentations (list[list[float]])
  - text_emb: List of text embeddings for ground-truth/synthetic captions (list[list[float]])
- <uid>.json:
  - uid: UID of image-text sample in DataComp (string)
  - sha256: SHA256 hash of the image (string)
|
``` |
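
As a hedged usage sketch (assuming the per-sample files above have been extracted locally under a hypothetical `data/` prefix, and that the ground-truth caption embedding is stored first in `text_emb`), one reinforced training triplet could be assembled like this:

```python
# Hedged sketch: reading one DataCompDR sample's files and picking a random
# augmentation and synthetic caption, as described in "Dataset Description".
import json
import random
import numpy as np

def load_reinforced_sample(prefix: str):
    """Load the per-sample files for a path prefix like 'data/<uid>' (hypothetical layout)."""
    with open(f"{prefix}.url.txt") as f:
        url = f.read().strip()
    with open(f"{prefix}.syn.json") as f:
        syn_texts = json.load(f)["syn_text"]      # synthetic captions
    with open(f"{prefix}.paug.json") as f:
        aug_params = json.load(f)["param_aug"]    # augmentation parameters
    with open(f"{prefix}.json") as f:
        meta = json.load(f)                       # {"uid": ..., "sha256": ...}
    npz = np.load(f"{prefix}.npz", allow_pickle=True)
    image_emb = np.asarray(npz["image_emb"])      # (num_augmentations, 1536)
    text_emb = np.asarray(npz["text_emb"])        # (1 + num_synthetic, 1536), ordering assumed
    return url, syn_texts, aug_params, meta, image_emb, text_emb

url, syn_texts, aug_params, meta, image_emb, text_emb = load_reinforced_sample("data/<uid>")
aug_idx = random.randrange(image_emb.shape[0])    # one randomly augmented image
syn_idx = random.randrange(len(syn_texts))        # one randomly picked synthetic caption
triplet = (image_emb[aug_idx], text_emb[0], text_emb[1 + syn_idx])
```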
|
|
|
|
|
## Citation |
|
|
|
**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf). (CVPR 2024)** |
|
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.* |
|
|
|
```bibtex |
|
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}
|
``` |