---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
dataset_info:
  features:
  - name: url.txt
    dtype: string
  - name: syn.json
    struct:
    - name: syn_text
      list:
        dtype: string
  - name: paug.json
    struct:
    - name: param_aug
      dtype: string
  - name: npz
    struct:
    - name: image_emb
      list:
        list: float32
    - name: text_emb
      list:
        list: float32
  - name: json
    struct:
    - name: uid
      dtype: string
    - name: sha256
      dtype: string
task_categories:
- text-to-image
- image-to-text
language:
- en
---

# Dataset Card for DataCompDR-1B

<!-- Provide a quick summary of the dataset. -->

This dataset contains synthetic captions, embeddings, and metadata for DataCompDR-1B.
The metadata has been generated using pretrained image-text models on [DataComp-1B](https://huggingface.co/datasets/mlfoundations/datacomp_1b).
For details on how to use the metadata, please visit our [GitHub repository](https://github.com/apple/ml-mobileclip).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

DataCompDR is an image-text dataset and an enhancement to the DataComp dataset.
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy.
In particular, we create DataCompDR-1B and DataCompDR-12M by reinforcing DataComp-1B (BestPool filtering) and a uniform 12.8M-sample subset of it, respectively.
We have a one-time generation process, the cost of which is amortized over multiple architectures and extensive ablations.
We generate 5 synthetic captions per image using the `coca_ViT-L-14` model in OpenCLIP, and strong random image augmentations (10 for DataCompDR-1B and 30 for DataCompDR-12M).
We compute embeddings of an ensemble of two strong teachers (`ViT-L-14` with pretrained weights `datacomp_xl_s13b_b90k` and `openai` in OpenCLIP) on augmented images as well as real and synthetic captions.
Embeddings are 1536-D concatenations of 2x768-D vectors.
One seen sample for DataCompDR is a triplet of one randomly augmented image, one ground-truth caption, and one randomly picked synthetic caption.

- **Curated by:** Original data by [DataComp](https://www.datacomp.ai/) and metadata by Apple.
- **License:** We distribute our metadata under our [license](https://github.com/apple/ml-mobileclip/blob/main/LICENSE). The original image url-text samples and metadata were released by [DataComp](https://www.datacomp.ai/) under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
- **Repository:** [ml-mobileclip GitHub](https://github.com/apple/ml-mobileclip)
- **Paper:** [MobileCLIP paper](https://arxiv.org/abs/2311.17049)
- **Demo:** Coming Soon

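The teacher-ensemble embeddings described above can be approximated with OpenCLIP. The snippet below is a minimal sketch, not the exact generation pipeline: it assumes the `open_clip` package, the two `ViT-L-14` teachers named above, a local image file, and a single caption, and it concatenates the two 768-D outputs into a 1536-D vector; normalization details and augmentation handling may differ from the actual recipe in the repository.

```python
# Minimal sketch (not the exact generation pipeline): build 1536-D
# teacher-ensemble embeddings by concatenating two ViT-L-14 teachers.
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

teachers = []
for pretrained in ("datacomp_xl_s13b_b90k", "openai"):
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-L-14", pretrained=pretrained
    )
    model.eval()
    teachers.append((model, preprocess))

tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = Image.open("example.jpg")          # hypothetical local image
caption = "a photo of a dog on the beach"  # ground-truth or synthetic caption

with torch.no_grad():
    image_embs, text_embs = [], []
    for model, preprocess in teachers:
        img = preprocess(image).unsqueeze(0)
        # Per-teacher normalization here is illustrative; see the repo recipe.
        image_embs.append(F.normalize(model.encode_image(img), dim=-1))
        text_embs.append(F.normalize(model.encode_text(tokenizer([caption])), dim=-1))

# 1536-D = concatenation of the two 768-D per-teacher embeddings.
image_emb = torch.cat(image_embs, dim=-1)  # shape: (1, 1536)
text_emb = torch.cat(text_embs, dim=-1)    # shape: (1, 1536)
```
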
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

Training with DataCompDR shows significant learning efficiency improvement compared to standard CLIP training.
For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16 based CLIP from scratch on DataCompDR-12M.
Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2 of the paper) while still using a fraction of the training compute budget compared to previous works.
Using DataCompDR, we demonstrate 10x-1000x improvements in learning efficiency in comparison to DataComp.

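As an illustration of how the reinforced metadata enters training, the sketch below shows one possible loss in the spirit of dataset reinforcement: a standard CLIP contrastive term plus a distillation term that matches the student's image-text similarity matrix to similarities computed from the stored teacher-ensemble embeddings. The mixing weight `lam` and the temperatures are placeholders, not the values used in the paper; refer to the GitHub repository for the exact objective.

```python
# Illustrative sketch of a reinforced CLIP loss: contrastive + distillation
# from stored teacher embeddings. Hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def reinforced_clip_loss(student_img, student_txt, teacher_img, teacher_txt,
                         lam=0.7, tau_student=0.07, tau_teacher=0.07):
    # Normalize student and teacher embeddings.
    student_img = F.normalize(student_img, dim=-1)
    student_txt = F.normalize(student_txt, dim=-1)
    teacher_img = F.normalize(teacher_img, dim=-1)
    teacher_txt = F.normalize(teacher_txt, dim=-1)

    # Standard CLIP contrastive loss over the batch (both directions).
    logits = student_img @ student_txt.t() / tau_student
    targets = torch.arange(logits.size(0), device=logits.device)
    clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                       F.cross_entropy(logits.t(), targets))

    # Distillation: match student similarity rows to teacher similarity rows.
    teacher_logits = teacher_img @ teacher_txt.t() / tau_teacher
    distill_loss = 0.5 * (
        F.kl_div(F.log_softmax(logits, dim=-1),
                 F.softmax(teacher_logits, dim=-1), reduction="batchmean") +
        F.kl_div(F.log_softmax(logits.t(), dim=-1),
                 F.softmax(teacher_logits.t(), dim=-1), reduction="batchmean"))

    return (1 - lam) * clip_loss + lam * distill_loss
```
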
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

```
- <uid>.url.txt: Image URL (string)
- <uid>.syn.json:
  - syn_text: List of synthetic captions (list[string])
- <uid>.paug.json:
  - param_aug: List of augmentation parameters (list[list[Union[int,float]]])
- <uid>.npz:
  - image_emb: List of image embeddings for multiple image augmentations (list[list[float]])
  - text_emb: List of text embeddings for ground-truth/synthetic captions (list[list[float]])
- <uid>.json:
  - uid: UID of image-text sample in DataComp (string)
  - sha256: SHA256 hash of the image (string)
```

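The snippet below is a minimal sketch of reading one sample's metadata files and picking a random augmentation and synthetic caption, matching the seen-sample triplet described in the Dataset Description. It assumes the per-sample files have been extracted locally; the directory layout, array shapes, and the ordering of rows inside `image_emb`/`text_emb` are assumptions for illustration, and the exact loaders are documented in the GitHub repository.

```python
# Minimal sketch: read one sample's metadata files and pick a random
# augmentation / synthetic caption. Paths, shapes, and row ordering
# are assumptions for illustration only.
import json
import random
import numpy as np

uid = "0000000000000000"  # hypothetical sample UID

with open(f"{uid}.url.txt") as f:
    url = f.read().strip()
with open(f"{uid}.syn.json") as f:
    syn_text = json.load(f)["syn_text"]        # synthetic captions
with open(f"{uid}.paug.json") as f:
    param_aug = json.load(f)["param_aug"]      # augmentation parameters

# allow_pickle in case the embeddings are stored as object arrays.
npz = np.load(f"{uid}.npz", allow_pickle=True)
image_emb = np.asarray(npz["image_emb"], dtype=np.float32)  # (num_augs, 1536), assumed
text_emb = np.asarray(npz["text_emb"], dtype=np.float32)    # (num_captions, 1536), assumed

# Pick one random augmentation and one random synthetic caption.
aug_idx = random.randrange(image_emb.shape[0])
syn_idx = random.randrange(len(syn_text))

# Each 1536-D embedding is the concatenation of two 768-D teacher embeddings.
img_teacher_a, img_teacher_b = np.split(image_emb[aug_idx], 2)
```
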
## Citation

**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf). (CVPR 2024)**
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.*

```bibtex
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}
```