---
license: cc-by-4.0
task_categories:
- text-to-image
- image-to-text
language:
- en
dataset_info:
  features:
  - name: url.txt
    dtype: string
  - name: txt
    dtype: string
  - name: json
    struct:
    - name: uid
      dtype: string
    - name: sha256
      dtype: string
    - name: original_height
      dtype: int
    - name: original_width
      dtype: int
---

# Dataset Card for DataComp-12M

<!-- Provide a quick summary of the dataset. -->

This dataset contains a 12M subset of [DataComp-1B-BestPool](https://huggingface.co/datasets/mlfoundations/datacomp_1b).
We distribute the image url-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images are subject to their own copyrights.
Image-text models trained on DataComp-12M are significantly better than those trained on CC-12M/YFCC-15M, as well as on DataComp-Small/Medium.

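The samples can be inspected with the `datasets` library. Below is a minimal sketch, assuming the Hub can stream this repository's shards directly; the field names follow the schema above:

```python
# Minimal sketch: stream url-text samples without downloading the full dataset.
# Assumes `datasets` can load this repository directly (not verified here).
from datasets import load_dataset

ds = load_dataset("mlfoundations/DataComp-12M", split="train", streaming=True)

sample = next(iter(ds))
print(sample["url.txt"])  # image URL (the images themselves are not distributed)
print(sample["txt"])      # caption text
print(sample["json"])     # metadata: uid, sha256, original_height, original_width
```
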
DataComp-12M was introduced in the [MobileCLIP paper](https://arxiv.org/abs/2311.17049) along with the reinforced dataset [DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M).
The UIDs per shard match between [mlfoundations/DataComp-12M](https://huggingface.co/datasets/mlfoundations/DataComp-12M) and [apple/DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M), so samples can be paired across the two datasets (see the sketch below).
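
Since the UIDs match per shard, one way to pair samples is to stream both datasets in order and check that the uid fields agree. This is a hedged sketch; the DataCompDR-12M field layout and matching shard order are assumptions, not verified here:

```python
# Hedged sketch: align DataComp-12M with DataCompDR-12M by uid.
# Assumes both repositories stream in matching shard order.
from datasets import load_dataset

dc = load_dataset("mlfoundations/DataComp-12M", split="train", streaming=True)
dcdr = load_dataset("apple/DataCompDR-12M", split="train", streaming=True)

for a, b in zip(dc, dcdr):
    # Guard against ordering mismatches before trusting the pairing.
    assert a["json"]["uid"] == b["json"]["uid"], "shard order mismatch"
    # ...combine a's url/caption with b's reinforced data here...
    break
```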

## Terms and Conditions

We have terms of service similar to those adopted by Hugging Face (https://huggingface.co/terms-of-service), which cover their dataset library. Specifically, any content you download, access, or use from our index is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to liabilities related to image downloading and storage.


## Citation

**[DataComp: In search of the next generation of multimodal datasets](https://arxiv.org/abs/2304.14108). (NeurIPS 2024)**
*Gadre, Samir Yitzhak, et al.*

```bibtex
@article{gadre2024datacomp,
  title={Datacomp: In search of the next generation of multimodal datasets},
  author={Gadre, Samir Yitzhak and Ilharco, Gabriel and Fang, Alex and Hayase, Jonathan and Smyrnis, Georgios and Nguyen, Thao and Marten, Ryan and Wortsman, Mitchell and Ghosh, Dhruba and Zhang, Jieyu and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
```

**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf). (CVPR 2024)**
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.*

```bibtex
@InProceedings{mobileclip2024,
  author = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2024},
}
```