vaishaal committed on
Commit
4beb87b
1 Parent(s): 0e1cc6c

Update README.md

Files changed (1)
  1. README.md  +4 -4
README.md CHANGED
@@ -18,9 +18,9 @@ dataset_info:
   - name: sha256
     dtype: string
   - name: original_height
-    dtype: int
+    dtype: int32
   - name: original_width
-    dtype: int
+    dtype: int32
 ---
 
 # Dataset Card for DataComp-12M
@@ -32,7 +32,7 @@ We distribute the image url-text samples and metadata under a standard Creative
 Image-text models trained on DataComp-12M are significantly better than on CC-12M/YFCC-15M as well as DataComp-Small/Medium.
 
 DataComp-12M was introduced in [MobileCLIP paper](https://arxiv.org/abs/2311.17049) and along with the reinforced dataset [DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M).
-The UIDs per shards match between [mlfoundations/DataComp-12M](https://huggingface.co/datasets/mlfoundations/DataComp-12M) and [apple/DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M)
+The UIDs per shards match between [mlfoundations/DataComp-12M](https://huggingface.co/datasets/mlfoundations/DataComp-12M) and [apple/DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M).
 
 ## Terms and Conditions
 
@@ -41,7 +41,7 @@ We have terms of service that are similar to those adopted by HuggingFace (https
 
 ## Citation
 
-**[Datacomp: In search of the next generation of multimodal datasets]. (NeurIPS 2024)**
+**[DataComp: In search of the next generation of multimodal datasets](https://arxiv.org/abs/2304.14108). (NeurIPS 2024)**
 Gadre, Samir Yitzhak, et al.
 ```
 @article{gadre2024datacomp,
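
One note on the schema hunk above: the `dataset_info` metadata block uses `datasets`/Arrow dtype names, and a bare `int` is not among them, which is presumably why `original_height` and `original_width` are redeclared as `int32`. The sketch below is illustrative only and not part of the commit; the default config and `train` split name are assumptions about how the Hub exposes this dataset.

```python
# Minimal, illustrative sketch (not part of the commit) of the corrected schema.
from datasets import Features, Value, load_dataset

# Fragment of the metadata schema matching the fixed dataset_info block above.
metadata_features = Features(
    {
        "sha256": Value("string"),
        "original_height": Value("int32"),
        "original_width": Value("int32"),
    }
)

# Spot-check a few streamed samples; the default config and "train" split
# are assumptions, not something stated in the diff.
ds = load_dataset("mlfoundations/DataComp-12M", split="train", streaming=True)
for i, sample in enumerate(ds):
    print(sample["sha256"], sample["original_height"], sample["original_width"])
    if i >= 2:
        break
```

With valid dtype names in place, the Hub viewer and `load_dataset` should be able to parse the height/width columns as 32-bit integers rather than tripping over an unrecognized `int` declaration.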