---
license: cc-by-4.0
task_categories:
- image-to-text
- zero-shot-classification
size_categories:
- 1B<n<10B
---
# Dataset Card for DataComp_large_pool_BLIP2_captions

## Dataset Description

- **Paper:** https://arxiv.org/abs/2307.10350
- **Leaderboard:** https://www.datacomp.ai/leaderboard.html
- **Point of Contact:** Thao Nguyen (thaottn@cs.washington.edu)

### Dataset Summary

This dataset contains synthetic captions, generated by BLIP2, for images in the large pool (1B-10B scale) of the DataComp benchmark.

### Supported Tasks and Leaderboards

We have used this dataset to pre-train CLIP models and found that, averaged across the 38 evaluation tasks proposed by DataComp, they rival or outperform models trained on raw web captions.
Refer to the DataComp leaderboard (https://www.datacomp.ai/leaderboard.html) for the top baselines uncovered in our work.

### Languages

Primarily English.

## Dataset Structure

### Data Instances

Each instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption, generated with temperature 0.75.

### Data Fields

- `uid`: the SHA256 hash of the image, provided as metadata by the DataComp team.
- `blip2-cap`: the corresponding caption generated by BLIP2.

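To make the field layout concrete, here is a minimal Python sketch of how a `uid` and its caption relate. It assumes the uid is the SHA256 hex digest of the raw image bytes (the exact bytes hashed should be checked against the DataComp metadata), and the sample record is hypothetical:

```python
import hashlib

def image_uid(image_bytes: bytes) -> str:
    # Assumed convention: uid = SHA256 hex digest of the raw image bytes.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical record shape: one row per image, keyed by uid.
record = {
    "uid": image_uid(b"<raw image bytes>"),
    "blip2-cap": "a dog running on a beach",
}
```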
### Data Splits

The data is not split; the dataset is intended for pre-training multimodal models.

## Dataset Creation

### Curation Rationale

Web-crawled image-text data can be noisy: a caption may not reflect the content of its image. Filtering out noisy web data, however, reduces the diversity of the training set.
To address both issues, we use image captioning models to increase the number of useful training samples from the initial pool by making the captions more relevant to the images.
Our work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data in the context of CLIP pre-training.

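The replace-or-complement idea can be sketched as a join on `uid`. The field names follow this card; the records and the mixing helper below are hypothetical, and the actual baselines in the paper differ in how they mix raw and synthetic text:

```python
import random

# Hypothetical raw DataComp records: uid -> noisy web-crawled caption.
raw_captions = {
    "uid_1": "IMG_2047.JPG",
    "uid_2": "buy cheap sunglasses online",
}

# Synthetic captions from this dataset: uid -> blip2-cap.
blip2_captions = {
    "uid_1": "a dog running on a beach",
    "uid_2": "a pair of black sunglasses on a table",
}

def replace_captions(raw, synthetic):
    """Replace: use the BLIP2 caption whenever one exists for the uid."""
    return {uid: synthetic.get(uid, text) for uid, text in raw.items()}

def mix_captions(raw, synthetic, p_synthetic=0.5, seed=0):
    """Complement: per sample, keep the raw caption or swap in the
    synthetic one with probability p_synthetic."""
    rng = random.Random(seed)
    return {
        uid: synthetic[uid] if uid in synthetic and rng.random() < p_synthetic else text
        for uid, text in raw.items()
    }
```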
### Source Data

#### Initial Data Collection and Normalization

The original 1.28B image-text pairs were collected by the DataComp team from Common Crawl. Only minimal filtering was performed on the initial data pool (face blurring, NSFW removal, and train-test deduplication).
We then replaced the original web-crawled captions with synthetic captions generated by BLIP2.

#### Who are the source language producers?

Common Crawl is the source of the images; BLIP2 is the source of the text data.

### Annotations

#### Annotation process

The dataset was built through a fully automated process: all captions were generated by the BLIP2 captioning model.

#### Who are the annotators?

No human annotators were involved.

### Personal and Sensitive Information

The images, which we inherit from the DataComp benchmark, have already undergone face detection and face blurring. While the DataComp team attempted to remove NSFW instances, a small amount of such content may still remain in this dataset.
Due to the large scale of this dataset, the content has not been manually verified to be completely safe. We therefore strongly recommend that this dataset be used only for research purposes.

## Considerations for Using the Data

### Social Impact of Dataset

The publication contains some preliminary analyses of the fairness implications of training on this dataset, based on evaluations on FairFace.

### Discussion of Biases

Refer to the publication for more details.

### Other Known Limitations

Refer to the publication for more details.

## Additional Information

### Citation Information

```bibtex
@article{nguyen2023improving,
  title={Improving Multimodal Datasets with Image Captioning},
  author={Nguyen, Thao and Gadre, Samir Yitzhak and Ilharco, Gabriel and Oh, Sewoong and Schmidt, Ludwig},
  journal={arXiv preprint arXiv:2307.10350},
  year={2023}
}
```