xiaohk committed on
Commit a9dd349
1 Parent(s): 65a4196

Update readme

Files changed (1)
  1. README.md +95 -37
README.md CHANGED
@@ -33,7 +33,7 @@ task_ids:
 
 # DiffusionDB
 
- <img width="100%" src="https://user-images.githubusercontent.com/15007159/198505835-bcc3a34f-a782-4064-989b-135e32b577a7.gif">
+ <img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
 
 ## Table of Contents
 
@@ -43,9 +43,13 @@ task_ids:
 - [Dataset Summary](#dataset-summary)
 - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
+ - [Two Subsets](#two-subsets)
+ - [Key Differences](#key-differences)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
+ - [Dataset Metadata](#dataset-metadata)
+ - [Metadata Schema](#metadata-schema)
 - [Data Splits](#data-splits)
 - [Loading Data Subsets](#loading-data-subsets)
 - [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
@@ -85,7 +89,7 @@ task_ids:
 
 ### Dataset Summary
 
- DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
+ DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 14 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
 
 DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
 
@@ -97,30 +101,71 @@ The unprecedented scale and diversity of this human-actuated dataset provide exc
 
 The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
 
+ ### Two Subsets
+
+ DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
+
+ |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
+ |:--|--:|--:|--:|--:|--:|
+ |DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
+ |DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
+
+ ##### Key Differences
+
+ 1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
+ 2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
+
 ## Dataset Structure
 
- We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.
+ We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
 
 ```bash
+ # DiffusionDB 2M
 ./
 ├── images
 │   ├── part-000001
 │   │   ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
 │   │   ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
 │   │   ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
- │   │   ├── 99c36256-2c20-40ac-8e83-8369e9a28f32.png
- │   │   ├── f3501e05-aef7-4225-a9e9-f516527408ac.png
 │   │   ├── [...]
 │   │   └── part-000001.json
 │   ├── part-000002
 │   ├── part-000003
- │   ├── part-000004
 │   ├── [...]
 │   └── part-002000
 └── metadata.parquet
 ```
 
+ ```bash
+ # DiffusionDB Large
+ ./
+ ├── diffusiondb-large-part-1
+ │   ├── part-000001
+ │   │   ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
+ │   │   ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
+ │   │   ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
+ │   │   ├── [...]
+ │   │   └── part-000001.json
+ │   ├── part-000002
+ │   ├── part-000003
+ │   ├── [...]
+ │   └── part-010000
+ ├── diffusiondb-large-part-2
+ │   ├── part-010001
+ │   │   ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
+ │   │   ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
+ │   │   ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
+ │   │   ├── [...]
+ │   │   └── part-010001.json
+ │   ├── part-010002
+ │   ├── part-010003
+ │   ├── [...]
+ │   └── part-014000
+ └── metadata-large.parquet
+ ```
+
- These sub-folders have names `part-00xxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a PNG file. The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
+ These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
+
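For illustration, a minimal sketch of reading one of these JSON files, assuming `part-000001` has already been downloaded and unzipped into `images/` as in the tree above (the `st`/`sa` keys follow the field list shown under Data Fields):

```python
import json
from pathlib import Path

# Path assumes part-000001 was downloaded and unzipped as in the tree above.
part_dir = Path("images/part-000001")

# The JSON file maps each image filename to its prompt and hyperparameters.
with open(part_dir / "part-000001.json", encoding="utf-8") as f:
    part_data = json.load(f)

# Look up one image by its UUID filename (listed in the tree above).
record = part_data["3bfcd9cf-26ea-4303-bbe1-b095853f5360.png"]
print(record)                      # full prompt + hyperparameter record
print(record["st"], record["sa"])  # steps and sampler (see Data Fields)
```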
 
 ### Data Instances
 
@@ -149,38 +194,50 @@ For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` an
 - `st`: Steps
 - `sa`: Sampler
 
- At the top-level folder of DiffusionDB, we include a metadata table in Parquet format `metadata.parquet`.
- This table has seven columns: `image_name`, `prompt`, `part_id`, `seed`, `step`, `cfg`, and `sampler`, and it has 2 million rows where each row represents an image. We choose Parquet because it is column-based: researchers can efficiently query individual columns (e.g., prompts) without reading the entire table. Below are five random rows from the table.
-
- | image_name | prompt | part_id | seed | step | cfg | sampler |
- |:--|:--|--:|--:|--:|--:|--:|
- | 49f1e478-ade6-49a8-a672-6e06c78d45fc.png | ryan gosling in fallout 4 kneels near a nuclear bomb | 1643 | 2220670173 | 50 | 7.0 | 8 |
- | b7d928b6-d065-4e81-bc0c-9d244fd65d0b.png | A beautiful robotic woman dreaming, cinematic lighting, soft bokeh, sci-fi, modern, colourful, highly detailed, digital painting, artstation, concept art, sharp focus, illustration, by greg rutkowski | 87 | 51324658 | 130 | 6.0 | 8 |
- | 19b1b2f1-440e-4588-ba96-1ac19888c4ba.png | bestiary of creatures from the depths of the unconscious psyche, in the style of a macro photograph with shallow dof | 754 | 3953796708 | 50 | 7.0 | 8 |
- | d34afa9d-cf06-470f-9fce-2efa0e564a13.png | close up portrait of one calico cat by vermeer. black background, three - point lighting, enchanting, realistic features, realistic proportions. | 1685 | 2007372353 | 50 | 7.0 | 8 |
- | c3a21f1f-8651-4a58-a4d4-7500d97651dc.png | a bottle of jack daniels with the word medicare replacing the word jack daniels | 243 | 1617291079 | 50 | 7.0 | 8 |
-
- To save space, we use an integer to encode the `sampler` in the table above.
-
- |Sampler|Integer Value|
- |:--|--:|
- |ddim|1|
- |plms|2|
- |k_euler|3|
- |k_euler_ancestral|4|
- |k_heun|5|
- |k_dpm_2|6|
- |k_dpm_2_ancestral|7|
- |k_lms|8|
- |others|9|
+ ### Dataset Metadata
+
+ To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables, `metadata.parquet` and `metadata-large.parquet`, for DiffusionDB 2M and DiffusionDB Large, respectively.
+
+ The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
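For illustration, a minimal sketch of such a column query, assuming `metadata.parquet` sits in the working directory and `pandas` plus a Parquet engine such as `pyarrow` are installed:

```python
import pandas as pd

# Read just two columns; Parquet skips the other eleven entirely.
metadata = pd.read_parquet("metadata.parquet", columns=["image_name", "prompt"])
print(metadata.head())
```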
+
+ Below are three random rows from `metadata.parquet`.
+
+ | image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
+ |:--|:--|--:|--:|--:|--:|--:|--:|--:|:--|:--|--:|--:|
+ | 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
+ | a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
+ | 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
+
+ #### Metadata Schema
+
+ `metadata.parquet` and `metadata-large.parquet` share the same schema.
+
+ |Column|Type|Description|
+ |:---|:---|:---|
+ |`image_name`|`string`|Image UUID filename.|
+ |`prompt`|`string`|The text prompt used to generate this image.|
+ |`part_id`|`uint16`|Folder ID of this image.|
+ |`seed`|`uint32`|Random seed used to generate this image.|
+ |`step`|`uint16`|Step count (hyperparameter).|
+ |`cfg`|`float32`|Guidance scale (hyperparameter).|
+ |`sampler`|`uint8`|Sampler method (hyperparameter). Mapping: {1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}.|
+ |`width`|`uint16`|Image width.|
+ |`height`|`uint16`|Image height.|
+ |`user_name`|`string`|The SHA256 hash of the Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image had been deleted before we scraped it a second time.|
+ |`timestamp`|`timestamp`|UTC timestamp of when this image was generated. None means the image had been deleted before we scraped it a second time. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.|
+ |`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
+ |`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).|
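Since `sampler` is stored as an integer, decoding it back to a name is a simple lookup; a sketch using the mapping from the `sampler` row above:

```python
import pandas as pd

# Integer-to-name mapping copied from the `sampler` row of the schema above.
SAMPLERS = {
    1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral",
    5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others",
}

metadata = pd.read_parquet("metadata.parquet", columns=["image_name", "sampler"])
metadata["sampler_name"] = metadata["sampler"].map(SAMPLERS)
print(metadata.head())
```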
+
+ > **Warning**
+ > Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect, so DiffusionDB still contains some NSFW images. Therefore, we compute and provide NSFW scores for images and prompts using state-of-the-art models. The distribution of these scores can be found in our [research paper](https://arxiv.org/abs/2210.14896). Please choose an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
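For example, a minimal filtering sketch; the 0.5 cutoff below is an arbitrary illustration, not a recommended value:

```python
import pandas as pd

metadata = pd.read_parquet("metadata.parquet")

# Keep rows below an illustrative 0.5 cutoff on both scores; images flagged
# and blurred by Stable Diffusion carry image_nsfw == 2.0 and are dropped too.
safe = metadata[(metadata["image_nsfw"] < 0.5) & (metadata["prompt_nsfw"] < 0.5)]
print(f"Kept {len(safe)} of {len(metadata)} rows")
```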
 
 ### Data Splits
 
- We split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file.
+ For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
 
 ### Loading Data Subsets
 
- DiffusionDB is large (1.6TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
+ DiffusionDB is large (1.6TB or 6.5TB)! However, with our modularized file structure, you can easily load a desired number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
 
 #### Method 1: Using Hugging Face Datasets Loader
 
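For illustration, Method 1 boils down to the `datasets` call quoted in the next hunk header; a minimal sketch, assuming the `datasets` library is installed:

```python
from datasets import load_dataset

# 'random_1k' is the subset name quoted in the hunk header below; it loads a
# small random sample instead of the full dataset.
dataset = load_dataset('poloclub/diffusiondb', 'random_1k')
print(dataset)  # inspect the splits and columns the loader exposes
```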
@@ -198,7 +255,7 @@ dataset = load_dataset('poloclub/diffusiondb', 'random_1k')
 
 This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
 
- #### Usage/Examples
+ ##### Usage/Examples
 
 The script is run using command-line arguments as follows:
 
@@ -206,6 +263,7 @@ The script is run using command-line arguments as follows:
 - `-r` `--range` - Upper bound of the range of files to download if `-i` is set.
 - `-o` `--output` - Name of a custom output directory. Defaults to the current directory if not set.
 - `-z` `--unzip` - Unzip the file/files after downloading.
+ - `-l` `--large` - Download from DiffusionDB Large. Defaults to DiffusionDB 2M.
 
 ###### Downloading a single file
 
@@ -268,7 +326,7 @@ Recent diffusion models have gained immense popularity by enabling high-quality
 However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
 
 Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
- To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 2 million real prompt-image pairs.
+ To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
 
 ### Source Data
 
@@ -308,7 +366,7 @@ It should note that we collect images and their prompts from the Stable Diffusio
 
 ### Discussion of Biases
 
- The 2 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
+ The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to run Stable Diffusion before its public release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
 
 ### Other Known Limitations
 
@@ -319,7 +377,7 @@ Therefore, different models can need users to write different prompts. For examp
 
 ### Dataset Curators
 
- DiffusionDB is created by [Jay Wang](https://zijie/wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
+ DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
 
 
 ### Licensing Information