jainr3 committed on
Commit f413812 • 1 Parent(s): 6b3bee3

Update README.md

Files changed (1)
  1. README.md +15 -45
README.md CHANGED
@@ -18,7 +18,7 @@ pretty_name: DiffusionDB-Pixelart
  size_categories:
  - n>1T
  source_datasets:
- - original
  tags:
  - stable diffusion
  - prompt engineering
@@ -84,7 +84,6 @@ task_ids:
  - **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
  - **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
  - **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- - **Point of Contact:** [Jay Wang](mailto:jayw@gatech.edu)
 
  ### Dataset Summary
 
@@ -102,23 +101,22 @@ The unprecedented scale and diversity of this human-actuated dataset provide exc
 
  The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
 
- ### Two Subsets
 
- DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
 
  |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
  |:--|--:|--:|--:|--:|--:|
- |DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
- |DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
 
- ##### Key Differences
 
  1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
- 2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
 
  ## Dataset Structure
 
- We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
 
  ```bash
  # DiffusionDB 2M
@@ -137,35 +135,7 @@ We use a modularized file structure to distribute DiffusionDB. The 2 million ima
  └── metadata.parquet
  ```
 
- ```bash
- # DiffusionDB Large
- ./
- ├── diffusiondb-large-part-1
- │   ├── part-000001
- │   │   ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
- │   │   ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
- │   │   ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
- │   │   ├── [...]
- │   │   └── part-000001.json
- │   ├── part-000002
- │   ├── part-000003
- │   ├── [...]
- │   └── part-010000
- ├── diffusiondb-large-part-2
- │   ├── part-010001
- │   │   ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
- │   │   ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
- │   │   ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
- │   │   ├── [...]
- │   │   └── part-010001.json
- │   ├── part-010002
- │   ├── part-010003
- │   ├── [...]
- │   └── part-014000
- └── metadata-large.parquet
- ```
-
- These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
 
 
  ### Data Instances
@@ -197,9 +167,9 @@ For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` an
 
  ### Dataset Metadata
 
- To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
 
- The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
 
  Below are three random rows from `metadata.parquet`.
 
@@ -211,7 +181,7 @@ Below are three random rows from `metadata.parquet`.
 
  #### Metadata Schema
 
- `metadata.parquet` and `metadata-large.parquet` share the same schema.
 
  |Column|Type|Description|
  |:---|:---|:---|
@@ -236,11 +206,11 @@ Below are three random rows from `metadata.parquet`.
 
  ### Data Splits
 
- For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
 
  ### Loading Data Subsets
 
- DiffusionDB is large (1.6 TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
 
  #### Method 1: Using Hugging Face Datasets Loader
 
@@ -251,7 +221,7 @@ import numpy as np
  from datasets import load_dataset
 
  # Load the dataset with the `large_random_1k` subset
- dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
  ```
 
  #### Method 2. Use the PoloClub Downloader
@@ -402,4 +372,4 @@ The Python code in this repository is available under the [MIT License](https://
 
  ### Contributions
 
- If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
 
  size_categories:
  - n>1T
  source_datasets:
+ - modified
  tags:
  - stable diffusion
  - prompt engineering
 
  - **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
  - **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
  - **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
 
  ### Dataset Summary
 
 
 
  The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
 
+ ### Subset
 
+ DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. This pixel-art version of the data is derived from DiffusionDB 2M and contains only 2,000 examples.
 
  |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
  |:--|--:|--:|--:|--:|--:|
+ |DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`|
 
+ ##### Key Facts
 
  1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
+ 2. Images in DiffusionDB 2M are stored in `png` format.
 
  ## Dataset Structure
 
+ We use a modularized file structure to distribute DiffusionDB. The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.
 
  ```bash
  # DiffusionDB 2M
 
  └── metadata.parquet
  ```
 
+ These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB-pixelart). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
 
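As a minimal sketch of that key-value layout: the UUID filename, prompt, and hyperparameter fields below are made-up placeholders, not taken from the real dataset.

```python
import json

# A made-up stand-in for one entry of a part JSON file: it maps an image
# filename to that image's prompt and hyperparameters. All names and
# values here are illustrative placeholders.
part_json = json.loads("""
{
  "0c46f719-1679-4c64-9ba9-f181e0eae811.png": {
    "prompt": "a pixel art landscape with mountains at sunset",
    "seed": 42,
    "step": 50,
    "cfg": 7.0
  }
}
""")

# Look up the prompt for a given image filename.
filename = "0c46f719-1679-4c64-9ba9-f181e0eae811.png"
prompt = part_json[filename]["prompt"]
print(prompt)
```

In the real part files, the same filename-to-attributes lookup applies to every image in that folder.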
  ### Data Instances
 
 
  ### Dataset Metadata
 
+ To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table `metadata.parquet` for DiffusionDB-pixelart.
 
+ The shape of `metadata.parquet` is (2000, 13), and each row represents an image. We store this table in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
 
  Below are three random rows from `metadata.parquet`.
 
 
 
 
  #### Metadata Schema
 
+ `metadata.parquet` has the following schema:
 
  |Column|Type|Description|
  |:---|:---|:---|
 
 
  ### Data Splits
 
+ For DiffusionDB-pixelart, we split the 2k images into folders where each folder contains 1,000 images and a JSON file.
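Given this layout, finding the folder that holds a particular image is simple arithmetic; a minimal sketch, assuming zero-based image indices and the `part-000001`-style folder names shown earlier:

```python
# Map a zero-based image index to its part folder, assuming 1,000 images
# per folder and the part-000001 naming convention described above.
def part_folder(image_index: int, images_per_folder: int = 1000) -> str:
    return f"part-{image_index // images_per_folder + 1:06d}"

print(part_folder(0))     # part-000001
print(part_folder(1999))  # part-000002
```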
 
  ### Loading Data Subsets
 
+ DiffusionDB is large! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
 
  #### Method 1: Using Hugging Face Datasets Loader
 
 
  from datasets import load_dataset
 
  # Load the dataset with the `large_random_1k` subset
+ dataset = load_dataset('jainr3/diffusiondb-pixelart', 'large_random_1k')
  ```
 
  #### Method 2. Use the PoloClub Downloader
 
 
  ### Contributions
 
+ If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang).