Update README

README.md (CHANGED)
@@ -37,28 +37,43 @@ task_ids:
## Table of Contents

- [DiffusionDB](#diffusiondb)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
    - [Loading Data Subsets](#loading-data-subsets)
      - [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
      - [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
        - [Usage/Examples](#usageexamples)
          - [Downloading a single file](#downloading-a-single-file)
          - [Downloading a range of files](#downloading-a-range-of-files)
          - [Downloading to a specific directory](#downloading-to-a-specific-directory)
          - [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
      - [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

@@ -70,7 +85,7 @@ task_ids:

### Dataset Summary

DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.

DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).

@@ -86,7 +101,7 @@ The text in the dataset is mostly English. It also contains other languages such

We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.

```bash
./
├── images
│   ├── part-000001

@@ -179,23 +194,53 @@ from datasets import load_dataset

dataset = load_dataset('poloclub/diffusiondb', 'random_1k')
```
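
To sanity-check what Method 1 returns, here is a minimal sketch of inspecting one record. The `train` split and the `prompt`/`image` field names are assumptions based on the dataset card, not guarantees made by this README:

```python
from datasets import load_dataset

# Load the 1,000-image random subset, as in Method 1 above.
dataset = load_dataset('poloclub/diffusiondb', 'random_1k')

# Assumed layout: a single 'train' split whose records pair each
# generated image with its prompt and hyperparameters.
sample = dataset['train'][0]
print(sample['prompt'])             # assumed field: the text prompt
sample['image'].save('sample.png')  # assumed field: a PIL image
```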

#### Method 2. Use the PoloClub Downloader

This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.

#### Usage/Examples

The script is run using command-line arguments as follows:

- `-i` `--index` - File to download, or the lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of the range of files to download if `-i` is set.
- `-o` `--output` - Name of a custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file(s) after downloading.

###### Downloading a single file

The specific file to download is supplied as the number at the end of its filename on Hugging Face. The script will automatically pad the number and generate the URL (see the sketch below).

```bash
python download.py -i 23
```
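
As a rough sketch of the padding and URL construction the script performs, the following mirrors the part URL pattern used elsewhere in this repo; the actual internals of `download.py` may differ:

```python
from urllib.request import urlretrieve

# Part indices run from 1 to 2000 and are zero-padded to six digits,
# e.g. 23 -> "part-000023.zip".
part_id = 23
part_url = f'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/images/part-{part_id:06}.zip'

# Fetch the zip archive for this part into the current directory.
urlretrieve(part_url, f'part-{part_id:06}.zip')
```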

###### Downloading a range of files

The lower and upper bounds of the set of files to download are set by the `-i` and `-r` flags respectively.

```bash
python download.py -i 1 -r 2000
```

Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination.
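
If you script your own downloads, the same confirmation is worth reproducing. Below is a hypothetical sketch of such a check using `shutil.disk_usage`; it is not the actual logic in `download.py`:

```python
import shutil

# The full dataset is roughly 1.7 TB, so check the destination first.
REQUIRED_BYTES = int(1.7 * 10**12)

def enough_space(destination: str = '.') -> bool:
    """Return True if the destination has at least 1.7 TB free."""
    return shutil.disk_usage(destination).free >= REQUIRED_BYTES

if not enough_space():
    print('Less than 1.7 TB free at the destination; aborting.')
```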

###### Downloading to a specific directory

The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.

```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```

Again, the script will automatically add the `/` between the directory and the file when it downloads.

###### Setting the files to unzip once they've been downloaded

The script is set to unzip the files _after_ all files have downloaded, as both can be lengthy processes in certain circumstances.

```bash
python download.py -i 1 -r 2000 -z
```
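
If you download without `-z` and want to unzip later, a minimal sketch of doing that step manually with Python's standard `zipfile` module might look like this (the `part-*.zip` naming follows the URL pattern above; the layout inside each archive is an assumption):

```python
import zipfile
from pathlib import Path

# Extract every downloaded part archive into a folder named after it,
# e.g. part-000001.zip -> part-000001/.
for archive in sorted(Path('.').glob('part-*.zip')):
    target = archive.with_suffix('')
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    print(f'extracted {archive} -> {target}')
```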

#### Method 3. Use `metadata.parquet` (Text Only)
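
This method loads only the prompts and hyperparameters, with no images. A minimal sketch, assuming `metadata.parquet` has already been downloaded into the working directory:

```python
import pandas as pd

# Load all prompt/hyperparameter rows as a single DataFrame.
metadata_df = pd.read_parquet('metadata.parquet')
print(metadata_df.head())
```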

@@ -218,7 +263,7 @@ metadata_df = pd.read_parquet('metadata.parquet')

### Curation Rationale

Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.

However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly detailed images, it has become common practice to add special keywords such as “trending on artstation” and “unreal engine” to the prompt.