Update README.md
README.md (changed)
@@ -22,7 +22,7 @@ size_categories:
ImageRewardDB is a comprehensive text-to-image comparison dataset, focusing on text-to-image human preference.
It consists of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB.
To build ImageRewardDB, we design a pipeline tailored for it, establishing criteria for quantitative assessment and
annotator training, optimizing the labeling experience, and ensuring quality validation. ImageRewardDB is now publicly available at
[🤗 Hugging Face Dataset](https://huggingface.co/datasets/wuyuchen/ImageRewardDB).

### Languages
@@ -33,7 +33,7 @@ The text in the dataset is all in English.

Considering that ImageRewardDB contains a large number of images, we provide four subsets at different scales to support different needs.
For all subsets, the validation and test splits remain the same. The validation split (1.08GB) contains 412 prompts and 3.2K images, and
the test split (1.14GB) contains 466 prompts and 3.4K images. The information on the train split at different scales is as follows:

|Subset|Num of Images|Num of Prompts|Size|
|:--|--:|--:|--:|
@@ -44,10 +44,10 @@ the test(1.14GB) split cotains 466 prompts and 3.4K images. The information of t
## Dataset Structure

All the data in this repository is stored in a well-organized way. The 62.6K images in ImageRewardDB are split into several folders,
stored in corresponding directories under "./images" according to their split. Each folder contains around 500 prompts, their corresponding
images, and a JSON file. The JSON file links each image with its corresponding prompt and annotation.
The file structure is as follows:
```
# ImageRewardDB
./
@@ -70,8 +70,8 @@ The file structure is as following:
├── metadata-validation.parquet
└── metadata-test.parquet
```
The sub-folders are named {split_name}_{part_id}, and the JSON file has the same name as the sub-folder.
Each image is a lossless WebP file with a unique name generated by [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
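
As a quick illustration, here is a minimal sketch of pairing one part folder's JSON annotations with its images. The layout `./images/train/train_1` and the assumption that the JSON file holds a list of records are guesses based on the structure above, not a confirmed API:

```python
import json
from pathlib import Path

# Hypothetical part folder following the {split_name}_{part_id} naming above.
part_dir = Path("./images/train/train_1")

# The JSON file shares the sub-folder's name and links each image
# to its prompt and annotation (assumed here to be a list of records).
with open(part_dir / "train_1.json", encoding="utf-8") as f:
    records = json.load(f)

for record in records[:3]:
    # Images are UUID-named lossless WebP files referenced by image_path.
    image_file = part_dir / Path(record["image_path"]).name
    print(record["prompt_id"], record["rank"], image_file.exists())
```
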
### Data Instances
@@ -100,12 +100,12 @@ For instance, below is the image of `1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp`
* image_amount_in_total: Total amount of images related to the prompt
* rank: The relative rank of the image among all related images
* overall_rating: The overall score of this image
* image_text_alignment_rating: The score of how well the generated image matches the given text
* fidelity_rating: The score of whether the output image is true to the shape and characteristics that the object should have
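
For illustration, a single annotation record under this schema might look like the sketch below. All values are invented for illustration only; the image name echoes the example image above:

```python
# A hypothetical annotation record; field names follow the schema,
# but every value here is made up for illustration.
record = {
    "image_path": "images/train/train_1/1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp",
    "prompt_id": "000864-0061",
    "prompt": "a painting of an ocean with clouds and birds",
    "classification": "Outdoor Scenes",
    "image_amount_in_total": 8,
    "rank": 2,
    "overall_rating": 6,
    "image_text_alignment_rating": 5,
    "fidelity_rating": 6,
}
```
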

### Data Splits

As mentioned above, all scales of the subsets we provide have three splits: "train", "validation", and "test".
All the subsets share the same validation and test splits.
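
For example, here is a minimal sketch of loading one subset and split with the 🤗 `datasets` library. The configuration name "1k" is an assumption for illustration; check the subset table above for the names actually published:

```python
from datasets import load_dataset

# "1k" is a hypothetical configuration name for the smallest subset;
# the validation and test splits are identical across all subsets.
dataset = load_dataset("wuyuchen/ImageRewardDB", "1k", split="validation")

# Each row carries the prompt and its annotation scores.
print(dataset[0]["prompt"], dataset[0]["overall_rating"])
```
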
### Dataset Metadata
@@ -114,7 +114,7 @@ We also include three metadata tables `metadata-train.parquet`, `metadata-valida
help you access and comprehend ImageRewardDB without downloading the Zip files.

All the tables share the same schema, and each row refers to an image. The schema is shown below,
and the JSON files we mentioned above share the same schema:

|Column|Type|Description|
|:---|:---|:---|
@@ -125,10 +125,10 @@ and actually the JSON files we mentioned above share the same schema:
|`image_amount_in_total`|`int`|Total amount of images related to the prompt.|
|`rank`|`int`|The relative rank of the image among all related images.|
|`overall_rating`|`int`|The overall score of this image.|
|`image_text_alignment_rating`|`int`|The score of how well the generated image matches the given text.|
|`fidelity_rating`|`int`|The score of whether the output image is true to the shape and characteristics that the object should have.|

Below is an example row from `metadata-train.parquet`:

|image_path|prompt_id|prompt|classification|image_amount_in_total|rank|overall_rating|image_text_alignment_rating|fidelity_rating|
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
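
As a usage sketch, these metadata tables can also be inspected directly with pandas, assuming a local download of `metadata-train.parquet` (pandas reads Parquet via `pyarrow` or `fastparquet`):

```python
import pandas as pd

# Each row of the metadata table describes one image.
df = pd.read_parquet("metadata-train.parquet")

# Columns follow the schema above; e.g., pick the highest-scored image
# per prompt (assuming a higher overall_rating means a better image).
best = df.sort_values("overall_rating", ascending=False).groupby("prompt_id").head(1)
print(best[["image_path", "prompt", "overall_rating"]].head())
```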