---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: word_scores
    dtype: string
  - name: alignment_score_norm
    dtype: float32
  - name: coherence_score_norm
    dtype: float32
  - name: style_score_norm
    dtype: float32
  - name: alignment_heatmap
    sequence:
      sequence: float16
  - name: coherence_heatmap
    sequence:
      sequence: float16
  - name: alignment_score
    dtype: float32
  - name: coherence_score
    dtype: float32
  - name: style_score
    dtype: float32
  splits:
  - name: train
    num_bytes: 25257389633.104
    num_examples: 13024
  download_size: 17856619960
  dataset_size: 25257389633.104
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-image
- text-classification
- image-classification
- image-to-text
- image-segmentation
language:
- en
tags:
- t2i
- preferences
- human
- flux
- midjourney
- imagen
- dalle
- heatmap
- coherence
- alignment
- style
- plausibility
pretty_name: Rich Human Feedback for Text to Image Models
size_categories:
- 1M<n<10M
---
<a href="https://www.rapidata.ai">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66f5624c42b853e73e0738eb/jfxR79bOztqaC6_yNNnGU.jpeg" width="250" alt="Rapidata Logo">
</a>
Building upon Google's research [Rich Human Feedback for Text-to-Image Generation](https://arxiv.org/abs/2312.10240), we have collected over 1.5 million responses from 152'684 individual humans using Rapidata via the [Python API](https://docs.rapidata.ai/). Collection took roughly 5 days.
If you get value from this dataset and would like to see more in the future, please consider liking it.
# Overview
We asked humans to evaluate AI-generated images on style, coherence, and prompt alignment. For images that contained flaws, participants were asked to identify the specific problematic areas. Additionally, for all images, participants identified words from the prompts that were not accurately represented in the generated images.
If you want to replicate the annotation setup, the steps are outlined at the [bottom](#replicating-the-annotation-setup).
This dataset and the annotation process is described in further detail in our blog post [Beyond Image Preferences](https://huggingface.co/blog/RapidataAI/beyond-image-preferences).
# Usage Examples
Accessing this data is easy with the Hugging Face `datasets` library. For quick demos or previews, we recommend setting `streaming=True`, as downloading the whole dataset can take a while.
```python
from datasets import load_dataset
ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)
```
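For a quick look at the schema, you can pull a single example from the stream and inspect its fields. This is a minimal sketch; the field names match the dataset features listed above.

```python
# Peek at the first example to see the available fields.
example = next(iter(ds))
print(example.keys())
print(example["prompt"])
print(example["alignment_score"], example["coherence_score"], example["style_score"])
```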
As an example, the snippets below show how to recreate the kinds of figures shown in the Data Summary section.
<details>
<summary>Click to expand Select Words example</summary>
The methods below can be used to produce figures similar to the ones shown in the Data Summary section.
Note that those figures were created with `matplotlib`; here we use `opencv` instead, as it makes calculating the text spacing much easier.
**Methods**
```python
from PIL import Image
from datasets import load_dataset
import cv2
import numpy as np
def get_colors(words):
    colors = []
    for item in words:
        intensity = item / max(words)
        value = np.uint8((1 - intensity) * 255)
        color = tuple(map(int, cv2.applyColorMap(np.array([[value]]), cv2.COLORMAP_AUTUMN)[0][0]))
        colors.append(color)
    return colors

def get_wrapped_text(text_color_pairs, font, font_scale, thickness, word_spacing, max_width):
    wrapped_text_color_pairs, current_line, line_width = [], [], 0
    for text, color in text_color_pairs:
        text_size = cv2.getTextSize(text, font, font_scale, thickness)[0]
        if line_width + text_size[0] > max_width:
            wrapped_text_color_pairs.append(current_line)
            current_line, line_width = [], 0
        current_line.append((text, color, text_size))
        line_width += text_size[0] + word_spacing
    wrapped_text_color_pairs.append(current_line)
    return wrapped_text_color_pairs

def add_multicolor_text(input, text_color_pairs, font_scale=1, thickness=2, word_spacing=20):
    image = cv2.cvtColor(np.array(input), cv2.COLOR_RGB2BGR)
    image_height, image_width, _ = image.shape
    font = cv2.FONT_HERSHEY_SIMPLEX
    wrapped_text = get_wrapped_text(text_color_pairs, font, font_scale, thickness, word_spacing, int(image_width * 0.95))
    position = (int(0.025 * image_width), int(word_spacing * 2))

    overlay = image.copy()
    cv2.rectangle(overlay, (0, 0), (image_width, int((len(wrapped_text) + 1) * word_spacing * 2)), (100, 100, 100), -1)
    out_img = cv2.addWeighted(overlay, 0.75, image, 0.25, 0)

    for idx, text_line in enumerate(wrapped_text):
        current_x, current_y = position[0], position[1] + int(idx * word_spacing * 2)
        for text, color, text_size in text_line:
            cv2.putText(out_img, text, (current_x, current_y), font, font_scale, color, thickness)
            current_x += text_size[0] + word_spacing
    return Image.fromarray(cv2.cvtColor(out_img, cv2.COLOR_BGR2RGB))
```
**Create figures**
```python
ds_words = ds.select_columns(["image", "prompt", "word_scores"])

for example in ds_words.take(5):
    image = example["image"]
    prompt = example["prompt"]
    word_scores = [s[1] for s in eval(example["word_scores"])]
    words = [s[0] for s in eval(example["word_scores"])]
    colors = get_colors(word_scores)
    display(add_multicolor_text(image, list(zip(words, colors)), font_scale=1, thickness=2, word_spacing=20))
```
</details>
<details>
<summary>Click to expand Heatmap example</summary>
**Methods**
```python
import cv2
import numpy as np
from PIL import Image
def overlay_heatmap(image, heatmap, alpha=0.3):
    cv2_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
    heatmap_normalized = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min())
    heatmap_normalized = np.uint8(255 * heatmap_normalized)
    heatmap_colored = cv2.applyColorMap(heatmap_normalized, cv2.COLORMAP_HOT)
    overlaid_image = cv2.addWeighted(cv2_image, 1 - alpha, heatmap_colored, alpha, 0)
    return Image.fromarray(cv2.cvtColor(overlaid_image, cv2.COLOR_BGR2RGB))
```
**Create figures**
```python
ds_heatmap = ds.select_columns(["image", "prompt", "alignment_heatmap"])

for example in ds_heatmap.take(5):
    image = example["image"]
    heatmap = example["alignment_heatmap"]
    if heatmap:
        display(overlay_heatmap(image, np.asarray(heatmap)))
```
</details>
<br/>
# Data Summary
## Word Scores
Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
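The `word_scores` column is stored as a string encoding a list of `[word, score]` pairs (the same structure used in the Select Words usage example above). Below is a minimal sketch for parsing it, using `ast.literal_eval` as a safer alternative to `eval`:

```python
import ast

for example in ds.take(3):
    # word_scores is a stringified list of [word, score] pairs
    pairs = ast.literal_eval(example["word_scores"])
    # highest scores = words judged least accurately depicted
    worst = sorted(pairs, key=lambda p: p[1], reverse=True)[:3]
    print(example["prompt"])
    print("  least accurately depicted:", worst)
```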
### Example Results:
| <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/lzlWHmLKBvBJhjGWP8xZZ.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/b38uskYWaGEgfeJQtKiaO.png" width="500"> |
|---|---|
| <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/4uWKVjZBA5aX2YDUYNpdV.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/f9JIuwDoNohy7EkDYILFm.png" width="500"> |
## Coherence
The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received at least 21 responses indicating the level of coherence on a scale of 1-5, which were then averaged to produce the final scores where 5 indicates the highest coherence.
Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
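As a rough illustration, the sketch below (using the streaming dataset from the usage examples) filters for such low-coherence images and checks whether an error heatmap was collected:

```python
# Collect a few low-coherence examples (score below 3.8) from the stream.
low_coherence = []
for example in ds:
    if example["coherence_score"] < 3.8:
        low_coherence.append((example["prompt"], example["coherence_score"], bool(example["coherence_heatmap"])))
    if len(low_coherence) >= 5:
        break

for prompt, score, has_heatmap in low_coherence:
    print(f"{score:.2f} | heatmap collected: {has_heatmap} | {prompt}")
```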
### Example Results:
| <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/sc-4ls9X0yO-hGN0VCDSX.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/J77EmYp4oyRRakkcRnaF9.png" width="500"> |
|---|---|
| <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/mRDdoQdc4_iy2JcLhdI7J.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/2N2KJyz4YOGT6N6tuUX8M.png" width="500"> |
## Alignment
The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?". Again, each image received at least 21 responses indicating the level of alignment on a scale of 1-5 (5 being the highest), which were then averaged.
For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
As mentioned in the Google paper, alignment is harder to annotate consistently: if, for example, an object is missing entirely, it is unclear to annotators what they should highlight.
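Combining this threshold with the `overlay_heatmap` helper from the usage examples above, the sketch below visualizes where annotators tapped for poorly aligned images:

```python
import numpy as np

shown = 0
for example in ds:
    # heatmaps were only collected for images with an alignment score below 3.2
    if example["alignment_score"] < 3.2 and example["alignment_heatmap"]:
        display(overlay_heatmap(example["image"], np.asarray(example["alignment_heatmap"])))
        shown += 1
    if shown >= 3:
        break
```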
### Example Results:
<style>
.example-results-grid {
display: grid;
grid-template-columns: repeat(2, 450px);
gap: 20px;
margin: 20px 0;
justify-content: left;
}
.result-card {
background-color: #fff;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
padding: 15px;
width: 450px;
}
.prompt {
margin-bottom: 10px;
font-size: 18px;
line-height: 1.4;
color: #333;
background-color: #f8f8f8;
padding: 10px;
border-radius: 5px;
}
.image-container img {
width: 450px;
height: auto;
border-radius: 4px;
}
@media (max-width: 1050px) {
.example-results-grid {
grid-template-columns: 450px;
}
}
</style>
<div class="example-results-grid">
<div class="result-card">
<div class="prompt">
<strong>Prompt:</strong> Three cats and one dog sitting on the grass.
</div>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/qCNWVSNjPsp8XQ3zliLcp.png" alt="Three cats and one dog">
</div>
</div>
<div class="result-card">
<div class="prompt">
<strong>Prompt:</strong> A brown toilet with a white wooden seat.
</div>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/M3buzP-5k4pRCxOi_ijxM.png" alt="Brown toilet">
</div>
</div>
<div class="result-card">
<div class="prompt">
<strong>Prompt:</strong> Photograph of a pale Asian woman, wearing an oriental costume, sitting in a luxurious white chair. Her head is floating off the chair, with the chin on the table and chin on her knees, her chin on her knees. Closeup
</div>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/ggYXUEbGppiTeL84pG-DP.png" alt="Asian woman in costume">
</div>
</div>
<div class="result-card">
<div class="prompt">
<strong>Prompt:</strong> A tennis racket underneath a traffic light.
</div>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/mT7sAbnO-w6ySXaeEqEki.png" alt="Racket under traffic light">
</div>
</div>
</div>
## Style
The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses rating it on a scale of 1-5, which were then averaged.
In contrast to other preference-collection methods, such as the Hugging Face image arena, these preferences were collected from people around the world (156 different countries) and from all walks of life, resulting in a more representative score.
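As a quick sanity check of the score distribution, you can aggregate style scores over a sample of the stream (a minimal sketch; the sample size is arbitrary):

```python
import numpy as np

# Average style score over a small streamed sample.
style_scores = [example["style_score"] for example in ds.take(500)]
print(f"mean: {np.mean(style_scores):.2f}, min: {np.min(style_scores):.2f}, max: {np.max(style_scores):.2f}")
```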
# About Rapidata
Rapidata's technology makes collecting human feedback at scale faster and more accessible than ever before. Visit [rapidata.ai](https://www.rapidata.ai/) to learn more about how we're revolutionizing human feedback collection for AI development.
# Other Datasets
We run a benchmark of the major image generation models; the results can be found on our [website](https://www.rapidata.ai/leaderboard/image-models). We rank the models according to their coherence/plausibility, their alignment with the given prompt, and style preference. The underlying 2M+ annotations can be found here:
- Link to the [Coherence dataset](https://huggingface.co/datasets/Rapidata/Flux_SD3_MJ_Dalle_Human_Coherence_Dataset)
- Link to the [Text-2-Image Alignment dataset](https://huggingface.co/datasets/Rapidata/Flux_SD3_MJ_Dalle_Human_Alignment_Dataset)
- Link to the [Preference dataset](https://huggingface.co/datasets/Rapidata/700k_Human_Preference_Dataset_FLUX_SD3_MJ_DALLE3)
We have also started to run a [video generation benchmark](https://www.rapidata.ai/leaderboard/video-models); it is still a work in progress and currently covers only 2 models. They are likewise analysed in terms of coherence/plausibility, alignment, and style preference.
# Replicating the Annotation Setup
For researchers interested in producing their own rich preference dataset, the Rapidata API can be used directly through Python. The code snippets below show how to replicate the modalities used in this dataset. Additional information is available in the [documentation](https://docs.rapidata.ai/).
<details>
<summary>Creating the Rapidata Client and Downloading the Dataset</summary>
First install the `rapidata` package, then create a `RapidataClient()`; this will be used to create and launch the annotation orders.
```bash
pip install rapidata
```
```python
from rapidata import RapidataClient, LabelingSelection, ValidationSelection
client = RapidataClient()
```
As example data, we will simply use images from this dataset. Make sure to set `streaming=True`, as downloading the whole dataset can take a significant amount of time.
```python
from datasets import load_dataset
ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)
ds = ds.select_columns(["image","prompt"])
```
Since we use streaming, we can extract the prompts and download the images we need like this:
```python
import os

tmp_folder = "demo_images"

# make folder if it doesn't exist
if not os.path.exists(tmp_folder):
    os.makedirs(tmp_folder)

prompts = []
image_paths = []
for i, row in enumerate(ds.take(10)):
    prompts.append(row["prompt"])
    # save image to disk
    save_path = os.path.join(tmp_folder, f"{i}.jpg")
    row["image"].save(save_path)
    image_paths.append(save_path)
```
</details>
<details>
<summary>Likert Scale Alignment Score</summary>
To launch a Likert-scale annotation order, we make use of the classification annotation modality. Below we show the setup for the alignment criterion.
The structure is the same for style and coherence; only the arguments need to be adjusted, i.e. different instructions, answer options, and validation sets (see the sketch after the code block below).
```python
# Alignment Example
instruction = "How well does the image match the description?"
answer_options = [
    "1: Not at all",
    "2: A little",
    "3: Moderately",
    "4: Very well",
    "5: Perfectly"
]

order = client.order.create_classification_order(
    name="Alignment Example",
    instruction=instruction,
    answer_options=answer_options,
    datapoints=image_paths,
    contexts=prompts,  # for alignment, prompts are required as context for the annotators.
    responses_per_datapoint=10,
    selections=[ValidationSelection("676199a5ef7af86285630ea6"), LabelingSelection(1)]  # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
)

order.run()  # This starts the order. Follow the printed link to see progress.
```
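As a sketch of how those arguments change for the other criteria, below is what a coherence order could look like. The instruction is taken from the Data Summary section above; the answer option wording and the validation set ID are placeholders, not values used for this dataset.

```python
# Coherence variant (sketch) - the option labels and validation set ID below are placeholders.
coherence_instruction = "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?"
coherence_options = [
    "1: Very many errors",
    "2: Many errors",
    "3: Some errors",
    "4: A few errors",
    "5: No errors"
]

coherence_order = client.order.create_classification_order(
    name="Coherence Example",
    instruction=coherence_instruction,
    answer_options=coherence_options,
    datapoints=image_paths,  # no prompt context: coherence is judged from the image alone
    responses_per_datapoint=10,
    selections=[ValidationSelection("YOUR_COHERENCE_VALIDATION_SET_ID"), LabelingSelection(1)]  # placeholder validation set ID
)
# coherence_order.run()  # uncomment to launch
```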
</details>
<details>
<summary>Alignment Heatmap</summary>
To produce heatmaps, we use the locate annotation modality. Below is the setup used for creating the alignment heatmaps.
```python
# Alignment heatmap example
# Note that the selected images may not actually have severely misaligned elements, but this is just for demonstration purposes.
order = client.order.create_locate_order(
    name="Alignment Heatmap Example",
    instruction="What part of the image does not match with the description? Tap to select.",
    datapoints=image_paths,
    contexts=prompts,  # for alignment, prompts are required as context for the annotators.
    responses_per_datapoint=10,
    selections=[ValidationSelection("67689e58026456ec851f51f8"), LabelingSelection(1)]  # here we use a pre-defined validation set for alignment. See https://docs.rapidata.ai/improve_order_quality/ for details
)

order.run()  # This starts the order. Follow the printed link to see progress.
```
</details>
<details>
<summary>Select Misaligned Words</summary>
To launch the annotation order for selecting misaligned words, we used the following setup:
```python
# Select words example
from rapidata import LanguageFilter

select_words_prompts = [p + " [No_Mistake]" for p in prompts]

order = client.order.create_select_words_order(
    name="Select Words Example",
    instruction="The image is based on the text below. Select mistakes, i.e., words that are not aligned with the image.",
    datapoints=image_paths,
    sentences=select_words_prompts,
    responses_per_datapoint=10,
    filters=[LanguageFilter(["en"])],  # here we add a filter to ensure only English-speaking annotators are selected
    selections=[ValidationSelection("6761a86eef7af86285630ea8"), LabelingSelection(1)]  # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
)

order.run()
```
</details>