---
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: image
    dtype: image
  - name: image_url
    dtype: string
  - name: task
    dtype: string
  - name: split
    dtype: string
  - name: idx
    dtype: int64
  - name: hash
    dtype: string
  - name: input_caption
    dtype: string
  - name: output_caption
    dtype: string
  splits:
  - name: validation
    num_bytes: 766577760.29
    num_examples: 2022
  - name: test
    num_bytes: 1353975788
    num_examples: 3589
  download_size: 1904999338
  dataset_size: 2120553548.29
---
# Dataset Card for the Emu Edit Test Set
## Table of Contents

- [Dataset Description](#dataset-description)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description

- **Homepage:** https://emu-edit.metademolab.com/
- **Paper:** TODO
### Dataset Summary
To create a benchmark for image editing, we first define seven categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture). We then use the diverse set of input images from the MagicBrush benchmark and, for each editing operation, task crowd workers with devising relevant, creative, and challenging instructions. To increase the quality of the collected examples, we apply a post-verification stage in which crowd workers filter out examples with irrelevant instructions.

Finally, to support evaluation of methods that require input and output captions (e.g., prompt2prompt and pnp), we additionally collect an input caption and an output caption for each example. When doing so, we ask annotators to ensure that the captions capture both the important elements in the image and the elements that should change according to the instruction. Additionally, to support proper comparison with Emu Edit, we publicly release the model's generations on the test set here. For more details, please see our paper and project page.
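For reference, the sketch below shows one way to load the benchmark and inspect examples with the 🤗 `datasets` library. The repository id `facebook/emu_edit_test_set` is an assumption based on this card; the feature names come directly from the metadata above.

```python
# A minimal sketch, assuming the dataset is hosted under the repository id
# "facebook/emu_edit_test_set" (adjust if the card lives elsewhere).
from collections import Counter

from datasets import load_dataset

# Load both splits declared in the YAML metadata above.
ds = load_dataset("facebook/emu_edit_test_set")
validation, test = ds["validation"], ds["test"]

# Each example carries the features listed above: the edit instruction,
# the input image, the task category, and the input/output captions.
example = validation[0]
print(example["instruction"])
print(example["task"])  # one of: background, global, style, remove, add, local, texture
print(example["input_caption"])
print(example["output_caption"])

# Count how many test examples fall into each of the seven editing categories.
print(Counter(test["task"]))

# Methods that condition on captions (e.g., prompt2prompt-style approaches)
# can read the input/output caption pairs directly:
caption_pairs = list(zip(test["input_caption"], test["output_caption"]))
```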
## Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
## Citation Information
TODO