yuvalkirstain committed
Commit 9f9c8a1
1 Parent(s): 779b904

Update README.md

Files changed (1)
  1. README.md +33 -2
README.md CHANGED
@@ -36,6 +36,37 @@ dataset_info:
  download_size: 1904999338
  dataset_size: 2120553548.29
  ---
- # Dataset Card for "emu_edit_test_set"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ # Dataset Card for the Emu Edit Test Set
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+ - [Additional Information](#additional-information)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://emu-edit.metademolab.com/
+ - **Paper:** TODO
+
+ ### Dataset Summary
+
+ To create a benchmark for image editing, we first define seven categories of image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).
+ We then utilize the diverse set of input images from the [MagicBrush benchmark](https://huggingface.co/datasets/osunlp/MagicBrush) and, for each editing operation, task crowd workers with devising relevant, creative, and challenging instructions.
+ Moreover, to increase the quality of the collected examples, we apply a post-verification stage in which crowd workers filter out examples with irrelevant instructions.
+ Finally, to support evaluation of methods that require input and output captions (e.g., Prompt-to-Prompt and Plug-and-Play), we additionally collect an input caption and an output caption for each example.
+ When doing so, we ask annotators to ensure that the captions capture both the important elements in the image and the elements that should change based on the instruction.
+ For more details, please see our [paper](TODO) and [project page](https://emu-edit.metademolab.com/).
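+
+ As a quick usage sketch, the snippet below loads the test set with the Hugging Face `datasets` library and tallies examples per editing operation. The repo id (`facebook/emu_edit_test_set`), the split name (`test`), and the column names (`task`, `instruction`, `input_caption`, `output_caption`) are assumptions inferred from this card, not a confirmed schema:
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ # Repo id and split are assumed from this card's title; adjust as needed.
+ ds = load_dataset("facebook/emu_edit_test_set", split="test")
+
+ # Tally examples per editing operation, assuming a `task` column holding
+ # one of: background, global, style, remove, add, local, texture.
+ print(Counter(ds["task"]))
+
+ # Each example is assumed to carry an instruction plus the caption pair
+ # collected for caption-based methods such as Prompt-to-Prompt.
+ example = ds[0]
+ print(example["instruction"])
+ print(example["input_caption"], "->", example["output_caption"])
+ ```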
+
+ ## Additional Information
+
+ ### Licensing Information
+
+ This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
+
+ ### Citation Information
+
+ TODO