Modalities: Image, Text
Formats: parquet
Size: < 1K
Libraries: Datasets, pandas
JialingYK committed: Update Readme (commit 734a53b, parent 22401f3)
Files changed (1): README.md (+55 -0)
  - split: V1
    path: 9_change_style_80/V1-*
---
## What is PIE-Bench++?

PIE-Bench++ builds on the original [PIE-Bench dataset](https://cure-lab.github.io/PnPInversion) introduced by Ju et al. (2024) and is designed to provide a comprehensive benchmark for multi-aspect image editing evaluation. The enhanced dataset contains 700 images and prompts across nine distinct edit categories, encompassing a wide range of manipulations:

- **Object-Level Manipulations:** Additions, removals, and modifications of objects within the image.
- **Attribute-Level Manipulations:** Changes to the content, pose, color, and material of objects.
- **Image-Level Manipulations:** Adjustments to the background and overall style of the image.

While retaining the original images, the enhanced dataset features revised source prompts and editing prompts, augmented with additional metadata such as edit types and aspect mappings. This augmentation is intended to enable more nuanced and detailed evaluation in the domain of multi-aspect image editing.

## Data Annotation Guide

### Overview

Our dataset annotations are structured to provide comprehensive information for each image, facilitating a deeper understanding of the editing process. Each annotation consists of the following key elements:

- **Source Prompt:** The original description or caption of the image before any edits are made.
- **Target Prompt:** The description or caption of the image after the edits are applied.
- **Edit Action:** A detailed specification of the changes made to the image, including:
  - The position index in the source prompt where the change occurs.
  - The type of edit applied (1: change object, 2: add object, 3: remove object, 4: change attribute content, 5: change attribute pose, 6: change attribute color, 7: change attribute material, 8: change background, 9: change style).
  - The operation required to achieve the desired outcome: `'+'` / `'-'` mean adding or removing words at the specified position, while a literal word names the source word that the annotated phrase replaces.
- **Aspect Mapping:** A mapping that connects each edited object to its modified attributes, identifying which objects are subject to editing and which of their attributes are altered.
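For readers scripting against these annotations, the numeric edit-type codes and action symbols above can be expanded into readable form. The lookup table below follows the list above; the `describe_action` helper is purely illustrative and not part of the dataset:

```python
# Lookup table for the nine edit-type codes listed above.
EDIT_TYPES = {
    1: "change object",
    2: "add object",
    3: "remove object",
    4: "change attribute content",
    5: "change attribute pose",
    6: "change attribute color",
    7: "change attribute material",
    8: "change background",
    9: "change style",
}

def describe_action(word, spec):
    """Render one edit_action entry as a readable description."""
    kind = EDIT_TYPES[spec["edit_type"]]
    if spec["action"] == "+":
        op = f"insert '{word}'"          # '+' adds words at the position
    elif spec["action"] == "-":
        op = f"remove '{word}'"          # '-' removes words at the position
    else:
        op = f"replace '{spec['action']}' with '{word}'"  # action names the replaced word
    return f"{op} at position {spec['position']} ({kind})"

print(describe_action("dog", {"position": 1, "edit_type": 1, "action": "cat"}))
# replace 'cat' with 'dog' at position 1 (change object)
```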
### Example Annotation

Here is an example annotation for an image in our dataset:

```json
{
  "000000000002": {
    "image_path": "0_random_140/000000000002.jpg",
    "source_prompt": "a cat sitting on a wooden chair",
    "target_prompt": "a [red] [dog] [with flowers in mouth] [standing] on a [metal] chair",
    "edit_action": {
      "red": {"position": 1, "edit_type": 6, "action": "+"},
      "dog": {"position": 1, "edit_type": 1, "action": "cat"},
      "with flowers in mouth": {"position": 2, "edit_type": 2, "action": "+"},
      "standing": {"position": 2, "edit_type": 5, "action": "sitting"},
      "metal": {"position": 5, "edit_type": 7, "action": "wooden"}
    },
    "aspect_mapping": {
      "dog": ["red", "standing"],
      "chair": ["metal"],
      "flowers": []
    },
    "blended_words": [
      "cat,dog",
      "chair,chair"
    ],
    "mask": "0 262144"
  }
}
```
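As a sanity check, an `edit_action` can be replayed against its source prompt. The sketch below assumes that positions index whitespace-separated tokens of the source prompt and that `'+'` insertions land immediately before the indexed token; it reproduces the example above, but it is not an official reference implementation:

```python
def apply_edit_actions(source_prompt, edit_action):
    """Rebuild the target prompt from a source prompt and an edit_action.

    Assumption: positions index whitespace-separated source tokens,
    and '+' insertions are placed before the indexed token.
    """
    tokens = source_prompt.split()
    inserts = {}
    for phrase, spec in edit_action.items():
        pos, action = spec["position"], spec["action"]
        if action == "+":                       # add new words before token `pos`
            inserts.setdefault(pos, []).append(phrase)
        elif action == "-":                     # remove the token at `pos`
            tokens[pos] = None
        else:                                   # replace the named source word
            tokens[pos] = phrase
    out = []
    for i, tok in enumerate(tokens):
        out.extend(inserts.get(i, []))
        if tok is not None:
            out.append(tok)
    out.extend(inserts.get(len(tokens), []))    # insertions past the last token
    return " ".join(out)

edit_action = {
    "red": {"position": 1, "edit_type": 6, "action": "+"},
    "dog": {"position": 1, "edit_type": 1, "action": "cat"},
    "with flowers in mouth": {"position": 2, "edit_type": 2, "action": "+"},
    "standing": {"position": 2, "edit_type": 5, "action": "sitting"},
    "metal": {"position": 5, "edit_type": 7, "action": "wooden"},
}
print(apply_edit_actions("a cat sitting on a wooden chair", edit_action))
# a red dog with flowers in mouth standing on a metal chair
```

Note that the reconstructed string matches the example's `target_prompt` with the bracket markers stripped.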