---
license: mit
task_categories:
  - text-to-image
language:
  - en
configs:
  - config_name: default
    data_files:
      - split: number_few
        path:
          - number/images/*_0-2_*.png
      - split: number_many
        path:
          - number/images/*_11-13_*.png
          - number/images/*_14-16_*.png
      - split: position_boundary
        path:
          - position/images/*_position_boundary_*.png
      - split: position_center
        path:
          - position/images/*_position_center_*.png
      - split: shape_horizontal
        path:
          - shape/images/*_H2W1_*.png
          - shape/images/*_H3W1_*.png
      - split: shape_vertical
        path:
          - shape/images/*_H1W2_*.png
          - shape/images/*_H1W3_*.png
      - split: size_tiny
        path:
          - size/images/*size_020_*.png
      - split: size_large
        path:
          - size/images/*size_090_*.png
          - size/images/*size_110_*.png
          - size/images/*size_130_*.png
          - size/images/*size_150_*.png
pretty_name: LayoutBench
---

# LayoutBench

Release of the LayoutBench dataset from *Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation* (CVPR 2024 Workshop).

See also LayoutBench-COCO for zero-shot evaluation on OOD layouts with real objects.

[Project Page] [Paper]

Authors: Jaemin Cho, Linjie Li, Zhengyuan Yang, Zhe Gan, Lijuan Wang, Mohit Bansal

## Summary

LayoutBench is a diagnostic benchmark that examines layout-guided image generation models on arbitrary, unseen layouts. LayoutBench consists of 8K images with 1K images per task:

- `number_few`
- `number_many`
- `position_center`
- `position_boundary`
- `size_tiny`
- `size_large`
- `shape_horizontal`
- `shape_vertical`
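The split names above correspond one-to-one to the image glob patterns declared in the dataset config. As an illustration (the helper name `split_of` and the example path are ours; matching uses Python's standard `fnmatch`):

```python
from fnmatch import fnmatch

# Split name -> image glob patterns, as declared in the dataset config above.
SPLIT_PATTERNS = {
    "number_few": ["number/images/*_0-2_*.png"],
    "number_many": ["number/images/*_11-13_*.png", "number/images/*_14-16_*.png"],
    "position_boundary": ["position/images/*_position_boundary_*.png"],
    "position_center": ["position/images/*_position_center_*.png"],
    "shape_horizontal": ["shape/images/*_H2W1_*.png", "shape/images/*_H3W1_*.png"],
    "shape_vertical": ["shape/images/*_H1W2_*.png", "shape/images/*_H1W3_*.png"],
    "size_tiny": ["size/images/*size_020_*.png"],
    "size_large": ["size/images/*size_090_*.png", "size/images/*size_110_*.png",
                   "size/images/*size_130_*.png", "size/images/*size_150_*.png"],
}

def split_of(path):
    """Return the split a relative image path belongs to, or None."""
    for split, patterns in SPLIT_PATTERNS.items():
        if any(fnmatch(path, p) for p in patterns):
            return split
    return None

print(split_of("number/images/LayoutBench_val_number_0-2_000000.png"))  # number_few
```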

We assume that the layout-to-image generation models were trained on the CLEVR dataset (in-distribution). We then evaluate the models on LayoutBench (out-of-distribution). Below we compare CLEVR and LayoutBench examples.

*(Figure: CLEVR vs. LayoutBench examples)*

## How was it created?

To disentangle spatial control from other aspects of image generation, such as generating diverse objects, LayoutBench keeps the object configurations of CLEVR (objects with 3 shapes, 2 materials, and 8 colors; 48 combinations in total) and changes only the spatial layouts. Images in LayoutBench are collected in two steps:

1. Sample scenes for each skill, where a scene is defined by the objects and their positions.
2. Render images from the scenes with the Blender simulator (2.93.13) and obtain bounding box layouts.

## Skill Details

We measure 4 spatial control skills (number, position, size, shape), each with 2 OOD layout splits, for a total of 8 tasks (4 skills x 2 splits). We collect 8K images for LayoutBench evaluation, with 1K images per task.
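The 8 task names follow directly from the skill and split names; a minimal sketch:

```python
# The 8 LayoutBench tasks: 4 skills x 2 OOD splits each.
SKILL_SPLITS = {
    "number": ["few", "many"],
    "position": ["center", "boundary"],
    "size": ["tiny", "large"],
    "shape": ["horizontal", "vertical"],
}

TASKS = [f"{skill}_{split}"
         for skill, splits in SKILL_SPLITS.items()
         for split in splits]

print(TASKS)            # 8 task names, e.g. 'number_few', ..., 'shape_vertical'
print(len(TASKS) * 1000)  # 8000 images total (1K per task)
```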

### Skill 1: Number

This skill involves generating images with a specified number of objects. In contrast to ID CLEVR images, which contain 3∼10 objects, we evaluate models on two OOD splits:

1. **few**: images with 0∼2 objects
2. **many**: images with 11∼16 objects

### Skill 2: Position

This skill involves generating images with objects placed at specific positions. Unlike ID CLEVR images, where object positions are evenly distributed with little occlusion between objects, we design two OOD splits:

1. **center**: objects are placed at the center, leading to more occlusions
2. **boundary**: objects are placed only on the boundaries (top/bottom/left/right)

### Skill 3: Size

This skill involves generating images with objects of a specified size. We construct two OOD splits:

1. **tiny**: objects with scale 2
2. **large**: objects with scale {9, 11, 13, 15}

In comparison, objects in CLEVR images have only two scales, {3.5, 7}. We use 3∼5 objects for this skill, as we find that using more large objects than this can often obstruct object visibility.

### Skill 4: Shape

This skill involves generating images with objects of a specified aspect ratio. As the objects in CLEVR images mostly have square aspect ratios, we evaluate models with two OOD splits:

1. **horizontal**: objects in which one of the horizontal (x/y) axes is 2 or 3 times longer than the other, leading to object bounding boxes with an aspect ratio (width:height) of 2:1 or 3:1
2. **vertical**: objects whose vertical (z) axis is 2 or 3 times longer than the horizontal (x/y) axes, resulting in object bounding boxes with an aspect ratio of 1:2 or 1:3

We use 3∼5 objects for this skill, as we find that using more objects can often obstruct object visibility.
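The aspect-ratio conditions above can be checked directly on bounding boxes. A minimal sketch (the function name and tolerance are ours; the targets mirror the 2:1/3:1 and 1:2/1:3 ratios):

```python
import math

def shape_split(width, height, tol=0.05):
    """Classify a bounding box into a shape split by its width:height ratio.

    Returns 'horizontal' for ~2:1 or ~3:1 boxes, 'vertical' for ~1:2 or
    ~1:3 boxes, and None for roughly square boxes (as in ID CLEVR).
    """
    ratio = width / height
    for target in (2.0, 3.0):
        if math.isclose(ratio, target, rel_tol=tol):
            return "horizontal"
    for target in (0.5, 1.0 / 3.0):
        if math.isclose(ratio, target, rel_tol=tol):
            return "vertical"
    return None

print(shape_split(60, 20))  # horizontal (3:1)
```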

## Use of LayoutBench

### 1) Train your model on the CLEVR dataset

### 2) Evaluate your model on the LayoutBench main splits (4 skills x 2 splits = 8 tasks)

*(Figure: evaluation overview)*

We test the OOD layout skills of layout-guided image generation models trained on the CLEVR (ID) dataset. First, we generate images with LayoutBench (OOD) layouts. Then we detect objects in the generated images with an object detector and compute the layout accuracy as average precision (AP). Please see https://github.com/j-min/LayoutBench for evaluation guidelines with a pretrained DETR.
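At the core of the AP computation is matching detected boxes to ground-truth layout boxes by intersection-over-union (IoU). A minimal IoU sketch for `[x, y, width, height]` boxes (COCO convention; this is only the matching primitive, not the full AP pipeline in the repository):

```python
def box_iou(a, b):
    """IoU of two boxes in COCO [x, y, width, height] format."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # Intersection rectangle (zero if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

print(box_iou([0, 0, 10, 10], [5, 0, 10, 10]))  # 0.333... (50 / 150)
```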

### 3) (Optional) Fine-grained evaluation

As described in Sec. 5.3 of the paper, we also provide fine-grained evaluation splits for each skill. Specifically, we divide the 4 skills into more fine-grained splits that cover both in-distribution (ID; CLEVR configurations) and out-of-distribution (OOD; LayoutBench configurations) examples. We sample 200 images for each split and report layout accuracy.

## Dataset File Structure

For each skill, we provide the following files:

- **scene files**: created for image rendering with the Blender simulator. Each scene file includes the object configurations and their positions.
- **images**: rendered images from the scenes.
- **scene files in COCO format**: scene files converted into COCO format for evaluation.

The dataset file structure is as follows:

```
number/
    # layout metadata for main splits (1K each)
    scenes_number_few.json
    scenes_number_many.json

    # (optional - for fine-grained evaluation - see Sec. 5.3 in the paper for more details)
    # 200 scenes for each sub-split
    # (0-2 / 11-13 / 14-16 are parts of few/many; 3-5 / 6-8 / 9-10 were additionally
    # generated since the CLEVR dataset has 3-10 objects, so there are
    # 2 splits x 1000 images + 200 x 3 extra sub-splits = 2600 images in total)
    scenes_number_0-2_200.json
    scenes_number_3-5_200.json
    ...
    scenes_number_14-16_200.json

    scenes.json  # includes all scenes

    # actual images
    images/
        LayoutBench_val_number_0-2_000000.png
        ...
        LayoutBench_val_number_14-16_002599.png

    # scene files converted into COCO format for evaluation
    coco/
        # for main splits
        scenes_number_few_coco.json
        scenes_number_many_coco.json

        # for fine-grained analysis
        scenes_number_0-2_200_coco.json
        scenes_number_14-16_200_coco.json

# same structure for other skills
position/
shape/
size/
```
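Since the `coco/` scene files follow the standard COCO detection layout, they can be read with plain `json`. A sketch using a minimal inline stand-in instead of an actual file (the category name and box values are illustrative, not taken from the dataset):

```python
import json

# Minimal inline stand-in for e.g. number/coco/scenes_number_few_coco.json.
coco_text = """
{
  "images": [{"id": 0, "file_name": "LayoutBench_val_number_0-2_000000.png",
              "width": 480, "height": 320}],
  "annotations": [{"id": 0, "image_id": 0, "category_id": 1,
                   "bbox": [100.0, 80.0, 60.0, 60.0], "area": 3600.0}],
  "categories": [{"id": 1, "name": "cube_metal_red"}]
}
"""

coco = json.loads(coco_text)
names = {c["id"]: c["name"] for c in coco["categories"]}
for ann in coco["annotations"]:
    x, y, w, h = ann["bbox"]  # COCO boxes are [x, y, width, height]
    print(names[ann["category_id"]], (x, y, w, h))
```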

## Citation

```bibtex
@inproceedings{Cho2024LayoutBench,
  author    = {Jaemin Cho and Linjie Li and Zhengyuan Yang and Zhe Gan and Lijuan Wang and Mohit Bansal},
  title     = {Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation},
  booktitle = {The First Workshop on the Evaluation of Generative Foundation Models},
  year      = {2024},
}
```