arxiv:2407.03471

Learning Action and Reasoning-Centric Image Editing from Videos and Simulations

Published on Jul 3 · Submitted by xhluca on Jul 9

Abstract

An image editing model should be able to perform diverse edits, ranging from object replacement and changes to attributes or style to performing actions or movement, which require many forms of reasoning. Current general instruction-guided editing models have significant shortcomings with action and reasoning-centric edits. Object, attribute, or stylistic changes can be learned from visually static datasets. High-quality data for action and reasoning-centric edits, on the other hand, is scarce and has to come from entirely different sources that cover, e.g., physical dynamics, temporality, and spatial reasoning. To this end, we meticulously curate the AURORA Dataset (Action-Reasoning-Object-Attribute), a collection of high-quality training data, human-annotated and curated from videos and simulation engines. We focus on a key aspect of quality training data: triplets (source image, prompt, target image) contain a single meaningful visual change described by the prompt, i.e., truly minimal changes between source and target images. To demonstrate the value of our dataset, we evaluate an AURORA-finetuned model on a new expert-curated benchmark (AURORA-Bench) covering 8 diverse editing tasks. Our model significantly outperforms previous editing models as judged by human raters. For automatic evaluations, we find important flaws in previous metrics and caution against their use on semantically hard editing tasks. Instead, we propose a new automatic metric that focuses on discriminative understanding. We hope that our efforts, (1) curating a quality training dataset and an evaluation benchmark, (2) developing critical evaluations, and (3) releasing a state-of-the-art model, will fuel further progress on general image editing.
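The abstract does not spell out the proposed discriminative metric, but the general idea can be illustrated: an edit should make the image match the instruction *better* than the untouched source does. Below is a minimal sketch of that idea using off-the-shelf CLIP; the model choice and scoring rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of a discriminative editing check (NOT the paper's
# exact metric): an edit "succeeds" only if the edited image is closer to
# the instruction than the unedited source, so a model that simply copies
# its input scores exactly 0.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def discriminative_score(source: Image.Image, edited: Image.Image, prompt: str) -> float:
    """Positive iff the edited image matches the prompt better than the source."""
    inputs = processor(text=[prompt], images=[source, edited],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    # Cosine similarity between each image and the instruction text.
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    sim_source, sim_edited = (img @ txt.T).squeeze(-1).tolist()
    return sim_edited - sim_source
```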

Community

Paper submitter

AURORA is a general image editing model + high-quality data that improves where previous work fails the most: performing action or movement edits, i.e., a kind of world-model setup.

First, the authors find that popular instruction-guided image editing models (MagicBrush, Pix2Pix, MGIE, …) are surprisingly bad outside the straightforward “inpainting” paradigm of editing (add/remove an object, change an attribute): they rarely succeed when asked to perform an "action edit".

Action edits (moving things, state changes, or human actions) could be learned from videos! But it turns out that videos in the wild are not a suitable source of high-quality image pairs 🙁 They are diverse and complex, but also noisy, with too few, too many, or meaningless changes between frames.
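To make that failure mode concrete, here is a hypothetical naive filter over video frame pairs (not the authors' pipeline, which relies on curated sources and human annotation): keep only pairs whose visual change falls in a "just enough" band. Even then, cuts, camera motion, and jitter slip through, which is why raw video alone doesn't yield clean editing pairs. The difference metric and thresholds below are made up.

```python
# Hypothetical illustration, NOT the AURORA curation method: filter frame
# pairs by normalized mean absolute pixel difference. Below LOW the pair
# shows no meaningful change; above HIGH the scene has likely cut away.
import numpy as np
from PIL import Image

LOW, HIGH = 0.02, 0.15  # made-up thresholds

def change_magnitude(frame_a: Image.Image, frame_b: Image.Image) -> float:
    """Mean absolute grayscale difference in [0, 1], at a fixed resolution."""
    size = (224, 224)
    a = np.asarray(frame_a.convert("L").resize(size), dtype=np.float32) / 255.0
    b = np.asarray(frame_b.convert("L").resize(size), dtype=np.float32) / 255.0
    return float(np.abs(a - b).mean())

def is_candidate_pair(frame_a: Image.Image, frame_b: Image.Image) -> bool:
    return LOW <= change_magnitude(frame_a, frame_b) <= HIGH
```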

Instead, they meticulously curate and crowdsource high-quality image pairs plus instruction text that depict a single meaningful action/change, drawn from selected video and simulation sources: the AURORA dataset (289K examples)
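For anyone who wants to poke at the triplets, a hedged loading sketch with the 🤗 datasets library; the repo id and column names below are guesses, so check the dataset card linked from this page for the real ones.

```python
from datasets import load_dataset

# Repo id and field names are assumptions -- verify against the dataset card.
ds = load_dataset("McGill-NLP/AURORA", split="train")
example = ds[0]
source_image = example["source"]  # assumed field: image before the edit
instruction = example["prompt"]   # assumed field: the edit instruction
target_image = example["target"]  # assumed field: image after the edit
```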

Exciting release! Looking forward to exploring the capabilities of AURORA in action and reasoning-centric image editing 🤗
