arxiv:2407.02489

Magic Insert: Style-Aware Drag-and-Drop

Published on Jul 2 · Submitted by flavoredquark on Jul 3
Abstract

We present Magic Insert, a method for dragging-and-dropping subjects from a user-provided image into a target image of a different style in a physically plausible manner while matching the style of the target image. This work formalizes the problem of style-aware drag-and-drop and presents a method for tackling it by addressing two sub-problems: style-aware personalization and realistic object insertion in stylized images. For style-aware personalization, our method first fine-tunes a pretrained text-to-image diffusion model using LoRA and learned text tokens on the subject image, and then infuses it with a CLIP representation of the target style. For object insertion, we use Bootstrapped Domain Adaption to adapt a domain-specific photorealistic object insertion model to the domain of diverse artistic styles. Overall, the method significantly outperforms traditional approaches such as inpainting. Finally, we present a dataset, SubjectPlop, to facilitate evaluation and future progress in this area. Project page: https://magicinsert.github.io/
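Below is a minimal Python sketch of the two-stage flow described in the abstract, using the Hugging Face `transformers` CLIP API for the target-style representation. The `personalize_subject` and `insert_subject` helpers are hypothetical placeholders standing in for the paper's style-aware personalization (LoRA plus learned text tokens) and bootstrapped-domain-adapted insertion stages, which are not released as public APIs; only the CLIP calls are real library functions.

```python
# Sketch of the Magic Insert two-stage flow (assumptions noted below).
# Real: transformers CLIPModel / CLIPProcessor for the style embedding.
# Hypothetical: personalize_subject and insert_subject are placeholders
# for the paper's personalization and insertion modules.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

CLIP_ID = "openai/clip-vit-large-patch14"
clip_model = CLIPModel.from_pretrained(CLIP_ID)
clip_processor = CLIPProcessor.from_pretrained(CLIP_ID)


def style_embedding(target_image: Image.Image) -> torch.Tensor:
    """CLIP image embedding of the target image, used as a style signal."""
    inputs = clip_processor(images=target_image, return_tensors="pt")
    with torch.no_grad():
        feats = clip_model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)


def personalize_subject(subject: Image.Image, style: torch.Tensor) -> Image.Image:
    """Hypothetical stand-in: fine-tune a text-to-image diffusion model with
    LoRA and learned text tokens on the subject, then re-render the subject
    conditioned on the target-style embedding."""
    raise NotImplementedError("placeholder for the personalization stage")


def insert_subject(stylized: Image.Image, target: Image.Image,
                   position: tuple[int, int]) -> Image.Image:
    """Hypothetical stand-in: composite the stylized subject into the target
    image with plausible shadows and reflections (the paper adapts an
    insertion model to artistic styles via bootstrapped domain adaptation)."""
    raise NotImplementedError("placeholder for the insertion stage")


def magic_insert(subject_path: str, target_path: str,
                 position: tuple[int, int]) -> Image.Image:
    subject = Image.open(subject_path).convert("RGB")
    target = Image.open(target_path).convert("RGB")
    style = style_embedding(target)                    # 1. extract target style
    stylized = personalize_subject(subject, style)     # 2. style-aware personalization
    return insert_subject(stylized, target, position)  # 3. realistic insertion
```

The sketch only fixes the interfaces between the two sub-problems the abstract names; the actual conditioning mechanism and training procedures are described in the paper and on the project page.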

Community

Paper author · Paper submitter · edited Jul 3

With friends at Google, we announce 💜 Magic Insert 💜, a generative AI method for realistically inserting subjects from one image into another while adapting to the target image's style. There is a ✨ demo ✨ that you can access on the desktop version of the website. We're excited by the possibilities Magic Insert opens up for artistic creation, content creation, and the overall expansion of GenAI controllability.

web: https://magicinsert.github.io/
paper: https://arxiv.org/abs/2407.02489
demo: https://magicinsert.github.io/demo.html


