OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting
Abstract
Diffusion-based generative models have revolutionized object-oriented image editing, yet their deployment in realistic object removal and insertion remains hampered by challenges such as the intricate interplay of physical effects and insufficient paired training data. In this work, we introduce OmniPaint, a unified framework that re-conceptualizes object removal and insertion as interdependent processes rather than isolated tasks. Leveraging a pre-trained diffusion prior along with a progressive training pipeline comprising initial paired sample optimization and subsequent large-scale unpaired refinement via CycleFlow, OmniPaint achieves precise foreground elimination and seamless object insertion while faithfully preserving scene geometry and intrinsic properties. Furthermore, our novel CFD metric offers a robust, reference-free evaluation of context consistency and object hallucination, establishing a new benchmark for high-fidelity image editing. Project page: https://yeates.github.io/OmniPaint-Page/
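The abstract frames removal and insertion as inverse, mutually constraining operations refined on unpaired data. As a rough illustration of that cycle-consistency idea, here is a minimal training-step sketch. The paper's actual CycleFlow operates on a pre-trained diffusion prior and its exact formulation is not given in the abstract, so every network, interface, and loss below is a hypothetical stand-in, not the authors' implementation.

```python
# Minimal cycle-consistency sketch (assumed, not the paper's CycleFlow):
# remove an object, re-insert it, and penalize deviation from the original.
import torch
import torch.nn as nn

class InpaintNet(nn.Module):
    """Placeholder for an inpainting network (stands in for a
    diffusion-prior-based model; a single conv keeps the sketch runnable)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(4, 3, kernel_size=3, padding=1)

    def forward(self, image, mask):
        # Concatenate the binary object mask as a conditioning channel.
        return self.body(torch.cat([image, mask], dim=1))

remover = InpaintNet()   # fills the masked region with background
inserter = InpaintNet()  # synthesizes the object back into the region
                         # (in practice also conditioned on an object
                         # reference; omitted here for brevity)

opt = torch.optim.AdamW(
    list(remover.parameters()) + list(inserter.parameters()), lr=1e-5
)

def cycle_step(image, mask):
    """One unpaired refinement step: remove, re-insert, then enforce
    cycle consistency against the original image."""
    background = remover(image, mask)           # object removed
    reconstructed = inserter(background, mask)  # object re-inserted
    loss = nn.functional.l1_loss(reconstructed, image)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random tensors; real training would use scene images
# with object masks, no paired before/after supervision required.
img = torch.rand(2, 3, 64, 64)
msk = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(cycle_step(img, msk))
```

The appeal of this setup is that neither direction needs ground-truth paired edits: the round trip itself supplies the supervision signal, which is what makes large-scale unpaired refinement feasible.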
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ObjectMover: Generative Object Movement with Video Prior (2025)
- MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input (2025)
- PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data (2025)
- IMFine: 3D Inpainting via Geometry-guided Multi-view Refinement (2025)
- Recovering Partially Corrupted Major Objects through Tri-modality Based Image Completion (2025)
- VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control (2025)
- Get In Video: Add Anything You Want to the Video (2025)