InstantCharacter: Personalize Any Characters with a Scalable Diffusion Transformer Framework
Abstract
Current learning-based subject customization approaches, predominantly relying on U-Net architectures, suffer from limited generalization ability and compromised image quality. Meanwhile, optimization-based methods require subject-specific fine-tuning, which inevitably degrades textual controllability. To address these challenges, we propose InstantCharacter, a scalable framework for character customization built upon a foundation diffusion transformer. InstantCharacter demonstrates three fundamental advantages. First, it achieves open-domain personalization across diverse character appearances, poses, and styles while maintaining high-fidelity results. Second, the framework introduces a scalable adapter with stacked transformer encoders, which effectively processes open-domain character features and seamlessly interacts with the latent space of modern diffusion transformers. Third, to effectively train the framework, we construct a large-scale character dataset containing on the order of ten million samples. The dataset is systematically organized into paired (multi-view character images) and unpaired (text-image combinations) subsets. This dual-data structure enables simultaneous optimization of identity consistency and textual editability through distinct learning pathways. Qualitative experiments demonstrate the advanced capabilities of InstantCharacter in generating high-fidelity, text-controllable, and character-consistent images, setting a new benchmark for character-driven image generation. Our source code is available at https://github.com/Tencent/InstantCharacter.
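To make the adapter idea concrete, below is a minimal, hypothetical PyTorch sketch of stacked transformer encoders that refine character features from a frozen image encoder and project them into the token width of a diffusion transformer. The class name `CharacterAdapter` and all dimensions and layer counts are illustrative assumptions, not the released InstantCharacter implementation.

```python
# Sketch only: stacked transformer encoders refine open-domain character
# features and project them into the DiT hidden size so they can condition
# the diffusion transformer's attention. Names and sizes are assumptions.
import torch
import torch.nn as nn


class CharacterAdapter(nn.Module):
    def __init__(self, feat_dim=1024, dit_dim=3072, num_layers=4, num_heads=16):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim,
            nhead=num_heads,
            dim_feedforward=4 * feat_dim,
            batch_first=True,
            norm_first=True,
        )
        # Stacked transformer encoders that process character features.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Projection into the diffusion transformer's token width.
        self.proj = nn.Linear(feat_dim, dit_dim)

    def forward(self, char_feats):
        # char_feats: (batch, num_tokens, feat_dim) patch features from a
        # frozen image encoder applied to the reference character image.
        tokens = self.encoder(char_feats)
        return self.proj(tokens)  # (batch, num_tokens, dit_dim)


if __name__ == "__main__":
    adapter = CharacterAdapter()
    dummy_feats = torch.randn(1, 257, 1024)  # e.g. ViT-L/14-style patch tokens
    print(adapter(dummy_feats).shape)  # torch.Size([1, 257, 3072])
```

In the full framework these character tokens would condition the diffusion transformer, and training would alternate between paired multi-view data (for identity consistency) and unpaired text-image data (for textual editability), as the abstract describes.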
Community
A new "Instant" work from InstantX Team and Hunyuan Tencent
Proj Page: https://instantcharacter.github.io/
Github Code: https://github.com/Tencent/InstantCharacter
HF Demo: https://huggingface.co/spaces/InstantX/InstantCharacter
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Personalize Anything for Free with Diffusion Transformer (2025)
- FlexIP: Dynamic Control of Preservation and Personality for Customized Image Generation (2025)
- DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability (2025)
- InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity (2025)
- SkyReels-A2: Compose Anything in Video Diffusion Transformers (2025)
- CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance (2025)
- Less-to-More Generalization: Unlocking More Controllability by In-Context Generation (2025)