arxiv:2312.05107

DreaMoving: A Human Dance Video Generation Framework based on Diffusion Models

Published on Dec 8, 2023
· Featured in Daily Papers on Dec 11, 2023

Abstract

In this paper, we present DreaMoving, a diffusion-based controllable video generation framework that produces high-quality customized human dance videos. Specifically, given a target identity and posture sequences, DreaMoving can generate a video of the target identity dancing anywhere, driven by the posture sequences. To this end, we propose a Video ControlNet for motion control and a Content Guider for identity preservation. The proposed model is easy to use and can be adapted to most stylized diffusion models to generate diverse results. The project page is available at https://dreamoving.github.io/dreamoving.
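
The code and weights have not been released (see the Community question below), so the following is only a minimal, hypothetical PyTorch sketch of the two-branch conditioning flow the abstract describes: a Video ControlNet branch that turns the driving posture sequence into a motion signal, and a Content Guider branch that turns a reference identity image into an identity signal, both injected into the denoising backbone. All class names, layers, and tensor shapes here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DreaMoving's conditioning flow. Class names follow
# the paper's terminology; every signature and layer is an assumption.
import torch
import torch.nn as nn


class VideoControlNet(nn.Module):
    """Encodes a posture (pose) sequence into a per-frame motion signal."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Conv3d(3, channels, kernel_size=3, padding=1)

    def forward(self, pose_frames: torch.Tensor) -> torch.Tensor:
        # pose_frames: (batch, 3, frames, height, width) rendered skeletons
        return self.encoder(pose_frames)


class ContentGuider(nn.Module):
    """Maps a reference identity image to conditioning features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, kernel_size=3, padding=1)

    def forward(self, identity_image: torch.Tensor) -> torch.Tensor:
        # identity_image: (batch, 3, height, width)
        feat = self.encoder(identity_image)  # (B, C, H, W)
        return feat.unsqueeze(2)             # add a frame axis to broadcast


class DenoisingBackbone(nn.Module):
    """Stand-in for the video diffusion U-Net that consumes both signals."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.to_video = nn.Conv3d(channels, 3, kernel_size=3, padding=1)

    def forward(self, noisy_latents, motion, identity):
        # Identity features broadcast across the frame axis.
        h = self.body(noisy_latents + motion + identity)
        return self.to_video(h)


if __name__ == "__main__":
    B, F, H, W = 1, 8, 64, 64
    pose_seq = torch.randn(B, 3, F, H, W)   # driving posture sequence
    ref_img = torch.randn(B, 3, H, W)       # target identity image
    latents = torch.randn(B, 64, F, H, W)   # noisy video latents

    motion = VideoControlNet()(pose_seq)    # motion-control signal
    ident = ContentGuider()(ref_img)        # identity-preserving signal
    frames = DenoisingBackbone()(latents, motion, ident)
    print(frames.shape)  # torch.Size([1, 3, 8, 64, 64])
```

In the actual framework the backbone would be a pretrained (possibly stylized) diffusion U-Net with the two conditions injected at multiple resolutions; this sketch only shows where the motion and identity signals enter relative to the noisy latents.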

Community

[Image: DALL·E-generated sprite sheet of a ninja enveloped in black flames executing a sword attack, evenly divided into 9 frames of the same size]

Hey, I think your author Chen Shi mistakenly linked to me

Hey! Thanks for letting us know. We will remove the authorship from your account. :)

Interesting, but please stop calling this kind of movement dance. It is quite insulting. Call it movement to a rhythm, but this is definitely not dance.

Hello, fantastic work!! Is there any plan to release the code and model to allow in-house implementation?

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2312.05107 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2312.05107 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2312.05107 in a Space README.md to link it from this page.

Collections including this paper 20