wxDai 
posted an update May 2
🔥Motion Latent Consistency Model🔥

Introducing MotionLCM💃, controlling and generating motion in milliseconds!

Hugging Face Space:
wxDai/MotionLCM
Hugging Face Paper:
MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model (2404.19759)

Project page: https://dai-wenxun.github.io/MotionLCM-page/
Paper: https://arxiv.org/pdf/2404.19759.pdf
Code: https://github.com/Dai-Wenxun/MotionLCM
Video: https://www.youtube.com/watch?v=BhrGmJYaRE4

MotionLCM supports inference pipelines of 1-4 steps, with almost no difference in quality between 1 and 4 steps. Generating a motion of approximately 200 frames takes only about 30 ms, which works out to roughly 6,600 frames per second.
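A quick back-of-the-envelope check of the throughput claim (200 frames in ~30 ms):

```python
# Sanity check: ~200 frames generated in ~30 ms end-to-end.
frames = 200
latency_s = 0.030  # ~30 ms

fps = frames / latency_s
print(f"{fps:.0f} frames per second")  # prints "6667 frames per second"
```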

Our MotionLCM achieves high-quality text-to-motion and precise motion control (under both sparse and dense conditions) in ~30 ms.

We integrated a control module, named Motion ControlNet, into the latent-space diffusion process to achieve controllable motion generation. Our control algorithm is approximately 1,000 times faster than the best-performing baseline, with comparable quality.
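To make the ControlNet idea concrete, here is a minimal NumPy sketch of the usual recipe: a trainable control branch whose output projection is zero-initialized and added residually to the frozen denoiser's features, so the control signal has no effect before training begins. All names and shapes here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser_features(z):
    """Stand-in for the frozen latent-diffusion denoiser's hidden features."""
    return np.tanh(z)

class MotionControlNetSketch:
    """Illustrative zero-initialized control branch (ControlNet-style).

    Because the output projection starts at zero, the pretrained
    generator's behavior is preserved at initialization.
    """
    def __init__(self, dim):
        self.w_ctrl = rng.standard_normal((dim, dim)) * 0.02  # trainable encoder
        self.w_zero = np.zeros((dim, dim))                    # zero-init output proj

    def __call__(self, z, control):
        h = denoiser_features(z)                    # frozen backbone features
        c = np.maximum(control @ self.w_ctrl, 0.0)  # encode the control signal
        return h + c @ self.w_zero                  # residual injection

dim = 8
z = rng.standard_normal((1, dim))     # latent motion code
ctrl = rng.standard_normal((1, dim))  # e.g. a sparse trajectory condition
out = MotionControlNetSketch(dim)(z, ctrl)

# At init, the zero projection leaves the frozen features unchanged.
assert np.allclose(out, denoiser_features(z))
```

During training the zero projection would become a learned matrix, letting the control signal steer the latent denoising without destabilizing the pretrained model.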

Very impressive! It's great quality, and incredibly fast even on the CPU.


Thanks!