Differences between AnimateLCM-t2v, AnimateLCM-SVD-xt and AnimateLCM-I2V?

#1
by echobingo147 - opened

What are the differences between these three models, and do you have examples for the specific usage of each model?


I'm still figuring things out myself, so this is just until someone who knows better comes along. Also, this is how to do it in ComfyUI.

AnimateLCM-SVD-xt -> You'd use this in SVD workflows. Simply copy it to your main checkpoints folder and use it in place of the SVD models by SAI. For example, in ComfyUI you'd load it in the "Image Only Checkpoint Loader" node.
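If you'd rather script this outside ComfyUI, here's a rough diffusers sketch of the same drop-in idea. The local file name and the assumption that the checkpoint's keys line up with the diffusers UNet are mine; check the AnimateLCM-SVD-xt model card and demo code for the exact loading path.

```python
import torch
from safetensors.torch import load_file
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Start from the standard SVD-xt pipeline, then swap in the AnimateLCM weights.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Assumption: the downloaded .safetensors keys match the diffusers UNet layout.
# If they don't, follow whatever conversion the model card describes instead.
state_dict = load_file("AnimateLCM-SVD-xt.safetensors")
pipe.unet.load_state_dict(state_dict, strict=False)

image = load_image("input.png")  # placeholder input image
# The whole point of the LCM distillation: few steps, low guidance.
frames = pipe(
    image,
    num_inference_steps=4,
    min_guidance_scale=1.0,
    max_guidance_scale=1.2,
    decode_chunk_size=8,
).frames[0]
export_to_video(frames, "animatelcm_svd.mp4", fps=7)
```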

AnimateLCM-t2v AND AnimateLCM-i2v -> You'd use these in AnimateDiff workflows.

AnimateLCM-i2v -> This model lets you condition the animation on an image. Again with ComfyUI and AnimateDiff-Evolved: instead of the usual "Apply AnimateDiff Model" node, use an "ADE_ApplyAnimateLCMI2VModel" node and connect its output to the Use Evolved Sampling node.
When you create that node you'll see a motion model input on top: create an "ADE_LoadAnimateLCMI2VModel" node and load the i2v checkpoint (~2 GB) there. The second input is a purple latent input named "ref_latent": use another AnimateDiff-Evolved node, "Scale Ref and VAE Encode", and plug your reference image in there. The LoRA obviously goes into the "Lora Load Model Only" loader as usual.
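If you want to queue that graph headlessly instead of clicking through the UI, here's a minimal sketch using ComfyUI's HTTP API. It assumes you've already built the graph described above once in the UI, exported it with "Save (API Format)" as workflow_api.json, and that the server is running on the default 127.0.0.1:8188; the reference image filename is a placeholder.

```python
import json
import urllib.request

# Load the graph exported from ComfyUI with "Save (API Format)".
with open("workflow_api.json") as f:
    graph = json.load(f)

# Patch the reference image feeding the Scale Ref and VAE Encode branch.
# "LoadImage" is a stock ComfyUI node and "image" is its input field;
# "reference.png" must already exist in ComfyUI's input/ folder.
for node in graph.values():
    if node["class_type"] == "LoadImage":
        node["inputs"]["image"] = "reference.png"

# Queue the job on the local ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```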

The t2v flow is the same as with other motion models.
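For completeness, outside ComfyUI the t2v route maps onto the usual diffusers AnimateLCM recipe: AnimateDiff pipeline + the AnimateLCM motion module + the LCM LoRA + LCMScheduler. A minimal sketch, assuming the repo and file names from the model card; the SD1.5 base below is just an example, swap in whichever you like.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# AnimateLCM motion module loaded as an AnimateDiff motion adapter.
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism",          # example SD1.5 base model
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# The LCM LoRA (the "Lora Load Model Only" step in ComfyUI) becomes load_lora_weights here.
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_model_cpu_offload()

result = pipe(
    prompt="a corgi running on the beach, golden hour, high detail",
    negative_prompt="bad quality, worst quality",
    num_frames=16,
    guidance_scale=2.0,        # LCM-distilled models want low CFG
    num_inference_steps=6,     # and only a few steps
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(result.frames[0], "animatelcm_t2v.gif")
```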
