Unfortunately, it's not the real thing

#5
by Rob2 - opened

It would be nice if it worked, but for now, no matter what I write, the result is the same: the character shown in the picture turns away. There is Stable Video Diffusion, which is a big favorite; the only thing I regretted there was that you can't specify what you want. You can do that here, but it doesn't do it :(

You just need to adapt to this model. I2vgen is great and, in my opinion, the best one at the moment. In this AI video race with Sora, SVD and the others, I would go all-in on the Alibaba Research Group. 100% of the success comes from using proper pictures.

This model is a bit hard to prompt; if you are not sure what you want to see, just relax and type "the". The model is smart: it will understand what is in your picture and animate it accordingly. Do not use Bing/DALL-E images without preprocessing - they are too grainy and noisy, and i2vgen will turn them into dust, because that's how it sees dispersed pixels. The same goes for images with a lot of dust in the air: reduce those particles in Photoshop before using them here. Your images also won't be animated well if they are too stylized, or if the character is made of shapes that are slightly separated from each other - i2vgen will try to animate those shapes separately. When I animate one scene, I prepare different versions of that image. Sometimes the model just hates your picture and you can't do anything about it.

Anyway, all current AI video models are good at something specific. If you want full control, use Gen-2. If you want funny, jumpy things, go to Pika on Discord. And if you need state-of-the-art natural animation with good physics and flow, i2vgen is your choice. It's always good to combine techniques and models.
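The "reduce those particles" advice above doesn't require Photoshop; a minimal Python sketch with Pillow can do a similar grain cleanup before an image is fed to i2vgen. The function name and filter settings here are illustrative assumptions, not the commenter's exact workflow:

```python
# Hypothetical preprocessing step: suppress grain/noise ("dispersed pixels")
# in an image before uploading it to i2vgen. Assumes Pillow is installed.
from PIL import Image, ImageFilter


def denoise_for_i2vgen(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    # Median filter removes isolated noisy pixels while keeping edges.
    img = img.filter(ImageFilter.MedianFilter(size=3))
    # A very light Gaussian blur smooths the remaining fine grain
    # without merging separate shapes in the character.
    img = img.filter(ImageFilter.GaussianBlur(radius=0.8))
    img.save(path_out)
```

The exact filter sizes are a judgment call: too aggressive and you lose the detail the model animates, too gentle and the grain still "turns into dust."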
