arxiv:2405.18407

Phased Consistency Model

Published on May 28
· Submitted by akhaliq on May 29
#1 Paper of the day

Abstract

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (a.k.a., LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM. We investigate the reasons behind these limitations and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all identified limitations. Our evaluations demonstrate that PCM significantly outperforms LCM across 1–16 step generation settings. While PCM is specifically designed for multi-step refinement, it achieves 1-step generation results superior or comparable to those of previous state-of-the-art methods specifically designed for 1-step generation. Furthermore, we show that PCM's methodology is versatile and applicable to video generation, enabling us to train a state-of-the-art few-step text-to-video generator. More details are available at https://g-u-n.github.io/projects/pcm/.
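To make the phased design concrete: as we understand the abstract, PCM splits the diffusion trajectory into several sub-trajectories (phases) and learns a consistency mapping within each, so multi-step sampling chains one deterministic model call per phase. Below is a minimal Python sketch under that reading; `model`, `phase_starts`, and `text_emb` are illustrative names, not the authors' API.

```python
import torch

@torch.no_grad()
def pcm_sample(model, x, phase_starts, text_emb):
    """Illustrative M-step phased consistency sampling (a sketch).

    `phase_starts` lists phase-boundary timesteps in decreasing
    order, e.g. [999, 749, 499, 249]; the last phase ends at t = 0.
    Each call jumps from the current phase boundary to the next,
    with no stochastic re-noising between steps.
    """
    for t in phase_starts:
        x = model(x, t, text_emb)  # one consistency jump per phase
    return x
```

Read this way, more phases trade extra network evaluations for shorter, easier-to-learn jumps, which is where the multi-step refinement comes from.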

Community

It would be nice to have a comparison of LCM and PCM in Figure 1 at the same CFG scale: once at 6.5 for both and once at 2.0 for both, instead of LCM at 2.0/2.5 and PCM at 7.5/6.0. Also, between Fig. 1 (1) and Fig. 1 (3), why switch the PCM and LCM sides? It's a bit confusing.

Paper author

Hi. This is exactly the first issue with LCM: it cannot apply a CFG scale larger than 2, which causes the overexposure problem.
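For readers who want the mechanism behind that: classifier-free guidance extrapolates from the unconditional prediction toward the conditional one, so large scales amplify the difference term and can push image statistics out of range, which shows up as the washed-out, overexposed look. A minimal sketch in the usual diffusers style; `unet` and the embedding arguments are illustrative, not the paper's code:

```python
def cfg_noise_pred(unet, latents, t, cond_emb, uncond_emb, scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one. A large `scale` amplifies
    # the (cond - uncond) term, which is what produces overexposure
    # when a distilled model only tolerates CFG near 1-2.
    eps_uncond = unet(latents, t, uncond_emb)
    eps_cond = unet(latents, t, cond_emb)
    return eps_uncond + scale * (eps_cond - eps_uncond)
```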

So far, most of the examples just show PCM producing much more blurred, airbrushed, low-detail images... Seems neat for generating heavily-photoshopped-looking faces.

I see the benefits in image stability but...

There's a simple-English rewrite of the paper here - feedback from the authors is welcome! https://www.aimodels.fyi/papers/arxiv/phased-consistency-model

Unleashing the Phased Consistency Model - Efficient Image Generation Explained!

πŸ‘‰ Subscribe: https://www.youtube.com/@Arxflix
πŸ‘‰ Twitter: https://x.com/arxflix
πŸ‘‰ LMNT (Partner): https://lmnt.com/

By Arxflix

Great work! I look forward to trying this out.

I noticed that Figure 4 says "Slover" instead of "solver" in one place. I figured you'd want to know.

I would also like to point out that the negative prompt example isn't convincing.
The negative prompt says "black dog", but neither image contains a black dog. The dog generated by LCM has black sunglasses, which the negative prompt may have prevented in the PCM image.
If you intend to revise the paper, as you suggested in another comment, I would recommend finding a better example of negative prompting not working with LCM.
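For context on why this ties back to the CFG limitation discussed above: negative prompts are typically implemented as plain classifier-free guidance with the negative text substituted for the empty unconditional prompt, so a model restricted to very low CFG scales loses most of its negative-prompting ability along the way. A hedged sketch; all names here are illustrative:

```python
def negative_prompt_pred(unet, latents, t, text_encoder,
                         prompt, negative_prompt, scale):
    # Negative prompting is usually CFG with the negative text in
    # place of the empty unconditional prompt; guidance then steers
    # away from the negative concept. If `scale` must stay near 1,
    # the steering term is too weak to suppress anything.
    eps_neg = unet(latents, t, text_encoder(negative_prompt))
    eps_pos = unet(latents, t, text_encoder(prompt))
    return eps_neg + scale * (eps_pos - eps_neg)
```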


Thanks for the advice! We will take it into account when revising the paper.


Models citing this paper: 1

Datasets citing this paper: 0


Spaces citing this paper: 3

Collections including this paper: 11