arxiv:2406.10210

Make It Count: Text-to-Image Generation with an Accurate Number of Objects

Published on Jun 14
· Submitted by Royir on Jun 17
#2 Paper of the day

Abstract

Despite the unprecedented success of text-to-image diffusion models, controlling the number of depicted objects using text is surprisingly hard. This is important for various applications, from technical documents to children's books to illustrating cooking recipes. Generating a correct object count is fundamentally challenging because the generative model needs to keep a sense of separate identity for every instance of the object, even if several objects look identical or overlap, and then implicitly carry out a global computation during generation. It is still unknown whether such representations exist. To address count-correct generation, we first identify features within the diffusion model that can carry the object identity information. We then use them to separate and count instances of objects during the denoising process and detect over-generation and under-generation. We fix the latter by training a model that predicts both the shape and location of a missing object, based on the layout of existing ones, and show how it can be used to guide denoising toward the correct object count. Our approach, CountGen, does not depend on an external source to determine object layout, but rather uses the prior from the diffusion model itself, creating prompt-dependent and seed-dependent layouts. Evaluated on two benchmark datasets, we find that CountGen strongly outperforms the count accuracy of existing baselines.
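For readers who want a concrete picture of the detect-then-correct loop the abstract outlines, here is a minimal, hypothetical sketch: cluster a per-pixel "objectness" signal (the paper derives this from diffusion-model features) into instance masks, count them mid-denoising, and on under-generation propose a mask for a missing object that could guide later steps. All names here (`count_instances`, `propose_missing_instance`, `correction_step`) are illustrative assumptions, not the CountGen release; in particular, the paper's learned layout predictor is replaced by a simple distance-transform heuristic.

```python
# Hypothetical sketch of the counting-and-correction idea described in the abstract.
# Function names and the layout heuristic are illustrative, not from the CountGen code.
import numpy as np
from scipy import ndimage


def count_instances(object_saliency: np.ndarray, threshold: float = 0.5):
    """Binarize a per-pixel objectness map (e.g., derived from U-Net features)
    and split it into connected components, each treated as one instance."""
    binary = object_saliency > threshold
    labels, num_instances = ndimage.label(binary)
    masks = [labels == i for i in range(1, num_instances + 1)]
    return masks, num_instances


def propose_missing_instance(masks, shape):
    """Toy stand-in for the learned layout model: place a new instance mask
    at the pixel farthest from all existing instances."""
    occupied = np.zeros(shape, dtype=bool)
    for m in masks:
        occupied |= m
    # Distance transform: pixels far from every existing object get large values.
    dist = ndimage.distance_transform_edt(~occupied)
    cy, cx = np.unravel_index(np.argmax(dist), shape)
    yy, xx = np.ogrid[: shape[0], : shape[1]]
    radius = max(2, int(0.8 * dist[cy, cx]))
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2


def correction_step(object_saliency, target_count):
    """One detect-and-fix step: count instances and, on under-generation,
    return a mask that could be used to guide subsequent denoising."""
    masks, n = count_instances(object_saliency)
    if n >= target_count:
        return None  # correct count (over-generation would be handled separately)
    return propose_missing_instance(masks, object_saliency.shape)


if __name__ == "__main__":
    # Synthetic saliency map with two blobs, target of three objects.
    H = W = 64
    sal = np.zeros((H, W))
    sal[10:20, 10:20] = 1.0
    sal[40:55, 30:45] = 1.0
    guide = correction_step(sal, target_count=3)
    print("guidance mask covers", int(guide.sum()), "pixels")
```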

Community

Paper author · Paper submitter


Project Page: https://make-it-count-paper.github.io/

[Figure 1 from the paper]

The paper is impressive, but are there any good results for fingers?

I heard a story about another model's makers (I think it was Sber's Kandinsky) adding a loss on finger count, and in turn it led to fingers growing out of unexpected places just to satisfy the number. I wonder if that behavior is present in your approach too.


In the four ninja turtles example, it kind of has the Donatello/Raphael growing out of the other Donatello's back; there are no legs on the ground there. I'd say that does kind of match the "unexpected places to satisfy the number" issue. lol

