arxiv:2410.14672

BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities

Published on Oct 18, 2024
· Submitted by haoosz on Oct 21, 2024

Abstract

We introduce BiGR, a novel conditional image generation model using compact binary latent codes for generative training, focusing on enhancing both generation and representation capabilities. BiGR is the first conditional generative model that unifies generation and discrimination within the same framework. BiGR features a binary tokenizer, a masked modeling mechanism, and a binary transcoder for binary code prediction. Additionally, we introduce a novel entropy-ordered sampling method to enable efficient image generation. Extensive experiments validate BiGR's superior performance in generation quality, as measured by FID-50k, and representation capabilities, as evidenced by linear-probe accuracy. Moreover, BiGR showcases zero-shot generalization across various vision tasks, enabling applications such as image inpainting, outpainting, editing, interpolation, and enrichment, without the need for structural modifications. Our findings suggest that BiGR unifies generative and discriminative tasks effectively, paving the way for further advancements in the field.
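The abstract mentions an entropy-ordered sampling method for efficient generation but does not spell it out on this page. As a purely illustrative sketch (not the paper's actual algorithm): assuming the model predicts a Bernoulli probability for each masked binary-code position, one plausible reading is to commit the lowest-entropy (most confident) bits first across a fixed number of steps. The function names `binary_entropy` and `entropy_ordered_sample`, and the simplification of keeping the probabilities fixed, are assumptions made for this sketch.

```python
import numpy as np

def binary_entropy(p, eps=1e-9):
    """Shannon entropy (in bits) of a Bernoulli(p) variable, elementwise."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_ordered_sample(probs, steps=4, rng=None):
    """Commit binary code bits greedily, lowest-entropy (most confident) first.

    probs: (n,) predicted Bernoulli probabilities for each masked bit.
    Returns an (n,) array of 0/1 codes. A real masked-modeling sampler
    would re-predict probabilities after each step; they are fixed here
    for brevity.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = probs.shape[0]
    codes = np.full(n, -1, dtype=int)          # -1 marks "still masked"
    order = np.argsort(binary_entropy(probs))  # most confident bits first
    per_step = int(np.ceil(n / steps))
    for s in range(steps):
        idx = order[s * per_step : (s + 1) * per_step]
        codes[idx] = (rng.random(idx.shape[0]) < probs[idx]).astype(int)
    return codes

probs = np.array([0.99, 0.5, 0.02, 0.7])
codes = entropy_ordered_sample(probs, steps=2, rng=np.random.default_rng(0))
```

In the actual model, the transformer would presumably be re-run after each unmasking step so that later, higher-entropy bits are predicted conditioned on the bits already committed; the single-pass version above only conveys the ordering idea.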

Community


Can conditional generative models achieve strong visual representations, inherently?

In this paper, we introduce BiGR, a conditional image generation model that unifies generative and discriminative tasks. For more details, see:

Paper: https://arxiv.org/abs/2410.14672
Project: http://haoosz.github.io/BiGR/
Code: https://github.com/haoosz/BiGR



Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 4