This model is the Taming Transformers VQGAN tokenizer with a 10-bit vocabulary (1,024 codebook entries), converted into the format used by the MaskBit codebase. It uses a downsampling factor of 16 and is trained on ImageNet at a resolution of 256×256.
You can find more details about VQGAN in the original repository or the paper. All credits for this model belong to Patrick Esser, Robin Rombach, and Björn Ommer.
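As a quick orientation, the sketch below works out what this configuration implies for the token grid: with a downsampling factor of 16, a 256×256 image maps to a 16×16 grid of indices drawn from a 1,024-entry (10-bit) codebook. The variable names are illustrative only and are not taken from the MaskBit codebase.

```python
# Sketch of the shape arithmetic implied by this tokenizer's configuration.
# Variable names are hypothetical; this does not load or call the actual model.
import torch

image_size = 256          # input resolution (256x256 RGB)
downsample_factor = 16    # spatial downsampling of the encoder
codebook_bits = 10        # 10-bit vocabulary -> 2**10 = 1024 codebook entries

grid = image_size // downsample_factor   # 16x16 token grid
num_tokens = grid * grid                 # 256 tokens per image
vocab_size = 2 ** codebook_bits          # 1024 codebook entries

# A batch of token indices produced by the encoder would therefore look like:
tokens = torch.randint(0, vocab_size, (1, grid, grid))
print(tokens.shape, vocab_size, num_tokens)  # torch.Size([1, 16, 16]) 1024 256
```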
Dataset used to train markweber/taming_vqgan
- ILSVRC/imagenet-1k
Evaluation results (self-reported, on ILSVRC/imagenet-1k)
- rFID: 7.960
- Inception Score: 115.900
- LPIPS: 0.306
- PSNR: 20.200
- SSIM: 0.520
- Codebook Usage: 0.445
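For readers who want to compute reconstruction metrics of this kind themselves, the sketch below shows one generic way to obtain PSNR, SSIM, LPIPS, and rFID with the torchmetrics library (installed via `pip install torchmetrics[image]`). This is a minimal, assumed setup, not the exact evaluation pipeline behind the self-reported numbers above; `originals` and `reconstructions` are placeholder tensors standing in for ImageNet validation images and their VQGAN reconstructions.

```python
# Minimal sketch: reconstruction metrics with torchmetrics (assumed setup,
# not the pipeline used for the self-reported numbers above).
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
from torchmetrics.image.fid import FrechetInceptionDistance

# Placeholder batches; in practice these would be validation images and their
# reconstructions, scaled to [0, 1], with shape (N, 3, 256, 256).
originals = torch.rand(8, 3, 256, 256)
reconstructions = torch.rand(8, 3, 256, 256)

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg")
fid = FrechetInceptionDistance(feature=2048, normalize=True)

print("PSNR:", psnr(reconstructions, originals).item())
print("SSIM:", ssim(reconstructions, originals).item())
# LPIPS expects inputs in [-1, 1] by default.
print("LPIPS:", lpips(reconstructions * 2 - 1, originals * 2 - 1).item())

# rFID compares reconstructions against the real images; a meaningful value
# requires the full validation set, not a tiny batch like this one.
fid.update(originals, real=True)
fid.update(reconstructions, real=False)
print("rFID:", fid.compute().item())
```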