EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling

arXiv: https://arxiv.org/abs/2502.09509

EQ-VAE regularizes the latent space of pretrained autoencoders by enforcing equivariance under scaling and rotation transformations.


Model Description

This model is a regularized version of SD-VAE, fine-tuned with EQ-VAE regularization for 5 epochs on OpenImages.
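For intuition, the sketch below illustrates the kind of equivariance constraint described above: applying a spatial transform in latent space should decode to the correspondingly transformed image. This is illustrative only, not the repository's training code; it assumes the diffusers AutoencoderKL interface, uses rotation as the example transform (the card also mentions scaling), and simplifies the penalty to a plain MSE.

    import torch
    import torch.nn.functional as F
    from torchvision.transforms.functional import rotate
    from diffusers import AutoencoderKL

    def equivariance_penalty(vae: AutoencoderKL, x: torch.Tensor, angle: float = 90.0) -> torch.Tensor:
        # Encode the image and apply the transform in latent space.
        z = vae.encode(x).latent_dist.sample()
        decoded = vae.decode(rotate(z, angle)).sample
        # Equivariance: the decoded result should match the same transform applied in image space.
        return F.mse_loss(decoded, rotate(x, angle))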

Model Usage

  1. Loading the Model
    You can load the model from the Hugging Face Hub. Note that AutoencoderKL is provided by the diffusers library, not transformers:
    from diffusers import AutoencoderKL
    model = AutoencoderKL.from_pretrained("zelaki/eq-vae")
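  2. Encoding and Decoding Images
    A minimal round-trip sketch, not taken from the original card: it assumes torchvision and Pillow are installed, uses a placeholder image path ("input.jpg"), and follows the common SD-VAE convention of scaling inputs to [-1, 1]:
    import torch
    from PIL import Image
    from torchvision import transforms
    from diffusers import AutoencoderKL

    model = AutoencoderKL.from_pretrained("zelaki/eq-vae").eval()

    # Preprocess: resize, convert to a tensor, and map pixel values to [-1, 1]
    preprocess = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])
    x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        # Encode to the regularized latent space (sample from the posterior)
        latents = model.encode(x).latent_dist.sample()
        # Decode the latents back to image space
        recon = model.decode(latents).sample

    # Map back to [0, 1] for viewing or saving
    recon = (recon / 2 + 0.5).clamp(0, 1)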
    

Metrics

Reconstruction performance of eq-vae-ema on the ImageNet validation set.

Metric | Score
-------|------
FID    | 0.82
PSNR   | 25.95
LPIPS  | 0.141
SSIM   | 0.72
