Visualizing the latent space

#1
by Allayte - opened

Is there a way to visualize the latent space between the encoder and decoder? I see there are reconstruction and interpolation functionalities, but I couldn't seem to find a way to do so (I'm using a beta-VAE as well).

Hey!
Yes, you can simply do the following:

>>> model.encoder(your_data).embedding

E.g.

>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub("clementchadebec/reproduced_beta_tc_vae", allow_pickle=True)
Downloading config file ...
Downloading BetaTCVAE files for rebuilding...
Successfully downloaded BetaTCVAE model!
>>> import torch
>>> x = torch.randn(3, 1, 64, 64)
>>> emb = model.encoder(x).embedding
>>> emb.shape
torch.Size([3, 10])
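If you want an overview of the whole latent space rather than individual codes, one option is to project the embeddings down to 2-D and scatter-plot them. A minimal sketch, with a random tensor standing in for `model.encoder(x).embedding` and a hand-rolled PCA via SVD (no sklearn needed):

```python
import torch
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this if you want an interactive window
import matplotlib.pyplot as plt

# Stand-in for model.encoder(your_data).embedding — here 100 random 10-D latents
emb = torch.randn(100, 10)

# PCA via SVD on the mean-centered embeddings
centered = emb - emb.mean(dim=0)
_, _, v = torch.linalg.svd(centered, full_matrices=False)
proj = centered @ v[:2].T  # first two principal components, shape (100, 2)

plt.scatter(proj[:, 0], proj[:, 1], s=10)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Latent space (PCA projection)")
plt.savefig("latent_scatter.png")
```

Coloring the points by class label (if you have one) often makes the structure of the latent space much easier to read.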

Awesome, thank you! I've added the ability to view it, like the previous examples here:

emb = trained_model.encoder(X_test.float()[:25].to(device)).embedding.detach().cpu()
# emb.shape = torch.Size([25, 16])

embeddings = emb.reshape(emb.shape[0],1,4,4)
# embeddings.shape = torch.Size([25, 1, 4, 4])

import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(5, 5))

with torch.no_grad():
  for i in range(5):
      for j in range(5):
          axes[i][j].imshow(embeddings[i*5 + j].cpu().squeeze(0), cmap='gray')
          axes[i][j].axis('off')
  plt.tight_layout(pad=0.2)
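You can also walk between two latent codes and decode the path to see how reconstructions morph. A small sketch of the interpolation step; the decode call is left as a comment since it needs the trained model (in pythae, I believe the decoder output exposes a `reconstruction` attribute):

```python
import torch

def interpolate(z1, z2, steps=8):
    """Linearly interpolate between two latent codes (endpoints included)."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    return (1 - alphas) * z1 + alphas * z2  # shape: (steps, latent_dim)

# Two latent codes, e.g. rows of the `emb` tensor above
z1, z2 = torch.randn(16), torch.randn(16)
path = interpolate(z1, z2)
# Decode each point back to image space, e.g. (pythae):
# imgs = trained_model.decoder(path).reconstruction
```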
Allayte changed discussion status to closed

Thank you so much for this!
