---
size_categories:
- 1M<n<10M
viewer: false
license: apache-2.0
---
# Tiny Cosmos-Tokenized Imagenet
In a similar fashion to Simo's Imagenet.int8, here we provide a Cosmos-tokenized ImageNet for rapid prototyping. Notably, the discrete tokenizer compresses the entire ImageNet into a shocking 2.45 GB of data!
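A quick back-of-the-envelope check, assuming the discrete tokens are stored as 16-bit indices on a 32×32 grid per image (consistent with the code below): 1,281,167 images × 32 × 32 tokens × 2 bytes ≈ 2.6 × 10⁹ bytes ≈ 2.44 GiB, which lines up with the reported size.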
## How to use
This time, we dumped it all in a simple PyTorch safetensors file.
```python
import torch
from safetensors.torch import safe_open

# for the continuous tokenizer
with safe_open("tokenize_dataset/imagenet_ci8x8.safetensors", framework="pt") as f:
    data = f.get_tensor("latents") * 16.0 / 255.0
    labels = f.get_tensor("labels")

print(data.shape)    # 1281167, 16, 32, 32
print(labels.shape)  # 1281167
```
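Since the whole tensor fits in memory, a plain `TensorDataset` is enough for prototyping. A minimal sketch (the batch size and shuffling here are just placeholders, not part of the dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from safetensors.torch import safe_open

with safe_open("tokenize_dataset/imagenet_ci8x8.safetensors", framework="pt") as f:
    latents = f.get_tensor("latents")  # kept as int8 to stay compact in RAM
    labels = f.get_tensor("labels")

loader = DataLoader(TensorDataset(latents, labels), batch_size=256, shuffle=True)

for x, y in loader:
    # rescale per batch instead of up front, so the full dataset stays int8
    x = x.to(torch.bfloat16) * 16.0 / 255.0
    ...  # your training step here
```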
To decode, you will need to install the Cosmos Tokenizer:
```bash
git clone https://github.com/NVIDIA/Cosmos-Tokenizer.git
cd Cosmos-Tokenizer
apt-get install -y ffmpeg
pip install -e .
```
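The decoder weights are not bundled with the repo. A minimal sketch for fetching them, assuming the checkpoints live under `nvidia/Cosmos-Tokenizer-CI8x8` and `nvidia/Cosmos-Tokenizer-DI8x8` on the Hugging Face Hub (as described in the Cosmos-Tokenizer README):

```python
from huggingface_hub import snapshot_download

for model_name in ["Cosmos-Tokenizer-CI8x8", "Cosmos-Tokenizer-DI8x8"]:
    # places decoder.jit (among other files) under pretrained_ckpts/<model_name>/
    snapshot_download(
        repo_id=f"nvidia/{model_name}",
        local_dir=f"pretrained_ckpts/{model_name}",
    )
```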
Then decode using either `Cosmos-Tokenizer-CI8x8` (continuous) or `Cosmos-Tokenizer-DI8x8` (discrete).
## IMPORTANT
- For continuous tokens, we quantized & normalized the latents to int8. You therefore need to multiply by 16.0 / 255.0 to recover the original scale.
- For discrete tokens, the saved format is int16. To use them properly, just reinterpret as uint16. Example below:
```python
import torch
from PIL import Image
from safetensors.torch import safe_open
from cosmos_tokenizer.image_lib import ImageTokenizer

device = "cuda"
is_continuous = True  # set to False for the discrete tokenizer

model_name = "Cosmos-Tokenizer-CI8x8" if is_continuous else "Cosmos-Tokenizer-DI8x8"
decoder = ImageTokenizer(
    checkpoint_dec=f"pretrained_ckpts/{model_name}/decoder.jit"
).to(device)

with safe_open("imagenet_ci8x8.safetensors", framework="pt") as f:
    if is_continuous:
        # int8 latents -> bfloat16, rescaled back to the original range
        data = f.get_tensor("latents").to(torch.bfloat16) * 16.0 / 255.0
    else:
        # int16 on disk -> reinterpret as uint16 token indices
        data = f.get_tensor("indices").to(torch.uint16)
    labels = f.get_tensor("labels")

data = data[:1]  # decode a single example
if is_continuous:
    data = data.reshape(1, 16, 32, 32).to(device)
else:
    # For the discrete tokenizer, reshape to [1, 32, 32]
    data = data.reshape(1, 32, 32).to(device).long()

# Decode the image
with torch.no_grad():
    reconstructed = decoder.decode(data)

# Map from [-1, 1] to [0, 255] and convert to a PIL image
img = ((reconstructed[0].cpu().float() + 1) * 127.5).clamp(0, 255).to(torch.uint8)
img = img.permute(1, 2, 0).numpy()
img = Image.fromarray(img)
```
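If everything is wired up correctly, `img` is a standard PIL image and can be saved for inspection (the filename here is just an example):

```python
img.save("reconstruction.png")
```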