---
license: mit
datasets:
- imagenet-1k
language:
- en
- zh
---

# Model Card for VAR (Visual AutoRegressive) Transformers

[arXiv](https://arxiv.org/abs/2404.02905) | [Demo](https://var.vision/demo)

VAR is a new visual generation framework that makes GPT-style models surpass diffusion models **for the first time**, and exhibits clear power-law scaling laws like large language models (LLMs).
VAR redefines the autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction".
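The practical effect of this reformulation can be sketched numerically. The snippet below is an illustration, not the official VAR API: it contrasts the number of sequential autoregressive steps needed by raster-scan next-token prediction with next-scale prediction for a 16×16 latent token map. The multi-scale schedule used here is an assumption for illustration; see the GitHub repo for the actual configuration.

```python
def raster_scan_steps(h: int, w: int) -> int:
    """Standard next-token AR: one sequential step per token, h*w steps total."""
    return h * w

def next_scale_steps(scales: list[int]) -> int:
    """Next-scale AR: one sequential step per scale; each step emits a full k*k token map."""
    return len(scales)

def total_tokens(scales: list[int]) -> int:
    """Total tokens emitted across all scales of the coarse-to-fine sequence."""
    return sum(k * k for k in scales)

scales = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]  # assumed scale schedule, for illustration only

print(raster_scan_steps(16, 16))  # 256 sequential steps for a 16x16 token map
print(next_scale_steps(scales))   # 10 sequential steps, one per resolution
print(total_tokens(scales))       # 680 tokens emitted in total
```

The point of the comparison: raster-scan decoding runs one step per token, while next-scale decoding runs one step per resolution, predicting all tokens of a scale in parallel.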
This repo hosts VAR's checkpoints. For details and tutorials, see https://github.com/FoundationVision/VAR.