---
license: mit
datasets:
- imagenet-1k
language:
- en
- zh
---

# Model Card for VAR (Visual AutoRegressive) Transformers πŸ”₯

[![arXiv](https://img.shields.io/badge/arXiv%20paper-2404.02905-b31b1b.svg)](https://arxiv.org/abs/2404.02905) [![demo platform](https://img.shields.io/badge/Play%20with%20VAR%21-VAR%20demo%20platform-lightblue)](https://var.vision/demo)

VAR is a new visual generation framework that makes GPT-style models surpass diffusion models **for the first time**πŸš€, and exhibits clear power-law scaling lawsπŸ“ˆ like large language models (LLMs).

VAR redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" (or "next-resolution prediction"), diverging from the standard raster-scan "next-token prediction".
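To make the difference concrete, here is a minimal sketch of what "next-scale prediction" means for the autoregressive sequence. The scale schedule and token-map sizes below are illustrative assumptions, not VAR's actual configuration; see the paper and the GitHub repo for the real settings.

```python
# Illustrative sketch of next-scale prediction (hypothetical scale schedule;
# see the VAR paper/repo for the actual configuration).

# Example side lengths of the token maps, from coarsest to finest.
scales = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]

def tokens_per_step(scales):
    """Tokens emitted at each autoregressive step: a whole k x k map at once,
    conditioned on all coarser maps, instead of one token in raster order."""
    return [k * k for k in scales]

def total_tokens(scales):
    """Total tokens across the full coarse-to-fine sequence."""
    return sum(tokens_per_step(scales))

# Next-scale prediction: one autoregressive step per scale.
steps_var = len(scales)
# Raster-scan next-token prediction over the final 16x16 map: one step per token.
steps_raster = scales[-1] * scales[-1]

print(steps_var, steps_raster, total_tokens(scales))
```

The point of the sketch: under next-scale prediction the number of sequential decoding steps grows with the number of scales (here 10), not with the number of tokens in the finest map (here 256), which is one source of VAR's speedup over raster-scan autoregression.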

This repo hosts VAR's checkpoints. For more details and tutorials, see https://github.com/FoundationVision/VAR.