commaVQ - GPT2M

A GPT2M model trained on a larger version of the commaVQ dataset.

This model is able to generate driving video unconditionally.
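The card ships no usage code, so here is a minimal sketch of unconditional token sampling. It assumes (not confirmed by this card) that the checkpoint loads as a standard transformers `GPT2LMHeadModel`, that generation starts from a BOS token taken from the model config, and that the sampled ids are commaVQ VQ codes which must be decoded back into frames with the separate commaVQ VQ-VAE decoder.

```python
# Minimal sketch: sample VQ tokens unconditionally from commaai/commavq-gpt2m.
# Assumptions: the weights load with GPT2LMHeadModel; the BOS id comes from the
# config (falling back to the last vocab id); frames are rendered later with
# the commaVQ decoder, which is not part of this checkpoint.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("commaai/commavq-gpt2m")
model.eval()

# Seed the sequence with a BOS token.
bos = model.config.bos_token_id
if bos is None:
    bos = model.config.vocab_size - 1  # assumed fallback, not documented here
input_ids = torch.tensor([[bos]])

# Sample up to the model's context length; each driving frame corresponds to a
# fixed-size block of VQ tokens in the commaVQ tokenization.
with torch.no_grad():
    tokens = model.generate(
        input_ids,
        max_length=model.config.n_positions,
        do_sample=True,
        top_k=50,
        temperature=1.0,
    )

# `tokens` holds VQ code indices, not pixels; decoding them to video requires
# the commaVQ VQ-VAE decoder from the commaVQ repository.
print(tokens.shape)
```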

Below is an example of 5 seconds of imagined video using GPT2M.
