This is a Small (112M-parameter) Transformer model trained for 100k steps on interarrival-time-encoded music from the Lakh MIDI dataset.
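
As a purely illustrative note on the encoding (the actual tokenization, time quantization, and vocabulary are defined by the utilities in the GitHub repository linked below, not by this sketch), "interarrival time" means each event is represented by the time elapsed since the previous event rather than by its absolute onset time:

```python
# Illustrative only: interarrival-time encoding stores the gap between
# consecutive events instead of absolute onset times. The onset values
# here are made up; the real pipeline quantizes times and interleaves
# note/instrument tokens as defined in the repository's encoding code.
onsets = [0.0, 0.5, 0.5, 1.25, 2.0]  # absolute onset times (seconds)
interarrivals = [t - s for s, t in zip([0.0] + onsets[:-1], onsets)]
print(interarrivals)  # [0.0, 0.5, 0.0, 0.75, 0.75]
```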

References for the Anticipatory Music Transformer

The Anticipatory Music Transformer paper is available on arXiv.

The full model card is available here.

Code for using this model is available on GitHub; a minimal loading sketch (with placeholder values) appears after these references.

See the accompanying blog post for additional discussion of this model.
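
As a minimal sketch of loading the checkpoint, assuming it is compatible with the Hugging Face `transformers` causal-LM interface used by the Anticipatory Music Transformer codebase (the repository id, start token, and sampling settings below are placeholders, not values confirmed by this card; see the GitHub repository for the supported sampling and MIDI-conversion utilities):

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder repository id -- substitute the id of this model page.
model = AutoModelForCausalLM.from_pretrained("<repo-id-of-this-model>")

# The model consumes interarrival-time encoded token sequences, not text,
# so there is no text tokenizer; token ids come from the encoding utilities
# in the GitHub repository. The start token below is a placeholder.
prompt = torch.tensor([[0]])
continuation = model.generate(prompt, max_new_tokens=64, do_sample=True, top_p=0.95)
print(continuation[0].tolist())
```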
