---
license: apache-2.0
---
This is a Large (780M-parameter) Transformer trained for 100k steps on arrival-time-encoded music from the Lakh MIDI dataset. This model was trained with anticipation.
## References for the Anticipatory Music Transformer
The full model card is available here.
Code for using this model is available on GitHub.
See the accompanying blog post for additional discussion of this model.