This is a Medium (360M-parameter) Transformer trained for 200k steps on arrival-time-encoded music from the Lakh MIDI dataset. This model was trained with anticipation.
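Arrival-time encoding represents each note by its absolute onset time, rather than by a delta from the previous event, and orders events by when they arrive. The sketch below is a minimal illustration of that idea only; the function and field names are hypothetical and do not reflect the library's actual tokenizer or vocabulary.

```python
def arrival_time_encode(notes):
    """Encode notes as (onset, duration, pitch) triples sorted by arrival time.

    `notes` is a list of dicts with 'onset', 'duration', and 'pitch' keys
    (illustrative field names, not the library's API).
    """
    triples = [(n["onset"], n["duration"], n["pitch"]) for n in notes]
    # Sorting by the first tuple element orders events by absolute onset,
    # which is the defining property of an arrival-time encoding.
    return sorted(triples)

notes = [
    {"onset": 1.0, "duration": 0.5, "pitch": 64},  # E4, entered later
    {"onset": 0.0, "duration": 1.0, "pitch": 60},  # C4, sounds first
]
print(arrival_time_encode(notes))
# [(0.0, 1.0, 60), (1.0, 0.5, 64)]
```

Because events are keyed by absolute time, new events (such as anticipated controls) can be interleaved into an existing sequence without re-computing deltas for everything that follows.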
References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on arXiv.
The full model card is available here.
Code for using this model is available on GitHub.
See the accompanying blog post for additional discussion of this model.