---
license: apache-2.0
---

This is a Large (780M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/), the [MetaMIDI dataset](https://github.com/jeffreyjohnens/MetaMIDIDataset), and transcriptions of the [FMA audio dataset](https://github.com/mdeff/fma) and 450k commercial music recordings (transcribed using Google Magenta's [ISMIR 2022](https://ismir2022program.ismir.net/poster_287.html) music transcription model). This model was trained with anticipation.

# References for the Anticipatory Music Transformer

The Anticipatory Music Transformer paper is available on [arXiv](http://arxiv.org/abs/2306.08620).

The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).

Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/); a sketch of basic usage follows at the end of this card.

See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of anticipatory models.
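
As a rough sketch of how this checkpoint can be used with the code linked above (the checkpoint name `stanford-crfm/music-large-800k` and the sampling helpers are assumptions based on that repository, not confirmed by this card):

```python
# Hypothetical usage sketch: assumes this checkpoint is published as
# 'stanford-crfm/music-large-800k' and that the sampling helpers from
# the jthickstun/anticipation repository are installed.
from transformers import AutoModelForCausalLM

from anticipation.sample import generate
from anticipation.convert import events_to_midi

# load the anticipatory music transformer as a standard causal LM
model = AutoModelForCausalLM.from_pretrained('stanford-crfm/music-large-800k')

# sample 10 seconds of music unconditionally, with nucleus sampling
events = generate(model, start_time=0, end_time=10, top_p=0.98)

# convert the sampled arrival-time event tokens to MIDI and save to disk
mid = events_to_midi(events)
mid.save('generated.mid')
```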