sanchit-gandhi HF staff committed on
Commit dc41f51
1 Parent(s): 7235210
Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -7,8 +7,8 @@ license: cc-by-nc-4.0
 # MusicGen - Small - 300M
 
 MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
- It is is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
- Unlike existing methods, like MusicLM, MusicGen doesn't not require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
+ It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
+ Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
 By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
 
 MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
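The edited lines describe MusicGen's codebook delay pattern: 4 EnCodec codebooks at 50 Hz, interleaved with a small per-codebook offset so that one token from every codebook can be predicted at each auto-regressive step, giving roughly 50 steps per second of audio. The sketch below is a minimal, illustrative reading of that description in plain Python; `apply_delay_pattern`, `PAD`, and the dummy token ids are names invented here for illustration, not the authors' implementation (the reference code lives in the original audiocraft codebase).

```python
# Illustrative sketch of the codebook delay pattern described above:
# with 4 codebooks at 50 Hz, shifting codebook k by k steps lets all
# codebooks be predicted in parallel at each auto-regressive step.

NUM_CODEBOOKS = 4   # EnCodec codebooks per audio frame
FRAME_RATE = 50     # frames (and auto-regressive steps) per second
PAD = None          # placeholder where a codebook has no token yet


def apply_delay_pattern(frames):
    """Interleave codebook tokens with a one-step delay per codebook.

    `frames` is a list of frames, each a list of NUM_CODEBOOKS token ids.
    Returns a list of decoding steps; at step t the model emits codebook
    k's token for frame t - k, so the codebooks stagger like a staircase.
    """
    num_steps = len(frames) + NUM_CODEBOOKS - 1
    steps = []
    for t in range(num_steps):
        step = []
        for k in range(NUM_CODEBOOKS):
            frame_idx = t - k
            step.append(frames[frame_idx][k] if 0 <= frame_idx < len(frames) else PAD)
        steps.append(step)
    return steps


if __name__ == "__main__":
    # One second of audio = 50 frames of 4 dummy codebook token ids.
    one_second = [[100 * k + t for k in range(NUM_CODEBOOKS)] for t in range(FRAME_RATE)]
    delayed = apply_delay_pattern(one_second)
    print(len(delayed))                 # 53 steps: 50 frames + 3 steps to flush the delay
    print(delayed[0], delayed[1])       # [0, None, None, None] [1, 100, None, None]
```

Under this one-step-per-codebook offset, generating a second of 50 Hz frames costs about 50 auto-regressive steps (plus a few trailing steps to flush the delay), which is the "50 auto-regressive steps per second of audio" claim in the model card text.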