deepanwayx committed
Commit 1c43526 • 1 Parent(s): 0a857f3

upload files
README.md CHANGED
@@ -1,3 +1,112 @@
  ---
  license: apache-2.0
+ datasets:
+ - amaai-lab/MusicBench
+ tags:
+ - music
  ---
+
+ <div align="center">
+
+ # Mustango: Toward Controllable Text-to-Music Generation
+
+ [Demo](https://replicate.com/declare-lab/mustango) | [Model](https://huggingface.co/declare-lab/mustango) | [Website and Examples](https://amaai-lab.github.io/mustango/) | [Paper](https://arxiv.org/abs/2311.08355) | [Dataset](https://huggingface.co/datasets/amaai-lab/MusicBench)
+
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/declare-lab/mustango)
+ </div>
+
+ Meet Mustango, an exciting addition to the vibrant landscape of multimodal large language models designed for controllable music generation. Mustango combines a latent diffusion model (LDM), Flan-T5, and musical features to achieve this control.
+
+ 🔥 Live demo available on [Replicate](https://replicate.com/declare-lab/mustango) and [HuggingFace](https://huggingface.co/spaces/declare-lab/mustango).
+
+ <div align="center">
+ <img src="mustango.jpg" width="500"/>
+ </div>
+
+ ## Quickstart Guide
+
+ Generate music from a text prompt:
+
+ ```python
+ import IPython
+ import soundfile as sf
+ from mustango import Mustango
+
+ model = Mustango("declare-lab/mustango")
+
+ prompt = "This is a new age piece. There is a flute playing the main melody with a lot of staccato notes. The rhythmic background consists of a medium tempo electronic drum beat with percussive elements all over the spectrum. There is a playful atmosphere to the piece. This piece can be used in the soundtrack of a children's TV show or an advertisement jingle."
+
+ music = model.generate(prompt)
+ sf.write(f"{prompt}.wav", music, samplerate=16000)
+ IPython.display.Audio(data=music, rate=16000)
+ ```
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/AMAAI-Lab/mustango
+ cd mustango
+ pip install -r requirements.txt
+ cd diffusers
+ pip install -e .
+ ```
+
+ ## Datasets
+
+ The [MusicBench](https://huggingface.co/datasets/amaai-lab/MusicBench) dataset contains 52k music fragments, each paired with a rich music-specific text caption.
+
+ ## Subjective Evaluation by Expert Listeners
+
+ | **Model** | **Dataset** | **Pre-trained** | **Overall Match** ↑ | **Chord Match** ↑ | **Tempo Match** ↑ | **Audio Quality** ↑ | **Musicality** ↑ | **Rhythmic Presence and Stability** ↑ | **Harmony and Consonance** ↑ |
+ |-----------|-------------|:-----------------:|:-----------:|:-----------:|:-----------:|:----------:|:----------:|:----------:|:----------:|
+ | Tango | MusicCaps | ✓ | 4.35 | 2.75 | 3.88 | 3.35 | 2.83 | 3.95 | 3.84 |
+ | Tango | MusicBench | ✓ | 4.91 | 3.61 | 3.86 | 3.88 | 3.54 | 4.01 | 4.34 |
+ | Mustango | MusicBench | ✓ | 5.49 | 5.76 | 4.98 | 4.30 | 4.28 | 4.65 | 5.18 |
+ | Mustango | MusicBench | ✗ | 5.75 | 6.06 | 5.11 | 4.80 | 4.80 | 4.75 | 5.59 |
+
+ ## Training
+
+ We use the `accelerate` package from Hugging Face for multi-GPU training. Run `accelerate config` from the terminal and set up your run configuration by answering the questions asked.
+
+ You can now train **Mustango** on the MusicBench dataset using:
+
+ ```bash
+ accelerate launch train.py \
+ --text_encoder_name="google/flan-t5-large" \
+ --scheduler_name="stabilityai/stable-diffusion-2-1" \
+ --unet_model_config="configs/diffusion_model_config_munet.json" \
+ --model_type Mustango --freeze_text_encoder --uncondition_all --uncondition_single \
+ --drop_sentences --random_pick_text_column --snr_gamma 5
+ ```
+
+ The `--model_type` flag lets you train either Mustango or Tango with the same code. Note, however, that you also need to point `--unet_model_config` to the matching config: `diffusion_model_config_munet.json` for Mustango; `diffusion_model_config.json` for Tango.
+
+ The arguments `--uncondition_all`, `--uncondition_single`, and `--drop_sentences` control the dropout functions described in Section 5.2 of our paper. The `--random_pick_text_column` argument randomly picks between two input text prompts: for MusicBench, we pick between ChatGPT-rephrased captions and the original enhanced MusicCaps prompts, as depicted in Figure 1 of our paper.
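The interplay of these flags can be illustrated with a minimal sketch. This is illustrative only: `pick_and_drop` is a hypothetical helper, not the repository's implementation, and the probabilities are made up.

```python
import random

def pick_and_drop(caption_a, caption_b, p_uncondition=0.1,
                  p_drop_sentence=0.1, rng=None):
    """Illustrative sketch of caption picking and dropout
    (hypothetical helper, not the actual Mustango training code)."""
    rng = rng or random.Random()
    # Analogue of --random_pick_text_column: pick one of the two captions.
    text = caption_a if rng.random() < 0.5 else caption_b
    # Analogue of --uncondition_all: occasionally drop the whole caption.
    if rng.random() < p_uncondition:
        return ""
    # Analogue of --drop_sentences: drop individual sentences at random.
    sentences = [s for s in text.split(". ") if s]
    kept = [s for s in sentences if rng.random() >= p_drop_sentence]
    return ". ".join(kept) if kept else text

out = pick_and_drop("A calm piano piece. The tempo is slow.",
                    "Slow, calm solo piano.", rng=random.Random(0))
```

Dropping conditioning text like this during training is what lets the model support classifier-free-guidance-style control at inference time.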
+
+ We recommend training from scratch on MusicBench for at least 40 epochs.
+
+ ## Model Zoo
+
+ We have released the following models:
+
+ Mustango Pretrained: https://huggingface.co/declare-lab/mustango-pretrained
+
+ Mustango: https://huggingface.co/declare-lab/mustango
+
+ ## Citation
+
+ Please consider citing the following article if you found our work useful:
+
+ ```
+ @misc{melechovsky2023mustango,
+   title={Mustango: Toward Controllable Text-to-Music Generation},
+   author={Jan Melechovsky and Zixun Guo and Deepanway Ghosal and Navonil Majumder and Dorien Herremans and Soujanya Poria},
+   year={2023},
+   eprint={2311.08355},
+   archivePrefix={arXiv},
+ }
+ ```
beats/microsoft-deberta-v3-large.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1498d38c9cff8fb278b0d2e7a40a7b01c90e78caf9edba137258a35987849e9
+ size 1744651736
chords/flan-t5-large.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d73492576022d13ac89e2c20c2e08c87a87f0edc164d5c883c3c7e32024a8e8
+ size 3132793669
config.json ADDED
@@ -0,0 +1 @@
+ {"text_encoder_name": "google/flan-t5-large", "scheduler_name": "stabilityai/stable-diffusion-2-1", "unet_model_name": null, "unet_model_config_path": "configs/music_diffusion_model_config.json", "snr_gamma": 5.0}
configs/main_config.json ADDED
@@ -0,0 +1 @@
+ {"text_encoder_name": "google/flan-t5-large", "scheduler_name": "stabilityai/stable-diffusion-2-1", "unet_model_name": null, "unet_model_config_path": "configs/music_diffusion_model_config.json", "snr_gamma": 5.0}
configs/music_diffusion_model_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+   "_class_name": "UNet2DConditionModel",
+   "_diffusers_version": "0.10.0.dev0",
+   "act_fn": "silu",
+   "attention_head_dim": [
+     5,
+     10,
+     20,
+     20
+   ],
+   "block_out_channels": [
+     320,
+     640,
+     1280,
+     1280
+   ],
+   "center_input_sample": false,
+   "cross_attention_dim": 1024,
+   "down_block_types": [
+     "CrossAttnDownBlock2DMusic",
+     "CrossAttnDownBlock2DMusic",
+     "CrossAttnDownBlock2DMusic",
+     "DownBlock2D"
+   ],
+   "downsample_padding": 1,
+   "dual_cross_attention": false,
+   "flip_sin_to_cos": true,
+   "freq_shift": 0,
+   "in_channels": 8,
+   "layers_per_block": 2,
+   "mid_block_type": "UNetMidBlock2DCrossAttnMusic",
+   "mid_block_scale_factor": 1,
+   "norm_eps": 1e-05,
+   "norm_num_groups": 32,
+   "num_class_embeds": null,
+   "only_cross_attention": false,
+   "out_channels": 8,
+   "sample_size": [32, 2],
+   "up_block_types": [
+     "UpBlock2D",
+     "CrossAttnUpBlock2DMusic",
+     "CrossAttnUpBlock2DMusic",
+     "CrossAttnUpBlock2DMusic"
+   ],
+   "use_linear_projection": true,
+   "upcast_attention": true
+ }
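A quick consistency check on the UNet config: each resolution level pairs a channel count with an attention head count, giving a constant 64-channel per-head width, and the up path mirrors the down path in reverse. A self-contained sketch (the relevant config fields are reproduced inline):

```python
import json

# Fields from configs/music_diffusion_model_config.json, inlined
# so the check is self-contained.
cfg = json.loads("""{
  "attention_head_dim": [5, 10, 20, 20],
  "block_out_channels": [320, 640, 1280, 1280],
  "down_block_types": ["CrossAttnDownBlock2DMusic", "CrossAttnDownBlock2DMusic",
                       "CrossAttnDownBlock2DMusic", "DownBlock2D"],
  "up_block_types": ["UpBlock2D", "CrossAttnUpBlock2DMusic",
                     "CrossAttnUpBlock2DMusic", "CrossAttnUpBlock2DMusic"]
}""")

# One attention-head entry per resolution level.
assert len(cfg["attention_head_dim"]) == len(cfg["block_out_channels"])

# Per-head width at each level: 320/5, 640/10, 1280/20, 1280/20 -> 64 each.
widths = [c // h for c, h in zip(cfg["block_out_channels"],
                                 cfg["attention_head_dim"])]

# The up path is the down path reversed, with Down blocks swapped for Up blocks.
mirrored = [t.replace("Down", "Up") for t in reversed(cfg["down_block_types"])]
```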
configs/stft_config.json ADDED
@@ -0,0 +1 @@
+ {"filter_length": 1024, "hop_length": 160, "win_length": 1024, "n_mel_channels": 64, "sampling_rate": 16000, "mel_fmin": 0, "mel_fmax": 8000}
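A few quantities implied by this STFT configuration, derived by simple arithmetic on the values above:

```python
# Values from configs/stft_config.json.
sampling_rate = 16000
hop_length = 160
win_length = 1024

# One mel frame per hop: 16000 / 160 = 100 frames per second.
frames_per_second = sampling_rate / hop_length

# A 10-second clip therefore spans about 1000 mel frames.
frames_10s = (10 * sampling_rate) // hop_length

# Each analysis window covers 1024 samples = 64 ms of audio.
window_ms = 1000 * win_length / sampling_rate
```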
configs/vae_config.json ADDED
@@ -0,0 +1 @@
+ {"image_key": "fbank", "subband": 1, "embed_dim": 8, "time_shuffle": 1, "ddconfig": {"double_z": true, "z_channels": 8, "resolution": 256, "downsample_time": false, "in_channels": 1, "out_ch": 1, "ch": 128, "ch_mult": [1, 2, 4], "num_res_blocks": 2, "attn_resolutions": [], "dropout": 0.0}, "scale_factor": 0.9227914214134216}
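Assuming the common VQGAN/AutoencoderKL convention that each `ch_mult` level beyond the first halves the spatial resolution, the VAE's compression of the 64-bin mel spectrogram works out as below. The convention is an assumption on our part; the numbers themselves come from the configs in this repo (note that the VAE's `embed_dim` of 8 matches the UNet's `in_channels`/`out_channels` of 8).

```python
# Values from configs/vae_config.json and configs/stft_config.json.
ch_mult = [1, 2, 4]
embed_dim = 8
n_mel_channels = 64

# Assumed convention: one downsampling stage per ch_mult level after the first.
downsamples = len(ch_mult) - 1        # 2 stages
spatial_factor = 2 ** downsamples     # each stage halves H and W -> factor 4

# 64 mel bins compress to 16 latent bins, with 8 latent channels (embed_dim),
# which is what the diffusion UNet consumes.
latent_mel_bins = n_mel_channels // spatial_factor
```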
ldm/pytorch_model_ldm.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7cad0ff3dd6b346898b12b2e3627e8262684646ff82f59d0a69891e42fcaa66
+ size 7051656406
stft/pytorch_model_stft.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8674a4cc9755fafa48350bfa3412cf9b9a0d357d18289dbfd86f0fb34e1ca4db
+ size 8537803
vae/pytorch_model_vae.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d49e1881f38bd4f4fcaaf1c56686c02fb15f75e80dec5f773ae235b2cf1b61b
+ size 442713669