camenduru committed on
Commit
4aae1b5
1 Parent(s): d114f92

thanks to Alpha-VLLM ❤

README.md ADDED
---
license: apache-2.0
tags:
- text-to-audio
- music
library_name: transformers
---

# Lumina Text-to-Music

We will soon open-source our implementation and pretrained models in this repository.

- Generation Model: Flag-DiT
- Text Encoder: [FLAN-T5-Large](https://huggingface.co/google/flan-t5-large)
- VAE: Make-An-Audio 2, fine-tuned from [Make-An-Audio](https://github.com/Text-to-Audio/Make-An-Audio)
- Decoder: [Vocoder](https://github.com/NVIDIA/BigVGAN)

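The components above chain into a single text-to-music data flow. The sketch below is purely illustrative: all four stage functions are hypothetical stand-ins (stubs), not this repository's actual API; only the order of the stages comes from the component list.

```python
# Illustrative data flow only -- every function here is a hypothetical stub.
def flan_t5_encode(prompt):       # Text Encoder: FLAN-T5-Large
    return f"emb({prompt})"

def flag_dit_sample(text_emb):    # Generation Model: Flag-DiT (diffusion transformer)
    return f"latent({text_emb})"

def vae_decode(latent):           # VAE: Make-An-Audio 2 (latent -> mel spectrogram)
    return f"mel({latent})"

def vocode(mel):                  # Decoder: BigVGAN vocoder (mel -> waveform)
    return f"wav({mel})"

def generate_music(prompt):
    # text -> embedding -> latent -> mel spectrogram -> waveform
    return vocode(vae_decode(flag_dit_sample(flan_t5_encode(prompt))))

print(generate_music("calm piano"))
```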
## 📰 News

- [2024-06-07] 🚀🚀🚀 We release the initial version of `Lumina-T2Music` for text-to-music generation.

## Installation

Before installation, ensure that you have a working ``nvcc``.

```bash
# The command should work and show a version number (12.1 in our case).
nvcc --version
```

On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough version of ``gcc`` is available.

```bash
# The command should work and show a version of at least 6.0.
# If not, consult distro-specific tutorials to obtain a newer version or build manually.
gcc --version
```

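If you prefer to script these checks rather than eyeball them, a small helper can compare versions numerically. This is a sketch, not part of the repo; `version_ge` is a hypothetical name, and the thresholds (12.1 for CUDA, 6.0 for gcc) come from the comments above.

```shell
# Hypothetical helper: succeeds if version $1 >= version $2 (compares via sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Feed it the real toolchain versions, e.g.:
#   version_ge "$(gcc -dumpversion)" 6.0 || echo "gcc too old"
version_ge 12.1 12.1 && echo "nvcc version OK"
version_ge 9.4.0 6.0 && echo "gcc version OK"
```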
Download the Lumina-T2X repo from GitHub:

```bash
git clone https://github.com/Alpha-VLLM/Lumina-T2X
```

### 1. Create a conda environment and install PyTorch

Note: You may want to adjust the CUDA version [according to your driver version](https://docs.nvidia.com/deploy/cuda-compatibility/#default-to-minor-version).

```bash
conda create -n Lumina_T2X -y
conda activate Lumina_T2X
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
```

### 2. Install dependencies

> [!WARNING]
> The environment dependencies for Lumina-T2Music are different from those for Lumina-T2I. Please install the appropriate environment.

Install the `Lumina-T2Music` dependencies:

```bash
cd .. # If you are in the `lumina_music` directory, execute this line.
pip install -e ".[music]"
```

Alternatively, you can install the environment from `requirements.txt`:

```bash
cd lumina_music # If you are not in the `lumina_music` folder, run this line.
pip install -r requirements.txt
```

### 3. Install ``flash-attn``

```bash
pip install flash-attn --no-build-isolation
```

### 4. Install [nvidia apex](https://github.com/nvidia/apex) (optional)

> [!WARNING]
> While Apex can improve efficiency, it is *not* a must for Lumina-T2X to work.
>
> Note that Lumina-T2X works smoothly with either:
> + Apex not installed at all; OR
> + Apex successfully installed with CUDA and C++ extensions.
>
> However, it will fail when:
> + A Python-only build of Apex is installed.
>
> If the error `No module named 'fused_layer_norm_cuda'` appears, it typically means you are using a Python-only build of Apex. To resolve this, please run `pip uninstall apex`, and Lumina-T2X should then function correctly.

You can clone the repo and install it following the official guidelines (note that we expect a full build, i.e., with CUDA and C++ extensions):

```bash
pip install ninja
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1), which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

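The three Apex situations in the warning above can be distinguished programmatically by probing for the compiled extension module named in the error message. A minimal sketch (the function name is ours, not Apex's):

```python
# Sketch: classify the local Apex installation per the warning above.
# `fused_layer_norm_cuda` is the compiled extension whose absence marks a
# Python-only build.
import importlib.util

def apex_build_status() -> str:
    if importlib.util.find_spec("apex") is None:
        return "not installed (fine)"
    if importlib.util.find_spec("fused_layer_norm_cuda") is None:
        return "python-only build: run `pip uninstall apex`"
    return "full build with CUDA and C++ extensions"

print(apex_build_status())
```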
## Inference

### Preparation

Prepare the pretrained checkpoints.

⭐⭐ (Recommended) You can download our model with `huggingface-cli`:

```bash
huggingface-cli download --resume-download Alpha-VLLM/Lumina-T2Music --local-dir /path/to/ckpt
```

or clone the model with git:

```bash
git clone https://huggingface.co/Alpha-VLLM/Lumina-T2Music
```

### Web Demo

To host a local gradio demo for interactive inference, run the following steps:

1. Update the `AutoencoderKL` checkpoint path

Edit `configs/lumina-text2music.yaml` to set the `AutoencoderKL` checkpoint path. Replace `/path/to/ckpt` with the path where your checkpoints are located (`<real_path>`).

```diff
...
    depth: 16
    max_len: 1000

first_stage_config:
  target: models.autoencoder1d.AutoencoderKL
  params:
    embed_dim: 20
    monitor: val/rec_loss
-   ckpt_path: /path/to/ckpt/maa2/maa2.ckpt
+   ckpt_path: <real_path>/maa2/maa2.ckpt
    ddconfig:
      double_z: true
      in_channels: 80
      out_ch: 80
...
```

2. Set the `Lumina-T2Music` and `Vocoder` checkpoint paths and run the demo

Please replace `/path/to/ckpt` with the actual downloaded path.

```bash
# `/path/to/ckpt` should be a directory containing `music_generation`, `maa2`, and `bigvnat`.

# default
python -u demo_music.py \
    --ckpt "/path/to/ckpt/music_generation" \
    --vocoder_ckpt "/path/to/ckpt/bigvnat" \
    --config_path "configs/lumina-text2music.yaml" \
    --sample_rate 16000
```

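Before launching, it can help to verify that the checkpoint directory has the layout the command expects. A stdlib-only sketch (the helper name is ours; the three subdirectory names come from the comment in the command above):

```python
# Sketch: report which expected checkpoint subdirectories are missing.
from pathlib import Path

EXPECTED_SUBDIRS = ("music_generation", "maa2", "bigvnat")

def missing_subdirs(ckpt_root: str) -> list:
    root = Path(ckpt_root)
    return [name for name in EXPECTED_SUBDIRS if not (root / name).is_dir()]

# Example: list any missing pieces before starting the demo.
problems = missing_subdirs("/path/to/ckpt")
if problems:
    print("missing:", ", ".join(problems))
```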
## Disclaimer

Any organization or individual is prohibited from using any technology mentioned in this work to generate someone's speech without their consent, including but not limited to government leaders, political figures, and celebrities. If you do not comply with this item, you could be in violation of copyright laws.
bigvnat/args.yml ADDED
resblock: '1'
num_gpus: 0
batch_size: 64
num_mels: 80
learning_rate: 0.0001
adam_b1: 0.8
adam_b2: 0.99
lr_decay: 0.999
seed: 1234
upsample_rates:
- 4
- 4
- 2
- 2
- 2
- 2
upsample_kernel_sizes:
- 8
- 8
- 4
- 4
- 4
- 4
upsample_initial_channel: 1536
resblock_kernel_sizes:
- 3
- 7
- 11
resblock_dilation_sizes:
- - 1
  - 3
  - 5
- - 1
  - 3
  - 5
- - 1
  - 3
  - 5
activation: snakebeta
snake_logscale: true
resolutions:
- - 1024
  - 120
  - 600
- - 2048
  - 240
  - 1200
- - 512
  - 50
  - 240
mpd_reshapes:
- 2
- 3
- 5
- 7
- 11
use_spectral_norm: false
discriminator_channel_mult: 1
num_workers: 4
dist_config:
  dist_backend: nccl
  dist_url: tcp://localhost:54341
  world_size: 1
bigvnat/best_netG.pt ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:970ca75ee4d5ce583e9396a4534acb14971ea2b4f1c22e038f476680c868a789
size 449217313
maa2/maa2.ckpt ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7621db6654b1a96398cf20edd6bc783ba8b7d4bc074e2ac42d609f0426480f19
size 7308070914
music_generation/119.ckpt ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:988e12b6ac963968a9214a5ec6a91abb1904b299639c54173a6432ea1f3631c8
size 4592631803