# Accelerating Diffusion-based Singing Voice Conversion through Consistency Distillation
<br>
<div align="center">
<img src="../../../imgs/svc/DiffComoSVC.png" width="90%">
</div>
<br>
This is an implementation of [Consistency Models](https://arxiv.org/abs/2303.01469) for accelerating diffusion-based singing voice conversion. The overall architecture follows "[Leveraging Content-based Features from Multiple Acoustic Models for Singing Voice Conversion](https://arxiv.org/abs/2310.11160)" (NeurIPS 2023 Workshop on Machine Learning for Audio), with only a slight modification applied to the acoustic model. Specifically,
* The acoustic model consists of a Conformer that generates a coarse spectrogram and a diffusion decoder based on a bidirectional non-causal dilated CNN that refines the coarse spectrogram. This is similar to [CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model](https://comospeech.github.io/).
* To accelerate the diffusion model, we apply consistency distillation from [Consistency Models](https://arxiv.org/abs/2303.01469). For the teacher model, the diffusion schedule of the diffusion decoder follows [Karras diffusion](https://arxiv.org/abs/2206.00364). When distilling the teacher model, the condition encoder and the Conformer part of the acoustic model are frozen, while the diffusion decoder is updated via an exponential moving average. See the figure above for details; a minimal sketch of the consistency-function parameterization is given after this list.
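For intuition, a consistency model parameterizes the denoiser so that it reduces to the identity at the minimum noise level, which is what permits one-step generation after distillation. Below is a minimal sketch of this parameterization following [Consistency Models](https://arxiv.org/abs/2303.01469); `network` is a hypothetical stand-in for the diffusion decoder, not the actual module in this repository:
```python
import torch

def consistency_function(network, x, sigma, sigma_min=0.002, sigma_data=0.5):
    """f(x, sigma) = c_skip(sigma) * x + c_out(sigma) * F(x, sigma).

    At sigma = sigma_min, c_skip = 1 and c_out = 0, so f(x, sigma_min) = x:
    the boundary condition that makes one-step sampling possible.
    """
    c_skip = sigma_data**2 / ((sigma - sigma_min) ** 2 + sigma_data**2)
    c_out = sigma_data * (sigma - sigma_min) / (sigma_data**2 + sigma**2) ** 0.5
    return c_skip * x + c_out * network(x, sigma)
```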
There are five stages in total:
1. Data preparation
2. Features extraction
3. Teacher Model Training
4. Consistency Distillation
5. Inference/conversion
## 1. Data Preparation
### Dataset Download
By default, we utilize five datasets for training: M4Singer, Opencpop, OpenSinger, SVCC, and VCTK. Instructions for downloading them are detailed [here](../../datasets/README.md).
### Configuration
Specify the dataset paths in `exp_config.json`. Note that you can change the `dataset` list to use your preferred datasets.
```json
"dataset": [
"m4singer",
"opencpop",
"opensinger",
"svcc",
"vctk"
],
"dataset_path": {
// TODO: Fill in your dataset path
"m4singer": "[M4Singer dataset path]",
"opencpop": "[Opencpop dataset path]",
"opensinger": "[OpenSinger dataset path]",
"svcc": "[SVCC dataset path]",
"vctk": "[VCTK dataset path]"
},
```
## 2. Features Extraction
### Content-based Pretrained Models Download
By default, we utilize Whisper and ContentVec to extract content features. Instructions for downloading them are detailed [here](../../../pretrained/README.md).
### Configuration
Specify the dataset path and the output path for saving the processed data and the training model in `exp_config.json`:
```json
// TODO: Fill in the output log path
"log_dir": "[Your path to save logs and checkpoints]",
"preprocess": {
// TODO: Fill in the output data path
"processed_dir": "[Your path to save processed data]",
...
},
```
### Run
Run `run.sh` as the preprocessing stage (set `--stage 1`).
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 1
```
Note: `CUDA_VISIBLE_DEVICES` is set to `"0"` by default. You can change it when running `run.sh` by specifying, for example, `--gpu "1"`.
## 3. Teacher Model Training
### Configuration
Set `distill` in `config/comosvc.json` to `false` for teacher model training. You can also specify the detailed configuration of the Conformer encoder and the diffusion process here:
```JSON
"comosvc":{
"distill": false,
// conformer encoder
"input_dim": 384,
"output_dim": 100,
"n_heads": 2,
"n_layers": 6,
"filter_channels":512,
// karras diffusion
"P_mean": -1.2,
"P_std": 1.2,
"sigma_data": 0.5,
"sigma_min": 0.002,
"sigma_max": 80,
"rho": 7,
"n_timesteps": 40,
},
```
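The `sigma_min`, `sigma_max`, `rho`, and `n_timesteps` entries above define the noise schedule of [Karras diffusion](https://arxiv.org/abs/2206.00364). For reference, here is a minimal standalone sketch of how such a schedule is typically computed (not the repository's actual code):
```python
import numpy as np

def karras_sigmas(n_timesteps=40, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """sigma_i = (sigma_max^(1/rho) + i/(N-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho,
    a schedule that decays from sigma_max down to sigma_min (Karras et al., Eq. 5)."""
    ramp = np.linspace(0.0, 1.0, n_timesteps)
    min_inv_rho = sigma_min ** (1.0 / rho)
    max_inv_rho = sigma_max ** (1.0 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

print(karras_sigmas()[[0, -1]])  # ~[80.0, 0.002]
```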
We provide the default hyperparameters in `exp_config.json`. They work on a single NVIDIA GPU with 24 GB of memory. You can adjust them based on your GPU machines.
```json
"train": {
"batch_size": 32,
...
"adamw": {
"lr": 2.0e-4
},
...
}
```
### Run
Run `run.sh` as the training stage (set `--stage 2`), specifying an experiment name in the following command. The TensorBoard logs and checkpoints will be saved in `[Your path to save logs and checkpoints]/[YourExptName]`.
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 2 --name [YourExptName]
```
Note: `CUDA_VISIBLE_DEVICES` is set to `"0"` by default. You can specify it when running `run.sh`, for example:
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 2 --name [YourExptName] --gpu "0,1,2,3"
```
## 4. Consistency Distillation
### Configuration
Set `distill` in `config/comosvc.json` to `true` for consistency distillation, and specify the `teacher_model_path` pointing to the trained teacher checkpoint. You can also specify the detailed configuration of the Conformer encoder and the diffusion process here:
```JSON
"model": {
"teacher_model_path":"[Your_teacher_model_checkpoint].bin",
...
"comosvc":{
"distill": true,
// conformer encoder
"input_dim": 384,
"output_dim": 100,
"n_heads": 2,
"n_layers": 6,
"filter_channels":512,
// karras diffusion
"P_mean": -1.2,
"P_std": 1.2,
"sigma_data": 0.5,
"sigma_min": 0.002,
"sigma_max": 80,
"rho": 7,
"n_timesteps": 40,
},
```
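To make the distillation step concrete, here is a schematic single training iteration of consistency distillation from [Consistency Models](https://arxiv.org/abs/2303.01469): a clean target `x0` is noised to the higher of two adjacent levels on the Karras schedule, the frozen teacher takes one Euler step of the probability-flow ODE down to the lower level, and the student is pulled toward an EMA target network. All names (`student`, `target`, `teacher_denoise`) are hypothetical placeholders, and a simple squared error stands in for the distance metric; this is not the repository's actual training code:
```python
import torch

def distillation_step(student, target, teacher_denoise, x0, sigmas, mu=0.95):
    """One consistency-distillation iteration.

    `student` and `target` are modules computing the consistency function
    f(x, sigma); `target` holds the EMA weights. `teacher_denoise` is the
    frozen teacher's denoiser D(x, sigma); `sigmas` decays from sigma_max
    to sigma_min (the Karras schedule).
    """
    n = torch.randint(0, len(sigmas) - 1, (1,)).item()
    s_hi, s_lo = sigmas[n], sigmas[n + 1]              # adjacent levels, s_hi > s_lo
    x_hi = x0 + s_hi * torch.randn_like(x0)            # noise the target to level s_hi
    with torch.no_grad():                              # teacher is frozen
        d = (x_hi - teacher_denoise(x_hi, s_hi)) / s_hi  # probability-flow ODE drift
        x_lo = x_hi + (s_lo - s_hi) * d                # one Euler step of the teacher ODE
    loss = torch.mean((student(x_hi, s_hi) - target(x_lo, s_lo).detach()) ** 2)
    loss.backward()
    with torch.no_grad():                              # EMA update of the target network
        for p_t, p_s in zip(target.parameters(), student.parameters()):
            p_t.mul_(mu).add_(p_s, alpha=1 - mu)
    return loss
```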
We provide the default hyperparameters in `exp_config.json`. They work on a single NVIDIA GPU with 24 GB of memory. You can adjust them based on your GPU machines.
```json
"train": {
"batch_size": 32,
...
"adamw": {
"lr": 2.0e-4
},
...
}
```
### Run
Run `run.sh` as the training stage (set `--stage 2`), specifying an experiment name in the following command. The TensorBoard logs and checkpoints will be saved in `[Your path to save logs and checkpoints]/[YourExptName]`.
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 2 --name [YourExptName]
```
Note: `CUDA_VISIBLE_DEVICES` is set to `"0"` by default. You can specify it when running `run.sh`, for example:
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 2 --name [YourExptName] --gpu "0,1,2,3"
```
## 5. Inference/Conversion
### Pretrained Vocoder Download
We fine-tune the official BigVGAN pretrained model with over 120 hours of singing voice data. The benefits of fine-tuning have been investigated in our paper (see this [demo page](https://www.zhangxueyao.com/data/MultipleContentsSVC/vocoder.html)). The final pretrained singing voice vocoder is released [here](../../../pretrained/README.md#amphion-singing-bigvgan) (called `Amphion Singing BigVGAN`).
### Run
For inference/conversion, you need to specify the following configurations when running `run.sh`:
| Parameters | Description | Example |
| --------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| `--infer_expt_dir` | The experimental directory which contains `checkpoint` | `[Your path to save logs and checkpoints]/[YourExptName]` |
| `--infer_output_dir` | The output directory to save inferred audios. | `[Your path to save logs and checkpoints]/[YourExptName]/result` |
| `--infer_source_file` or `--infer_source_audio_dir` | The inference source (can be a json file or a dir). | The `infer_source_file` could be `[Your path to save processed data]/[YourDataset]/test.json`, and the `infer_source_audio_dir` is a folder which includes several audio files (*.wav, *.mp3 or *.flac). |
| `--infer_target_speaker` | The target speaker you want to convert into. You can refer to `[Your path to save logs and checkpoints]/[YourExptName]/singers.json` to choose a trained speaker. | For opencpop dataset, the speaker name would be `opencpop_female1`. |
| `--infer_key_shift`                                 | How many semitones you want to transpose.                    | `"autoshift"` (by default), `3`, `-3`, etc.                  |
For example, if you want to make `opencpop_female1` sing the songs in `[Your Audios Folder]`, just run:
```bash
cd Amphion
sh egs/svc/DiffComoSVC/run.sh --stage 3 --gpu "0" \
--infer_expt_dir [Your path to save logs and checkpoints]/[YourExptName] \
--infer_output_dir [Your path to save logs and checkpoints]/[YourExptName]/result \
--infer_source_audio_dir [Your Audios Folder] \
--infer_target_speaker "opencpop_female1" \
--infer_key_shift "autoshift"
```
In particular, you can configure the number of inference steps for the teacher model by setting `inference` in `exp_config` (the student model always uses one-step sampling):
```json
"inference": {
"comosvc": {
"inference_steps": 40
}
}
```
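For intuition on why only the teacher needs an `inference_steps` setting: the teacher samples by iterating over the full Karras schedule, while the distilled student maps noise to a spectrogram in a single network evaluation. A schematic comparison with hypothetical function names (not the repository's inference API):
```python
import torch

def teacher_sample(denoise, shape, sigmas):
    """Multi-step sampling: Euler steps along the Karras schedule
    (e.g. the 40 steps configured by `inference_steps` above)."""
    x = sigmas[0] * torch.randn(shape)        # start from pure noise at sigma_max
    for s_hi, s_lo in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, s_hi)) / s_hi     # probability-flow ODE drift
        x = x + (s_lo - s_hi) * d             # Euler step to the next noise level
    return x

def student_sample(consistency_fn, shape, sigma_max=80.0):
    """One-step sampling: a single consistency-function evaluation."""
    x = sigma_max * torch.randn(shape)
    return consistency_fn(x, sigma_max)
```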
# Reference
- https://github.com/zhenye234/CoMoSpeech
- https://github.com/openai/consistency_models