Ccbb121 committed on
Commit 671a784
1 Parent(s): 50685c9
readme.md ADDED
# ProDiff: Progressive Fast Diffusion Model For High-Quality Text-to-Speech

#### Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, Yi Ren

PyTorch implementation of [ProDiff (ACM Multimedia'22)](https://arxiv.org/abs/2207.06389): a conditional diffusion probabilistic model that generates high-fidelity speech efficiently.

[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2207.06389)
[![GitHub Stars](https://img.shields.io/github/stars/Rongjiehuang/ProDiff?style=social)](https://github.com/Rongjiehuang/ProDiff)
![visitors](https://visitor-badge.glitch.me/badge?page_id=Rongjiehuang/ProDiff)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue)](https://huggingface.co/spaces/Rongjiehuang/ProDiff)

We provide our implementation and pretrained models as open source in this repository.

Visit our [demo page](https://prodiff.github.io/) for audio samples.

## News
- April 2022: Our previous work **[FastDiff](https://arxiv.org/abs/2204.09934) (IJCAI 2022)** was released on [GitHub](https://github.com/Rongjiehuang/FastDiff).
- September 2022: **[ProDiff](https://arxiv.org/abs/2207.06389) (ACM Multimedia 2022)** was released on GitHub.

## Key Features
- An **extremely fast** diffusion text-to-speech synthesis pipeline, suitable for potential **industrial deployment**.
- A **tutorial and codebase** for speech diffusion models.
- More **supported diffusion mechanisms** (e.g., guided diffusion) will be added.

## Quick Start
We provide an example of how to generate high-fidelity samples using ProDiff.

To try it on your own dataset, clone this repo onto a machine with an NVIDIA GPU (CUDA + cuDNN) and follow the instructions below.

### Supported Datasets and Pretrained Models

Run the following command to download the weights:
```python
from huggingface_hub import snapshot_download
downloaded_path = snapshot_download(repo_id="Rongjiehuang/ProDiff")
```

then move the downloaded checkpoints to `checkpoints/$Model/model_ckpt_steps_*.ckpt`:
```bash
mv ${downloaded_path}/checkpoints/ checkpoints/
```
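
Equivalently, from Python, a small sketch using only the standard library (assumes `downloaded_path` from the snippet above):
```python
import shutil

# Move the downloaded checkpoints folder into the repo's checkpoints/ directory.
shutil.move(f"{downloaded_path}/checkpoints", "checkpoints")
```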

Details of each folder are as follows:

| Model           | Dataset  | Config                                        |
|-----------------|----------|-----------------------------------------------|
| ProDiff Teacher | LJSpeech | `modules/ProDiff/config/prodiff_teacher.yaml` |
| ProDiff         | LJSpeech | `modules/ProDiff/config/prodiff.yaml`         |

More supported datasets are coming soon.

### Dependencies
See requirements in `requirements.txt`:
- [pytorch](https://github.com/pytorch/pytorch)
- [librosa](https://github.com/librosa/librosa)
- [NATSpeech](https://github.com/NATSpeech/NATSpeech)
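
To install them in one go, the usual setup (assuming a Python environment with a CUDA-enabled PyTorch build) is:
```bash
pip install -r requirements.txt
```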

### Multi-GPU
By default, this implementation uses as many GPUs in parallel as `torch.cuda.device_count()` returns.
You can specify which GPUs to use by setting the `CUDA_VISIBLE_DEVICES` environment variable before running the training module.
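For example, a minimal sketch that restricts training to GPUs 0 and 1 (GPU ids depend on your machine):
```bash
# Expose only GPUs 0 and 1 to PyTorch; torch.cuda.device_count() will then report 2.
CUDA_VISIBLE_DEVICES=0,1 python tasks/run.py --config modules/ProDiff/config/prodiff_teacher.yaml --exp_name ProDiff_Teacher --reset
```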

## Extremely Fast Text-to-Speech with Diffusion Probabilistic Models

Here we provide a speech synthesis pipeline using diffusion probabilistic models: ProDiff (acoustic model) + FastDiff (neural vocoder). [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue)](https://huggingface.co/spaces/Rongjiehuang/ProDiff)

1. Prepare the acoustic model (ProDiff or ProDiff Teacher): download the LJSpeech checkpoint and put it in `checkpoints/ProDiff` or `checkpoints/ProDiff_Teacher`.
2. Prepare the neural vocoder (FastDiff): download the LJSpeech checkpoint and put it in `checkpoints/FastDiff`.
3. Specify the input text via `$txt`, and set `N` for reverse sampling in the neural vocoder; `N` is a trade-off between quality and speed.
4. Run the following command for extremely fast synthesis `(2-iter ProDiff + 4-iter FastDiff)`:
```bash
CUDA_VISIBLE_DEVICES=$GPU python inference/ProDiff.py --config modules/ProDiff/config/prodiff.yaml --exp_name ProDiff --hparams="N=4,text='$txt'" --reset
```
Generated wav files are saved in `infer_out` by default.<br>
Note: for better quality, it is recommended to finetune the FastDiff neural vocoder [here](https://github.com/Rongjiehuang/FastDiff).

5. Enjoy the speed-quality trade-off with `(4-iter ProDiff Teacher + 6-iter FastDiff)`:
```bash
CUDA_VISIBLE_DEVICES=$GPU python inference/ProDiff_teacher.py --config modules/ProDiff/config/prodiff_teacher.yaml --exp_name ProDiff_Teacher --hparams="N=6,text='$txt'" --reset
```
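
As a concrete invocation of step 4 with the placeholders filled in (the GPU id and sentence are only examples):
```bash
CUDA_VISIBLE_DEVICES=0 python inference/ProDiff.py --config modules/ProDiff/config/prodiff.yaml --exp_name ProDiff --hparams="N=4,text='The quick brown fox jumps over the lazy dog.'" --reset
```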

# Train your own model

### Data Preparation and Configuration
1. Set `raw_data_dir`, `processed_data_dir`, and `binary_data_dir` in the config file (see the sketch at the end of this subsection).
2. Download the dataset to `raw_data_dir`. Note: the dataset structure needs to follow `egs/datasets/audio/*/pre_align.py`, or you can rewrite `pre_align.py` to match your dataset.
3. Preprocess the dataset:
```bash
# Preprocess step: unify the file structure.
python data_gen/tts/bin/pre_align.py --config $path/to/config
# Align step: MFA alignment.
python data_gen/tts/runs/train_mfa_align.py --config $CONFIG_NAME
# Binarization step: binarize data for fast IO.
CUDA_VISIBLE_DEVICES=$GPU python data_gen/tts/bin/binarize.py --config $path/to/config
```

You can also build a dataset via [NATSpeech](https://github.com/NATSpeech/NATSpeech), which shares a common MFA data-processing procedure.
We also provide our processed LJSpeech dataset [here](https://zjueducn-my.sharepoint.com/:f:/g/personal/rongjiehuang_zju_edu_cn/Eo7r83WZPK1GmlwvFhhIKeQBABZpYW3ec9c8WZoUV5HhbA?e=9QoWnf).
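
A minimal sketch of the three path settings from step 1 (the directory names are placeholders; all other keys come from the YAML configs under `modules/ProDiff/config/`):
```yaml
# Hypothetical layout -- point these at your own storage.
raw_data_dir: data/raw/LJSpeech-1.1
processed_data_dir: data/processed/ljspeech
binary_data_dir: data/binary/ljspeech
```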

### Training the ProDiff Teacher
```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config modules/ProDiff/config/prodiff_teacher.yaml --exp_name ProDiff_Teacher --reset
```

### Training ProDiff
```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config modules/ProDiff/config/prodiff.yaml --exp_name ProDiff --reset
```
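
The two stages are meant to run in this order, since ProDiff is distilled from the teacher; a sketch of a full single-GPU run (GPU id assumed to be 0):
```bash
CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config modules/ProDiff/config/prodiff_teacher.yaml --exp_name ProDiff_Teacher --reset
CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config modules/ProDiff/config/prodiff.yaml --exp_name ProDiff --reset
```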

### Inference using ProDiff Teacher

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config modules/ProDiff/config/prodiff_teacher.yaml --exp_name ProDiff_Teacher --infer
```

### Inference using ProDiff

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config modules/ProDiff/config/prodiff.yaml --exp_name ProDiff --infer
```

## Acknowledgements
This implementation uses parts of the code from the following GitHub repos:
[FastDiff](https://github.com/Rongjiehuang/FastDiff),
[DiffSinger](https://github.com/MoonInTheRiver/DiffSinger),
[NATSpeech](https://github.com/NATSpeech/NATSpeech),
as described in our code.

## Citations
If you find this code useful in your research, please cite our work:
```bib
@inproceedings{huang2022prodiff,
  title     = {ProDiff: Progressive Fast Diffusion Model For High-Quality Text-to-Speech},
  author    = {Huang, Rongjie and Zhao, Zhou and Liu, Huadai and Liu, Jinglin and Cui, Chenye and Ren, Yi},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  year      = {2022}
}

@inproceedings{huang2022fastdiff,
  title     = {FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis},
  author    = {Huang, Rongjie and Lam, Max W. Y. and Wang, Jun and Su, Dan and Yu, Dong and Ren, Yi and Zhao, Zhou},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  year      = {2022}
}
```

## Disclaimer
Any organization or individual is prohibited from using any technology mentioned in this paper to generate someone's speech without his or her consent, including but not limited to government leaders, political figures, and celebrities. Failure to comply with this item could put you in violation of copyright laws.
requirements.txt ADDED
matplotlib
librosa==0.8.0
tqdm
pandas
numba==0.53.1
numpy
scipy==1.3
PyYAML
tensorboardX
pyloudnorm
setuptools>=41.0.0
g2p_en
resemblyzer
webrtcvad
tensorboard==2.6.0
scikit-learn==0.24.1
scikit-image==0.16.2
textgrid
jiwer
pycwt
PyWavelets
praat-parselmouth==0.3.3
jieba
einops
chardet
usr/.gitkeep ADDED
File without changes
usr/__init__.py ADDED
File without changes
usr/diff/diffusion.py ADDED
import math
import random
from functools import partial
from inspect import isfunction
from pathlib import Path
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn
from tqdm import tqdm
from einops import rearrange

from modules.fastspeech.fs2 import FastSpeech2
from utils.hparams import hparams


def exists(x):
    return x is not None


def default(val, d):
    if exists(val):
        return val
    return d() if isfunction(d) else d


def cycle(dl):
    while True:
        for data in dl:
            yield data


def num_to_groups(num, divisor):
    groups = num // divisor
    remainder = num % divisor
    arr = [divisor] * groups
    if remainder > 0:
        arr.append(remainder)
    return arr


class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x, *args, **kwargs):
        return self.fn(x, *args, **kwargs) + x


class SinusoidalPosEmb(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, x):
        device = x.device
        half_dim = self.dim // 2
        emb = math.log(10000) / (half_dim - 1)
        emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
        emb = x[:, None] * emb[None, :]
        emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
        return emb


class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))


class Upsample(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.ConvTranspose2d(dim, dim, 4, 2, 1)

    def forward(self, x):
        return self.conv(x)


class Downsample(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, 3, 2, 1)

    def forward(self, x):
        return self.conv(x)


class Rezero(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
        self.g = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return self.fn(x) * self.g


# building block modules

class Block(nn.Module):
    def __init__(self, dim, dim_out, groups=8):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim_out, 3, padding=1),
            nn.GroupNorm(groups, dim_out),
            Mish()
        )

    def forward(self, x):
        return self.block(x)


class ResnetBlock(nn.Module):
    def __init__(self, dim, dim_out, *, time_emb_dim, groups=8):
        super().__init__()
        self.mlp = nn.Sequential(
            Mish(),
            nn.Linear(time_emb_dim, dim_out)
        )

        self.block1 = Block(dim, dim_out)
        self.block2 = Block(dim_out, dim_out)
        self.res_conv = nn.Conv2d(dim, dim_out, 1) if dim != dim_out else nn.Identity()

    def forward(self, x, time_emb):
        h = self.block1(x)
        h += self.mlp(time_emb)[:, :, None, None]
        h = self.block2(h)
        return h + self.res_conv(x)


class LinearAttention(nn.Module):
    def __init__(self, dim, heads=4, dim_head=32):
        super().__init__()
        self.heads = heads
        hidden_dim = dim_head * heads
        self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False)
        self.to_out = nn.Conv2d(hidden_dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        qkv = self.to_qkv(x)
        q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads=self.heads, qkv=3)
        k = k.softmax(dim=-1)
        context = torch.einsum('bhdn,bhen->bhde', k, v)
        out = torch.einsum('bhde,bhdn->bhen', context, q)
        out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
        return self.to_out(out)


# gaussian diffusion trainer class

def extract(a, t, x_shape):
    b, *_ = t.shape
    out = a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))


def noise_like(shape, device, repeat=False):
    repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
    noise = lambda: torch.randn(shape, device=device)
    return repeat_noise() if repeat else noise()


def cosine_beta_schedule(timesteps, s=0.008):
    """
    cosine schedule
    as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
    """
    steps = timesteps + 1
    x = np.linspace(0, steps, steps)
    alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return np.clip(betas, a_min=0, a_max=0.999)


class GaussianDiffusion(nn.Module):
    def __init__(self, phone_encoder, out_dims, denoise_fn,
                 timesteps=1000, loss_type='l1', betas=None, spec_min=None, spec_max=None):
        super().__init__()
        self.denoise_fn = denoise_fn
        # NOTE: FastSpeech2MIDI is referenced below but not imported in this file; it must
        # be importable from the surrounding project when hparams['use_midi'] is set.
        if hparams.get('use_midi') is not None and hparams['use_midi']:
            self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims)
        else:
            self.fs2 = FastSpeech2(phone_encoder, out_dims)
        self.fs2.decoder = None
        self.mel_bins = out_dims

        if exists(betas):
            betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
        else:
            betas = cosine_beta_schedule(timesteps)

        alphas = 1. - betas
        alphas_cumprod = np.cumprod(alphas, axis=0)
        alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])

        timesteps, = betas.shape
        self.num_timesteps = int(timesteps)
        self.loss_type = loss_type

        to_torch = partial(torch.tensor, dtype=torch.float32)

        self.register_buffer('betas', to_torch(betas))
        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
        self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))

        # calculations for diffusion q(x_t | x_{t-1}) and others
        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))

        # calculations for posterior q(x_{t-1} | x_t, x_0)
        posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
        # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
        self.register_buffer('posterior_variance', to_torch(posterior_variance))
        # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
        self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
        self.register_buffer('posterior_mean_coef1', to_torch(
            betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
        self.register_buffer('posterior_mean_coef2', to_torch(
            (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))

        self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
        self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])

    def q_mean_variance(self, x_start, t):
        mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
        variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
        log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
        return mean, variance, log_variance

    def predict_start_from_noise(self, x_t, t, noise):
        return (
            extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
            extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
        )

    def q_posterior(self, x_start, x_t, t):
        posterior_mean = (
            extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
            extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
        )
        posterior_variance = extract(self.posterior_variance, t, x_t.shape)
        posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
        return posterior_mean, posterior_variance, posterior_log_variance_clipped

    def p_mean_variance(self, x, t, cond, clip_denoised: bool):
        noise_pred = self.denoise_fn(x, t, cond=cond)
        x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)

        if clip_denoised:
            x_recon.clamp_(-1., 1.)

        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
        return model_mean, posterior_variance, posterior_log_variance

    @torch.no_grad()
    def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
        b, *_, device = *x.shape, x.device
        model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
        noise = noise_like(x.shape, device, repeat_noise)
        # no noise when t == 0
        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
        return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise

    def q_sample(self, x_start, t, noise=None):
        noise = default(noise, lambda: torch.randn_like(x_start))
        return (
            extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
            extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
        )

    def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
        noise = default(noise, lambda: torch.randn_like(x_start))

        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
        x_recon = self.denoise_fn(x_noisy, t, cond)

        if self.loss_type == 'l1':
            if nonpadding is not None:
                loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
            else:
                # print('are you sure w/o nonpadding?')
                loss = (noise - x_recon).abs().mean()

        elif self.loss_type == 'l2':
            loss = F.mse_loss(noise, x_recon)
        else:
            raise NotImplementedError()

        return loss

    def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
                ref_mels=None, f0=None, uv=None, energy=None, infer=False):
        b, *_, device = *txt_tokens.shape, txt_tokens.device
        ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
                       skip_decoder=True, infer=infer)
        cond = ret['decoder_inp'].transpose(1, 2)
        if not infer:
            t = torch.randint(0, self.num_timesteps, (b,), device=device).long()
            x = ref_mels
            x = self.norm_spec(x)
            x = x.transpose(1, 2)[:, None, :, :]  # [B, 1, M, T]
            nonpadding = (mel2ph != 0).float()
            ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding)
        else:
            t = self.num_timesteps
            shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
            x = torch.randn(shape, device=device)
            for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
                x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
            x = x[:, 0].transpose(1, 2)
            ret['mel_out'] = self.denorm_spec(x)

        return ret

    def norm_spec(self, x):
        return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1

    def denorm_spec(self, x):
        return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min

    def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph):
        return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph)

    def out2mel(self, x):
        return x
usr/diff/net.py ADDED
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

from math import sqrt

from .diffusion import Mish
from utils.hparams import hparams

Linear = nn.Linear
ConvTranspose2d = nn.ConvTranspose2d


class AttrDict(dict):
    def __init__(self, *args, **kwargs):
        super(AttrDict, self).__init__(*args, **kwargs)
        self.__dict__ = self

    def override(self, attrs):
        if isinstance(attrs, dict):
            self.__dict__.update(**attrs)
        elif isinstance(attrs, (list, tuple, set)):
            for attr in attrs:
                self.override(attr)
        elif attrs is not None:
            raise NotImplementedError
        return self


class SinusoidalPosEmb(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, x):
        device = x.device
        half_dim = self.dim // 2
        emb = math.log(10000) / (half_dim - 1)
        emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
        emb = x[:, None] * emb[None, :]
        emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
        return emb


def Conv1d(*args, **kwargs):
    layer = nn.Conv1d(*args, **kwargs)
    nn.init.kaiming_normal_(layer.weight)
    return layer


@torch.jit.script
def silu(x):
    return x * torch.sigmoid(x)


class ResidualBlock(nn.Module):
    def __init__(self, encoder_hidden, residual_channels, dilation):
        super().__init__()
        self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation)
        self.diffusion_projection = Linear(residual_channels, residual_channels)
        self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1)
        self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1)

    def forward(self, x, conditioner, diffusion_step):
        diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1)
        conditioner = self.conditioner_projection(conditioner)
        y = x + diffusion_step

        y = self.dilated_conv(y) + conditioner

        gate, filter = torch.chunk(y, 2, dim=1)
        y = torch.sigmoid(gate) * torch.tanh(filter)

        y = self.output_projection(y)
        residual, skip = torch.chunk(y, 2, dim=1)
        return (x + residual) / sqrt(2.0), skip


class DiffNet(nn.Module):
    def __init__(self, in_dims=80):
        super().__init__()
        self.params = params = AttrDict(
            # Model params
            encoder_hidden=hparams['hidden_size'],
            residual_layers=hparams['residual_layers'],
            residual_channels=hparams['residual_channels'],
            dilation_cycle_length=hparams['dilation_cycle_length'],
        )
        self.input_projection = Conv1d(in_dims, params.residual_channels, 1)
        self.diffusion_embedding = SinusoidalPosEmb(params.residual_channels)
        dim = params.residual_channels
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4),
            Mish(),
            nn.Linear(dim * 4, dim)
        )
        self.residual_layers = nn.ModuleList([
            ResidualBlock(params.encoder_hidden, params.residual_channels, 2 ** (i % params.dilation_cycle_length))
            for i in range(params.residual_layers)
        ])
        self.skip_projection = Conv1d(params.residual_channels, params.residual_channels, 1)
        self.output_projection = Conv1d(params.residual_channels, in_dims, 1)
        nn.init.zeros_(self.output_projection.weight)

    def forward(self, spec, diffusion_step, cond):
        """
        :param spec: [B, 1, M, T]
        :param diffusion_step: [B, 1]
        :param cond: [B, M, T]
        :return:
        """
        x = spec[:, 0]
        x = self.input_projection(x)  # x [B, residual_channel, T]

        x = F.relu(x)
        diffusion_step = self.diffusion_embedding(diffusion_step)
        diffusion_step = self.mlp(diffusion_step)
        skip = []
        for layer_id, layer in enumerate(self.residual_layers):
            x, skip_connection = layer(x, cond, diffusion_step)
            skip.append(skip_connection)

        x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers))
        x = self.skip_projection(x)
        x = F.relu(x)
        x = self.output_projection(x)  # [B, 80, T]
        return x[:, None, :, :]
usr/diff/shallow_diffusion_tts.py ADDED
import math
import random
from functools import partial
from inspect import isfunction
from pathlib import Path
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn
from tqdm import tqdm
from einops import rearrange

from modules.fastspeech.fs2 import FastSpeech2
from utils.hparams import hparams


def vpsde_beta_t(t, T, min_beta, max_beta):
    t_coef = (2 * t - 1) / (T ** 2)
    return 1. - np.exp(-min_beta / T - 0.5 * (max_beta - min_beta) * t_coef)


def _logsnr_schedule_cosine(t, *, logsnr_min, logsnr_max):
    b = np.arctan(np.exp(-0.5 * logsnr_max))
    a = np.arctan(np.exp(-0.5 * logsnr_min)) - b
    return -2. * np.log(np.tan(a * t + b))


def get_noise_schedule_list(schedule_mode, timesteps, min_beta=0.0, max_beta=0.01, s=0.008):
    if schedule_mode == "linear":
        schedule_list = np.linspace(0.000001, 0.01, timesteps)
    elif schedule_mode == "cosine":
        steps = timesteps + 1
        x = np.linspace(0, steps, steps)
        alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
        alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
        betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
        schedule_list = np.clip(betas, a_min=0, a_max=0.999)
    elif schedule_mode == "vpsde":
        schedule_list = np.array([
            vpsde_beta_t(t, timesteps, min_beta, max_beta) for t in range(1, timesteps + 1)])
    elif schedule_mode == "logsnr":
        schedule_list = np.array([
            _logsnr_schedule_cosine(t / timesteps, logsnr_min=-20.0, logsnr_max=20.0) for t in range(1, timesteps + 1)])
    else:
        raise NotImplementedError
    return schedule_list


def exists(x):
    return x is not None


def default(val, d):
    if exists(val):
        return val
    return d() if isfunction(d) else d


# gaussian diffusion trainer class

def extract(a, t, x_shape):
    b, *_ = t.shape
    out = a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))


def noise_like(shape, device, repeat=False):
    repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
    noise = lambda: torch.randn(shape, device=device)
    return repeat_noise() if repeat else noise()


def linear_beta_schedule(timesteps, max_beta=hparams.get('max_beta', 0.01)):
    """
    linear schedule
    """
    betas = np.linspace(1e-4, max_beta, timesteps)
    return betas


def cosine_beta_schedule(timesteps, s=0.008):
    """
    cosine schedule
    as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
    """
    steps = timesteps + 1
    x = np.linspace(0, steps, steps)
    alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return np.clip(betas, a_min=0, a_max=0.999)


beta_schedule = {
    "cosine": cosine_beta_schedule,
    "linear": linear_beta_schedule,
}


class GaussianDiffusion(nn.Module):
    def __init__(self, phone_encoder, out_dims, denoise_fn,
                 timesteps=1000, K_step=1000, loss_type=hparams.get('diff_loss_type', 'l1'),
                 betas=None, spec_min=None, spec_max=None):
        super().__init__()
        self.denoise_fn = denoise_fn
        # NOTE: FastSpeech2MIDI is referenced below but not imported in this file; it must
        # be importable from the surrounding project when hparams['use_midi'] is set.
        if hparams.get('use_midi') is not None and hparams['use_midi']:
            self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims)
        else:
            self.fs2 = FastSpeech2(phone_encoder, out_dims)
        self.mel_bins = out_dims

        if exists(betas):
            betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
        else:
            if 'schedule_type' in hparams.keys():
                betas = beta_schedule[hparams['schedule_type']](timesteps)
            else:
                betas = cosine_beta_schedule(timesteps)

        alphas = 1. - betas
        alphas_cumprod = np.cumprod(alphas, axis=0)
        alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])

        timesteps, = betas.shape
        self.num_timesteps = int(timesteps)
        self.K_step = K_step
        self.loss_type = loss_type

        to_torch = partial(torch.tensor, dtype=torch.float32)

        self.register_buffer('betas', to_torch(betas))
        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
        self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))

        # calculations for diffusion q(x_t | x_{t-1}) and others
        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))

        # calculations for posterior q(x_{t-1} | x_t, x_0)
        posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
        # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
        self.register_buffer('posterior_variance', to_torch(posterior_variance))
        # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
        self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
        self.register_buffer('posterior_mean_coef1', to_torch(
            betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
        self.register_buffer('posterior_mean_coef2', to_torch(
            (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))

        self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
        self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])

    def q_mean_variance(self, x_start, t):
        mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
        variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
        log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
        return mean, variance, log_variance

    def predict_start_from_noise(self, x_t, t, noise):
        return (
            extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
            extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
        )

    def q_posterior(self, x_start, x_t, t):
        posterior_mean = (
            extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
            extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
        )
        posterior_variance = extract(self.posterior_variance, t, x_t.shape)
        posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
        return posterior_mean, posterior_variance, posterior_log_variance_clipped

    def p_mean_variance(self, x, t, cond, clip_denoised: bool):
        noise_pred = self.denoise_fn(x, t, cond=cond)
        x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)

        if clip_denoised:
            x_recon.clamp_(-1., 1.)

        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
        return model_mean, posterior_variance, posterior_log_variance

    @torch.no_grad()
    def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
        b, *_, device = *x.shape, x.device
        model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
        noise = noise_like(x.shape, device, repeat_noise)
        # no noise when t == 0
        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
        return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise

    def q_sample(self, x_start, t, noise=None):
        noise = default(noise, lambda: torch.randn_like(x_start))
        return (
            extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
            extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
        )

    def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
        noise = default(noise, lambda: torch.randn_like(x_start))

        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
        x_recon = self.denoise_fn(x_noisy, t, cond)

        if self.loss_type == 'l1':
            if nonpadding is not None:
                loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
            else:
                # print('are you sure w/o nonpadding?')
                loss = (noise - x_recon).abs().mean()

        elif self.loss_type == 'l2':
            loss = F.mse_loss(noise, x_recon)
        else:
            raise NotImplementedError()

        return loss

    def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
                ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
        b, *_, device = *txt_tokens.shape, txt_tokens.device
        ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
                       skip_decoder=(not infer), infer=infer, **kwargs)
        cond = ret['decoder_inp'].transpose(1, 2)

        if not infer:
            t = torch.randint(0, self.K_step, (b,), device=device).long()
            x = ref_mels
            x = self.norm_spec(x)
            x = x.transpose(1, 2)[:, None, :, :]  # [B, 1, M, T]
            ret['diff_loss'] = self.p_losses(x, t, cond)
            # nonpadding = (mel2ph != 0).float()
            # ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding)
        else:
            ret['fs2_mel'] = ret['mel_out']
            fs2_mels = ret['mel_out']
            t = self.K_step
            fs2_mels = self.norm_spec(fs2_mels)
            fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]

            x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())
            if hparams.get('gaussian_start') is not None and hparams['gaussian_start']:
                print('===> gaussian start.')
                shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
                x = torch.randn(shape, device=device)
            for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
                x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
            x = x[:, 0].transpose(1, 2)
            if mel2ph is not None:  # for singing
                ret['mel_out'] = self.denorm_spec(x) * ((mel2ph > 0).float()[:, :, None])
            else:
                ret['mel_out'] = self.denorm_spec(x)
        return ret

    # def norm_spec(self, x):
    #     return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
    #
    # def denorm_spec(self, x):
    #     return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min

    def norm_spec(self, x):
        return x

    def denorm_spec(self, x):
        return x

    def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph):
        return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph)

    def out2mel(self, x):
        return x


class OfflineGaussianDiffusion(GaussianDiffusion):
    def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
                ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
        b, *_, device = *txt_tokens.shape, txt_tokens.device

        ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
                       skip_decoder=True, infer=True, **kwargs)
        cond = ret['decoder_inp'].transpose(1, 2)
        fs2_mels = ref_mels[1]
        ref_mels = ref_mels[0]

        if not infer:
            t = torch.randint(0, self.K_step, (b,), device=device).long()
            x = ref_mels
            x = self.norm_spec(x)
            x = x.transpose(1, 2)[:, None, :, :]  # [B, 1, M, T]
            ret['diff_loss'] = self.p_losses(x, t, cond)
        else:
            t = self.K_step
            fs2_mels = self.norm_spec(fs2_mels)
            fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]

            x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())

            if hparams.get('gaussian_start') is not None and hparams['gaussian_start']:
                print('===> gaussian start.')
                shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
                x = torch.randn(shape, device=device)
            for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
                x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
            x = x[:, 0].transpose(1, 2)
            ret['mel_out'] = self.denorm_spec(x)
        return ret
usr/diffspeech_task.py ADDED
import torch

import utils
from utils.hparams import hparams
from .diff.net import DiffNet
from .diff.shallow_diffusion_tts import GaussianDiffusion
from .task import DiffFsTask
from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder
from utils.pitch_utils import denorm_f0
from tasks.tts.fs2_utils import FastSpeechDataset

DIFF_DECODERS = {
    'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
}


class DiffSpeechTask(DiffFsTask):
    def __init__(self):
        super(DiffSpeechTask, self).__init__()
        self.dataset_cls = FastSpeechDataset
        self.vocoder: BaseVocoder = get_vocoder_cls(hparams)()

    def build_tts_model(self):
        mel_bins = hparams['audio_num_mel_bins']
        self.model = GaussianDiffusion(
            phone_encoder=self.phone_encoder,
            out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
            timesteps=hparams['timesteps'],
            K_step=hparams['K_step'],
            loss_type=hparams['diff_loss_type'],
            spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
        )
        if hparams['fs2_ckpt'] != '':
            utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
            # self.model.fs2.decoder = None
            for k, v in self.model.fs2.named_parameters():
                if 'predictor' not in k:
                    v.requires_grad = False

    def build_optimizer(self, model):
        self.optimizer = optimizer = torch.optim.AdamW(
            filter(lambda p: p.requires_grad, model.parameters()),
            lr=hparams['lr'],
            betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
            weight_decay=hparams['weight_decay'])
        return optimizer

    def run_model(self, model, sample, return_output=False, infer=False):
        txt_tokens = sample['txt_tokens']  # [B, T_t]
        target = sample['mels']  # [B, T_s, 80]
        # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None  # [B, T_s]
        mel2ph = sample['mel2ph']
        f0 = sample['f0']
        uv = sample['uv']
        energy = sample['energy']
        # fs2_mel = sample['fs2_mels']
        spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
        if hparams['pitch_type'] == 'cwt':
            cwt_spec = sample['cwt_spec']
            f0_mean = sample['f0_mean']
            f0_std = sample['f0_std']
            sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)

        output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
                       ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)

        losses = {}
        if 'diff_loss' in output:
            losses['mel'] = output['diff_loss']
        self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
        if hparams['use_pitch_embed']:
            self.add_pitch_loss(output, sample, losses)
        if hparams['use_energy_embed']:
            self.add_energy_loss(output['energy_pred'], energy, losses)
        if not return_output:
            return losses
        else:
            return losses, output

    def validation_step(self, sample, batch_idx):
        outputs = {}
        txt_tokens = sample['txt_tokens']  # [B, T_t]

        energy = sample['energy']
        spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
        mel2ph = sample['mel2ph']
        f0 = sample['f0']
        uv = sample['uv']

        outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)

        outputs['total_loss'] = sum(outputs['losses'].values())
        outputs['nsamples'] = sample['nsamples']
        outputs = utils.tensors_to_scalars(outputs)
        if batch_idx < hparams['num_valid_plots']:
            # model_out = self.model(
            #     txt_tokens, spk_embed=spk_embed, mel2ph=None, f0=None, uv=None, energy=None, ref_mels=None, inference=True)
            # self.plot_mel(batch_idx, model_out['mel_out'], model_out['fs2_mel'], name=f'diffspeech_vs_fs2_{batch_idx}')
            model_out = self.model(
                txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True)
            gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
            self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=model_out.get('f0_denorm'))
            self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'])
        return outputs

    ############
    # validation plots
    ############
    def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
        gt_wav = gt_wav[0].cpu().numpy()
        wav_out = wav_out[0].cpu().numpy()
        gt_f0 = gt_f0[0].cpu().numpy()
        f0 = f0[0].cpu().numpy()
        if is_mel:
            gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
            wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
        self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
        self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
usr/task.py ADDED
import torch

import utils
from .diff.diffusion import GaussianDiffusion
from .diff.net import DiffNet
from tasks.tts.fs2 import FastSpeech2Task
from utils.hparams import hparams


DIFF_DECODERS = {
    'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
}


class DiffFsTask(FastSpeech2Task):
    def build_tts_model(self):
        mel_bins = hparams['audio_num_mel_bins']
        self.model = GaussianDiffusion(
            phone_encoder=self.phone_encoder,
            out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
            timesteps=hparams['timesteps'],
            loss_type=hparams['diff_loss_type'],
            spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
        )

    def run_model(self, model, sample, return_output=False, infer=False):
        txt_tokens = sample['txt_tokens']  # [B, T_t]
        target = sample['mels']  # [B, T_s, 80]
        mel2ph = sample['mel2ph']  # [B, T_s]
        f0 = sample['f0']
        uv = sample['uv']
        energy = sample['energy']
        spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
        if hparams['pitch_type'] == 'cwt':
            cwt_spec = sample['cwt_spec']
            f0_mean = sample['f0_mean']
            f0_std = sample['f0_std']
            sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)

        output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
                       ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)

        losses = {}
        if 'diff_loss' in output:
            losses['mel'] = output['diff_loss']
        self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
        if hparams['use_pitch_embed']:
            self.add_pitch_loss(output, sample, losses)
        if hparams['use_energy_embed']:
            self.add_energy_loss(output['energy_pred'], energy, losses)
        if not return_output:
            return losses
        else:
            return losses, output

    def _training_step(self, sample, batch_idx, _):
        log_outputs = self.run_model(self.model, sample)
        total_loss = sum([v for v in log_outputs.values() if isinstance(v, torch.Tensor) and v.requires_grad])
        log_outputs['batch_size'] = sample['txt_tokens'].size()[0]
        log_outputs['lr'] = self.scheduler.get_lr()[0]
        return total_loss, log_outputs

    def validation_step(self, sample, batch_idx):
        outputs = {}
        outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
        outputs['total_loss'] = sum(outputs['losses'].values())
        outputs['nsamples'] = sample['nsamples']
        outputs = utils.tensors_to_scalars(outputs)
        if batch_idx < hparams['num_valid_plots']:
            _, model_out = self.run_model(self.model, sample, return_output=True, infer=True)
            self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'])
        return outputs
utils/__init__.py ADDED
+ import time
2
+ import sys
3
+ import types
4
+
5
+ import chardet
6
+ import numpy as np
7
+ import torch
8
+ import torch.distributed as dist
9
+ from utils.ckpt_utils import load_ckpt
10
+
11
+
12
+ def reduce_tensors(metrics):
13
+ new_metrics = {}
14
+ for k, v in metrics.items():
15
+ if isinstance(v, torch.Tensor):
16
+ dist.all_reduce(v)
17
+ v = v / dist.get_world_size()
18
+ if type(v) is dict:
19
+ v = reduce_tensors(v)
20
+ new_metrics[k] = v
21
+ return new_metrics
22
+
23
+
24
+ def tensors_to_scalars(tensors):
25
+ if isinstance(tensors, torch.Tensor):
26
+ tensors = tensors.item()
27
+ return tensors
28
+ elif isinstance(tensors, dict):
29
+ new_tensors = {}
30
+ for k, v in tensors.items():
31
+ v = tensors_to_scalars(v)
32
+ new_tensors[k] = v
33
+ return new_tensors
34
+ elif isinstance(tensors, list):
35
+ return [tensors_to_scalars(v) for v in tensors]
36
+ else:
37
+ return tensors
38
+
39
+
40
+ def tensors_to_np(tensors):
41
+ if isinstance(tensors, dict):
42
+ new_np = {}
43
+ for k, v in tensors.items():
44
+ if isinstance(v, torch.Tensor):
45
+ v = v.cpu().numpy()
46
+ if type(v) is dict:
47
+ v = tensors_to_np(v)
48
+ new_np[k] = v
49
+ elif isinstance(tensors, list):
50
+ new_np = []
51
+ for v in tensors:
52
+ if isinstance(v, torch.Tensor):
53
+ v = v.cpu().numpy()
54
+ if type(v) is dict:
55
+ v = tensors_to_np(v)
56
+ new_np.append(v)
57
+ elif isinstance(tensors, torch.Tensor):
58
+ v = tensors
59
+ if isinstance(v, torch.Tensor):
60
+ v = v.cpu().numpy()
61
+ if type(v) is dict:
62
+ v = tensors_to_np(v)
63
+ new_np = v
64
+ else:
65
+ raise Exception(f'tensors_to_np does not support type {type(tensors)}.')
66
+ return new_np
67
+
68
+
69
+ def move_to_cpu(tensors):
70
+ ret = {}
71
+ for k, v in tensors.items():
72
+ if isinstance(v, torch.Tensor):
73
+ v = v.cpu()
74
+ if type(v) is dict:
75
+ v = move_to_cpu(v)
76
+ ret[k] = v
77
+ return ret
78
+
79
+
80
+ def move_to_cuda(batch, gpu_id=0):
81
+ # base case: object can be directly moved using `cuda` or `to`
82
+ if callable(getattr(batch, 'cuda', None)):
83
+ return batch.cuda(gpu_id, non_blocking=True)
84
+ elif callable(getattr(batch, 'to', None)):
85
+ return batch.to(torch.device('cuda', gpu_id), non_blocking=True)
86
+ elif isinstance(batch, list):
87
+ for i, x in enumerate(batch):
88
+ batch[i] = move_to_cuda(x, gpu_id)
89
+ return batch
90
+ elif isinstance(batch, tuple):
91
+ batch = list(batch)
92
+ for i, x in enumerate(batch):
93
+ batch[i] = move_to_cuda(x, gpu_id)
94
+ return tuple(batch)
95
+ elif isinstance(batch, dict):
96
+ for k, v in batch.items():
97
+ batch[k] = move_to_cuda(v, gpu_id)
98
+ return batch
99
+ return batch
100
+
101
+
102
+ class AvgrageMeter(object):
103
+
104
+ def __init__(self):
105
+ self.reset()
106
+
107
+ def reset(self):
108
+ self.avg = 0
109
+ self.sum = 0
110
+ self.cnt = 0
111
+
112
+ def update(self, val, n=1):
113
+ self.sum += val * n
114
+ self.cnt += n
115
+ self.avg = self.sum / self.cnt
116
+
117
+
118
+ def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1):
119
+ """Convert a list of 1d tensors into a padded 2d tensor."""
120
+ size = max(v.size(0) for v in values) if max_len is None else max_len
121
+ res = values[0].new(len(values), size).fill_(pad_idx)
122
+
123
+ def copy_tensor(src, dst):
124
+ assert dst.numel() == src.numel()
125
+ if shift_right:
126
+ dst[1:] = src[:-1]
127
+ dst[0] = shift_id
128
+ else:
129
+ dst.copy_(src)
130
+
131
+ for i, v in enumerate(values):
132
+ copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
133
+ return res
134
+
135
+
136
+ def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None):
137
+ """Convert a list of 2d tensors into a padded 3d tensor."""
138
+ size = max(v.size(0) for v in values) if max_len is None else max_len
139
+ res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx)
140
+
141
+ def copy_tensor(src, dst):
142
+ assert dst.numel() == src.numel()
143
+ if shift_right:
144
+ dst[1:] = src[:-1]
145
+ else:
146
+ dst.copy_(src)
147
+
148
+ for i, v in enumerate(values):
        copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
    return res


def _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
    if len(batch) == 0:
        return 0
    if len(batch) == max_sentences:
        return 1
    if num_tokens > max_tokens:
        return 1
    return 0


def batch_by_size(
        indices, num_tokens_fn, max_tokens=None, max_sentences=None,
        required_batch_size_multiple=1, distributed=False
):
    """
    Yield mini-batches of indices bucketed by size. Batches may contain
    sequences of different lengths.

    Args:
        indices (List[int]): ordered list of dataset indices
        num_tokens_fn (callable): function that returns the number of tokens at
            a given index
        max_tokens (int, optional): max number of tokens in each batch
            (default: None).
        max_sentences (int, optional): max number of sentences in each
            batch (default: None).
        required_batch_size_multiple (int, optional): require batch size to
            be a multiple of N (default: 1).
        distributed (bool, optional): not used in this implementation; kept
            for API compatibility (default: False).
    """
    max_tokens = max_tokens if max_tokens is not None else sys.maxsize
    max_sentences = max_sentences if max_sentences is not None else sys.maxsize
    bsz_mult = required_batch_size_multiple

    if isinstance(indices, types.GeneratorType):
        indices = np.fromiter(indices, dtype=np.int64, count=-1)

    sample_len = 0
    sample_lens = []
    batch = []
    batches = []
    for i in range(len(indices)):
        idx = indices[i]
        num_tokens = num_tokens_fn(idx)
        sample_lens.append(num_tokens)
        sample_len = max(sample_len, num_tokens)

        assert sample_len <= max_tokens, (
            "sentence at index {} of size {} exceeds max_tokens "
            "limit of {}!".format(idx, sample_len, max_tokens)
        )
        num_tokens = (len(batch) + 1) * sample_len

        if _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
            mod_len = max(
                bsz_mult * (len(batch) // bsz_mult),
                len(batch) % bsz_mult,
            )
            batches.append(batch[:mod_len])
            batch = batch[mod_len:]
            sample_lens = sample_lens[mod_len:]
            sample_len = max(sample_lens) if len(sample_lens) > 0 else 0
        batch.append(idx)
    if len(batch) > 0:
        batches.append(batch)
    return batches


def unpack_dict_to_list(samples):
    samples_ = []
    bsz = samples.get('outputs').size(0)
    for i in range(bsz):
        res = {}
        for k, v in samples.items():
            try:
                res[k] = v[i]
            except:  # skip values that cannot be indexed per sample
                pass
        samples_.append(res)
    return samples_


def remove_padding(x, padding_idx=0):
    if x is None:
        return None
    assert len(x.shape) in [1, 2]
    if len(x.shape) == 2:  # [T, H]
        return x[np.abs(x).sum(-1) != padding_idx]
    elif len(x.shape) == 1:  # [T]
        return x[x != padding_idx]


class Timer:
    timer_map = {}

    def __init__(self, name, enable=False):
        if name not in Timer.timer_map:
            Timer.timer_map[name] = 0
        self.name = name
        self.enable = enable

    def __enter__(self):
        if self.enable:
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            self.t = time.time()

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.enable:
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            Timer.timer_map[self.name] += time.time() - self.t
            if self.enable:
                print(f'[Timer] {self.name}: {Timer.timer_map[self.name]}')


def print_arch(model, model_name='model'):
    print(f"| {model_name} Arch: ", model)
    num_params(model, model_name=model_name)


def num_params(model, print_out=True, model_name="model"):
    parameters = filter(lambda p: p.requires_grad, model.parameters())
    parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
    if print_out:
        print(f'| {model_name} Trainable Parameters: %.3fM' % parameters)
    return parameters


def get_encoding(file):
    with open(file, 'rb') as f:
        encoding = chardet.detect(f.read())['encoding']
    if encoding == 'GB2312':
        encoding = 'GB18030'
    return encoding
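For orientation, here is a minimal sketch of how `batch_by_size` buckets indices under a token budget, assuming the helpers above are in scope; the toy `lengths` list and the lambda `num_tokens_fn` are hypothetical:

```python
# Toy usage of batch_by_size; `lengths` is made-up data.
lengths = [5, 7, 3, 9, 2, 8, 4, 6]

batches = batch_by_size(
    indices=list(range(len(lengths))),
    num_tokens_fn=lambda i: lengths[i],  # tokens per sample
    max_tokens=16,    # budget compared against (len(batch) + 1) * longest sample
    max_sentences=4,  # hard cap on batch size
)
print(batches)  # lists of indices; each batch respects both limits
```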
utils/audio.py ADDED
@@ -0,0 +1,56 @@
import subprocess
import matplotlib

matplotlib.use('Agg')
import librosa
import librosa.filters
import numpy as np
from scipy import signal
from scipy.io import wavfile


def save_wav(wav, path, sr, norm=False):
    if norm:
        wav = wav / np.abs(wav).max()
    wav *= 32767
    # proposed by @dsmiller
    wavfile.write(path, sr, wav.astype(np.int16))


def get_hop_size(hparams):
    hop_size = hparams['hop_size']
    if hop_size is None:
        assert hparams['frame_shift_ms'] is not None
        hop_size = int(hparams['frame_shift_ms'] / 1000 * hparams['audio_sample_rate'])
    return hop_size


###########################################################################################
def _stft(y, hparams):
    return librosa.stft(y=y, n_fft=hparams['fft_size'], hop_length=get_hop_size(hparams),
                        win_length=hparams['win_size'], pad_mode='constant')


def _istft(y, hparams):
    return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams['win_size'])


def librosa_pad_lr(x, fsize, fshift, pad_sides=1):
    '''compute right padding (final frame) or both sides padding (first and final frames)
    '''
    assert pad_sides in (1, 2)
    # return int(fsize // 2)
    pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0]
    if pad_sides == 1:
        return 0, pad
    else:
        return pad // 2, pad // 2 + pad % 2


# Conversions
def amp_to_db(x):
    return 20 * np.log10(np.maximum(1e-5, x))


def normalize(S, hparams):
    return (S - hparams['min_level_db']) / -hparams['min_level_db']
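As a quick sanity check of this module, here is a minimal sketch; the `hparams` values below are hypothetical and only include the keys these helpers actually read:

```python
import numpy as np
from utils.audio import _stft, amp_to_db, normalize, save_wav

hparams = {'fft_size': 1024, 'hop_size': 256, 'win_size': 1024,
           'audio_sample_rate': 22050, 'frame_shift_ms': None,
           'min_level_db': -100}
y = np.random.uniform(-0.5, 0.5, 22050).astype(np.float32)  # 1 s of noise
S = amp_to_db(np.abs(_stft(y, hparams)))   # linear-frequency spectrogram in dB
S_norm = normalize(S, hparams)             # roughly mapped into [0, 1]
save_wav(y, '/tmp/noise.wav', sr=22050, norm=True)
```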
utils/ckpt_utils.py ADDED
@@ -0,0 +1,68 @@
import glob
import logging
import os
import re
import torch


def get_last_checkpoint(work_dir, steps=None):
    checkpoint = None
    last_ckpt_path = None
    ckpt_paths = get_all_ckpts(work_dir, steps)
    if len(ckpt_paths) > 0:
        last_ckpt_path = ckpt_paths[0]
        checkpoint = torch.load(last_ckpt_path, map_location='cpu')
        logging.info(f'load module from checkpoint: {last_ckpt_path}')
    return checkpoint, last_ckpt_path


def get_all_ckpts(work_dir, steps=None):
    if steps is None:
        ckpt_path_pattern = f'{work_dir}/model_ckpt_steps_*.ckpt'
    else:
        ckpt_path_pattern = f'{work_dir}/model_ckpt_steps_{steps}.ckpt'
    return sorted(glob.glob(ckpt_path_pattern),
                  key=lambda x: -int(re.findall(r'.*steps_(\d+)\.ckpt', x)[0]))


def load_ckpt(cur_model, ckpt_base_dir, model_name='model', force=True, strict=True):
    if os.path.isfile(ckpt_base_dir):
        base_dir = os.path.dirname(ckpt_base_dir)
        ckpt_path = ckpt_base_dir
        checkpoint = torch.load(ckpt_base_dir, map_location='cpu')
    else:
        base_dir = ckpt_base_dir
        checkpoint, ckpt_path = get_last_checkpoint(ckpt_base_dir)
    if checkpoint is not None:
        state_dict = checkpoint["state_dict"]
        if len([k for k in state_dict.keys() if '.' in k]) > 0:
            state_dict = {k[len(model_name) + 1:]: v for k, v in state_dict.items()
                          if k.startswith(f'{model_name}.')}
        else:
            if '.' not in model_name:
                state_dict = state_dict[model_name]
            else:
                base_model_name = model_name.split('.')[0]
                rest_model_name = model_name[len(base_model_name) + 1:]
                state_dict = {
                    k[len(rest_model_name) + 1:]: v for k, v in state_dict[base_model_name].items()
                    if k.startswith(f'{rest_model_name}.')}
        if not strict:
            cur_model_state_dict = cur_model.state_dict()
            unmatched_keys = []
            for key, param in state_dict.items():
                if key in cur_model_state_dict:
                    new_param = cur_model_state_dict[key]
                    if new_param.shape != param.shape:
                        unmatched_keys.append(key)
                        print("| Unmatched keys: ", key, new_param.shape, param.shape)
            for key in unmatched_keys:
                del state_dict[key]
        cur_model.load_state_dict(state_dict, strict=strict)
        print(f"| load '{model_name}' from '{ckpt_path}'.")
    else:
        e_msg = f"| ckpt not found in {base_dir}."
        if force:
            assert False, e_msg
        else:
            print(e_msg)
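A minimal sketch of loading the newest checkpoint from a work dir; the directory and the stand-in model are hypothetical (a real ProDiff checkpoint expects the matching architecture):

```python
import torch
from utils.ckpt_utils import get_last_checkpoint, load_ckpt

work_dir = 'checkpoints/ProDiff'  # hypothetical work dir
ckpt, ckpt_path = get_last_checkpoint(work_dir)  # highest-step model_ckpt_steps_*.ckpt
if ckpt is not None:
    model = torch.nn.Linear(4, 4)  # stand-in; use the real model class in practice
    load_ckpt(model, work_dir, model_name='model', force=False, strict=False)
```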
utils/common_schedulers.py ADDED
@@ -0,0 +1,50 @@
from utils.hparams import hparams


class NoneSchedule(object):
    def __init__(self, optimizer):
        super().__init__()
        self.optimizer = optimizer
        self.constant_lr = hparams['lr']
        self.step(0)

    def step(self, num_updates):
        self.lr = self.constant_lr
        for param_group in self.optimizer.param_groups:
            param_group['lr'] = self.lr
        return self.lr

    def get_lr(self):
        return self.optimizer.param_groups[0]['lr']

    def get_last_lr(self):
        return self.get_lr()


class RSQRTSchedule(object):
    def __init__(self, optimizer):
        super().__init__()
        self.optimizer = optimizer
        self.constant_lr = hparams['lr']
        self.warmup_updates = hparams['warmup_updates']
        self.hidden_size = hparams['hidden_size']
        self.lr = hparams['lr']
        for param_group in optimizer.param_groups:
            param_group['lr'] = self.lr
        self.step(0)

    def step(self, num_updates):
        constant_lr = self.constant_lr
        warmup = min(num_updates / self.warmup_updates, 1.0)
        rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5
        rsqrt_hidden = self.hidden_size ** -0.5
        self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7)
        for param_group in self.optimizer.param_groups:
            param_group['lr'] = self.lr
        return self.lr

    def get_lr(self):
        return self.optimizer.param_groups[0]['lr']

    def get_last_lr(self):
        return self.get_lr()
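A minimal sketch of the inverse-square-root schedule; the hparams values are set inline here only for illustration (normally they come from the YAML config chain):

```python
import torch
from utils.hparams import hparams
from utils.common_schedulers import RSQRTSchedule

hparams.update({'lr': 2.0, 'warmup_updates': 4000, 'hidden_size': 256})
optimizer = torch.optim.Adam(torch.nn.Linear(4, 4).parameters())
scheduler = RSQRTSchedule(optimizer)
for step in (100, 4000, 100000):
    # linear warmup until step 4000, then lr decays as step ** -0.5
    print(step, scheduler.step(step))
```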
utils/cwt.py ADDED
@@ -0,0 +1,146 @@
import librosa
import numpy as np
from pycwt import wavelet
from scipy.interpolate import interp1d


def load_wav(wav_file, sr):
    wav, _ = librosa.load(wav_file, sr=sr, mono=True)
    return wav


def convert_continuos_f0(f0):  # (sic) function name spelling kept for compatibility
    '''CONVERT F0 TO CONTINUOUS F0
    Args:
        f0 (ndarray): original f0 sequence with the shape (T)
    Return:
        (ndarray, ndarray): uv mask and continuous f0, each with the shape (T)
    '''
    # get uv information as binary
    f0 = np.copy(f0)
    uv = np.float32(f0 != 0)

    # get start and end of f0
    if (f0 == 0).all():
        print("| all of the f0 values are 0.")
        return uv, f0
    start_f0 = f0[f0 != 0][0]
    end_f0 = f0[f0 != 0][-1]

    # padding start and end of f0 sequence
    start_idx = np.where(f0 == start_f0)[0][0]
    end_idx = np.where(f0 == end_f0)[0][-1]
    f0[:start_idx] = start_f0
    f0[end_idx:] = end_f0

    # get non-zero frame index
    nz_frames = np.where(f0 != 0)[0]

    # perform linear interpolation
    f = interp1d(nz_frames, f0[nz_frames])
    cont_f0 = f(np.arange(0, f0.shape[0]))

    return uv, cont_f0


def get_cont_lf0(f0, frame_period=5.0):
    uv, cont_f0_lpf = convert_continuos_f0(f0)
    # cont_f0_lpf = low_pass_filter(cont_f0_lpf, int(1.0 / (frame_period * 0.001)), cutoff=20)
    cont_lf0_lpf = np.log(cont_f0_lpf)
    return uv, cont_lf0_lpf


def get_lf0_cwt(lf0):
    '''
    input:
        signal of shape (N,)
    output:
        Wavelet_lf0 of shape (N, 10), scales of shape (10,)
    '''
    mother = wavelet.MexicanHat()
    dt = 0.005
    dj = 1
    s0 = dt * 2
    J = 9

    Wavelet_lf0, scales, _, _, _, _ = wavelet.cwt(np.squeeze(lf0), dt, dj, s0, J, mother)
    # Wavelet_lf0.shape => (J + 1, len(lf0)); transposed below to (len(lf0), J + 1)
    Wavelet_lf0 = np.real(Wavelet_lf0).T
    return Wavelet_lf0, scales


def norm_scale(Wavelet_lf0):
    mean = Wavelet_lf0.mean(0)[None, :]
    std = Wavelet_lf0.std(0)[None, :]
    Wavelet_lf0_norm = (Wavelet_lf0 - mean) / std
    return Wavelet_lf0_norm, mean, std


def normalize_cwt_lf0(f0, mean, std):
    uv, cont_lf0_lpf = get_cont_lf0(f0)
    cont_lf0_norm = (cont_lf0_lpf - mean) / std
    Wavelet_lf0, scales = get_lf0_cwt(cont_lf0_norm)
    Wavelet_lf0_norm, _, _ = norm_scale(Wavelet_lf0)

    return Wavelet_lf0_norm


def get_lf0_cwt_norm(f0s, mean, std):
    uvs = list()
    cont_lf0_lpfs = list()
    cont_lf0_lpf_norms = list()
    Wavelet_lf0s = list()
    Wavelet_lf0s_norm = list()
    scaless = list()

    means = list()
    stds = list()
    for f0 in f0s:
        uv, cont_lf0_lpf = get_cont_lf0(f0)
        cont_lf0_lpf_norm = (cont_lf0_lpf - mean) / std

        Wavelet_lf0, scales = get_lf0_cwt(cont_lf0_lpf_norm)  # [560,10]
        Wavelet_lf0_norm, mean_scale, std_scale = norm_scale(Wavelet_lf0)  # [560,10],[1,10],[1,10]

        Wavelet_lf0s_norm.append(Wavelet_lf0_norm)
        uvs.append(uv)
        cont_lf0_lpfs.append(cont_lf0_lpf)
        cont_lf0_lpf_norms.append(cont_lf0_lpf_norm)
        Wavelet_lf0s.append(Wavelet_lf0)
        scaless.append(scales)
        means.append(mean_scale)
        stds.append(std_scale)

    return Wavelet_lf0s_norm, scaless, means, stds


def inverse_cwt_torch(Wavelet_lf0, scales):
    import torch
    b = ((torch.arange(0, len(scales)).float().to(Wavelet_lf0.device)[None, None, :] + 1 + 2.5) ** (-2.5))
    lf0_rec = Wavelet_lf0 * b
    lf0_rec_sum = lf0_rec.sum(-1)
    lf0_rec_sum = (lf0_rec_sum - lf0_rec_sum.mean(-1, keepdim=True)) / lf0_rec_sum.std(-1, keepdim=True)
    return lf0_rec_sum


def inverse_cwt(Wavelet_lf0, scales):
    b = ((np.arange(0, len(scales))[None, None, :] + 1 + 2.5) ** (-2.5))
    lf0_rec = Wavelet_lf0 * b
    lf0_rec_sum = lf0_rec.sum(-1)
    lf0_rec_sum = (lf0_rec_sum - lf0_rec_sum.mean(-1, keepdims=True)) / lf0_rec_sum.std(-1, keepdims=True)
    return lf0_rec_sum


def cwt2f0(cwt_spec, mean, std, cwt_scales):
    assert len(mean.shape) == 1 and len(std.shape) == 1 and len(cwt_spec.shape) == 3
    import torch
    if isinstance(cwt_spec, torch.Tensor):
        f0 = inverse_cwt_torch(cwt_spec, cwt_scales)
        f0 = f0 * std[:, None] + mean[:, None]
        f0 = f0.exp()  # [B, T]
    else:
        f0 = inverse_cwt(cwt_spec, cwt_scales)
        f0 = f0 * std[:, None] + mean[:, None]
        f0 = np.exp(f0)  # [B, T]
    return f0
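A minimal sketch of the f0 -> wavelet -> f0 round trip these helpers implement; the synthetic contour below is made up:

```python
import numpy as np
from utils.cwt import get_cont_lf0, get_lf0_cwt, norm_scale, cwt2f0

f0 = np.zeros(400, dtype=np.float32)
f0[50:350] = 220 + 40 * np.sin(np.linspace(0, 4 * np.pi, 300))  # voiced span

uv, lf0 = get_cont_lf0(f0)                     # interpolated log-f0
lf0_norm = (lf0 - lf0.mean()) / lf0.std()      # per-utterance normalization
Wavelet_lf0, scales = get_lf0_cwt(lf0_norm)    # (T, 10)
Wavelet_norm, _, _ = norm_scale(Wavelet_lf0)

# cwt2f0 expects a batch (B, T, 10) plus per-utterance mean/std of shape (B,)
f0_rec = cwt2f0(Wavelet_norm[None],
                np.array([lf0.mean()]), np.array([lf0.std()]), scales)
print(f0_rec.shape)  # (1, 400); approximate reconstruction of the contour
```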
utils/ddp_utils.py ADDED
@@ -0,0 +1,137 @@
from torch.nn.parallel import DistributedDataParallel
from torch.nn.parallel.distributed import _find_tensors
import torch.optim
import torch.utils.data
import torch
from packaging import version


class DDP(DistributedDataParallel):
    """
    Override the forward call in lightning so it goes to training and validation step respectively
    """

    def forward(self, *inputs, **kwargs):  # pragma: no cover
        # strip local build suffixes (e.g. "+cu117") before parsing the version
        if version.parse(torch.__version__.split('+')[0]) < version.parse("1.11"):
            self._sync_params()
            inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
            assert len(self.device_ids) == 1
            if self.module.training:
                output = self.module.training_step(*inputs[0], **kwargs[0])
            elif self.module.testing:
                output = self.module.test_step(*inputs[0], **kwargs[0])
            else:
                output = self.module.validation_step(*inputs[0], **kwargs[0])
            if torch.is_grad_enabled():
                # We'll return the output object verbatim since it is a freeform
                # object. We need to find any tensors in this object, though,
                # because we need to figure out which parameters were used during
                # this forward pass, to ensure we short circuit reduction for any
                # unused parameters. Only if `find_unused_parameters` is set.
                if self.find_unused_parameters:
                    self.reducer.prepare_for_backward(list(_find_tensors(output)))
                else:
                    self.reducer.prepare_for_backward([])
        else:
            from torch.nn.parallel.distributed import \
                logging, Join, _DDPSink, _tree_flatten_with_rref, _tree_unflatten_with_rref
            with torch.autograd.profiler.record_function("DistributedDataParallel.forward"):
                if torch.is_grad_enabled() and self.require_backward_grad_sync:
                    self.logger.set_runtime_stats_and_log()
                    self.num_iterations += 1
                    self.reducer.prepare_for_forward()

                # Notify the join context that this process has not joined, if
                # needed
                work = Join.notify_join_context(self)
                if work:
                    self.reducer._set_forward_pass_work_handle(
                        work, self._divide_by_initial_world_size
                    )

                # Calling _rebuild_buckets before forward computation,
                # It may allocate new buckets before deallocating old buckets
                # inside _rebuild_buckets. To save peak memory usage,
                # call _rebuild_buckets before the peak memory usage increases
                # during forward computation.
                # This should be called only once during whole training period.
                if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
                    logging.info("Reducer buckets have been rebuilt in this iteration.")
                    self._has_rebuilt_buckets = True

                # sync params according to location (before/after forward) user
                # specified as part of hook, if hook was specified.
                buffer_hook_registered = hasattr(self, 'buffer_hook')
                if self._check_sync_bufs_pre_fwd():
                    self._sync_buffers()

                if self._join_config.enable:
                    # Notify joined ranks whether they should sync in backwards pass or not.
                    self._check_global_requires_backward_grad_sync(is_joined_rank=False)

                inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
                if self.module.training:
                    output = self.module.training_step(*inputs[0], **kwargs[0])
                elif self.module.testing:
                    output = self.module.test_step(*inputs[0], **kwargs[0])
                else:
                    output = self.module.validation_step(*inputs[0], **kwargs[0])

                # sync params according to location (before/after forward) user
                # specified as part of hook, if hook was specified.
                if self._check_sync_bufs_post_fwd():
                    self._sync_buffers()

                if torch.is_grad_enabled() and self.require_backward_grad_sync:
                    self.require_forward_param_sync = True
                    # We'll return the output object verbatim since it is a freeform
                    # object. We need to find any tensors in this object, though,
                    # because we need to figure out which parameters were used during
                    # this forward pass, to ensure we short circuit reduction for any
                    # unused parameters. Only if `find_unused_parameters` is set.
                    if self.find_unused_parameters and not self.static_graph:
                        # Do not need to populate this for static graph.
                        self.reducer.prepare_for_backward(list(_find_tensors(output)))
                    else:
                        self.reducer.prepare_for_backward([])
                else:
                    self.require_forward_param_sync = False

            # TODO: DDPSink is currently enabled for unused parameter detection and
            # static graph training for first iteration.
            if (self.find_unused_parameters and not self.static_graph) or (
                    self.static_graph and self.num_iterations == 1
            ):
                state_dict = {
                    'static_graph': self.static_graph,
                    'num_iterations': self.num_iterations,
                }

                output_tensor_list, treespec, output_is_rref = _tree_flatten_with_rref(
                    output
                )
                output_placeholders = [None for _ in range(len(output_tensor_list))]
                # Do not touch tensors that have no grad_fn, which can cause issues
                # such as https://github.com/pytorch/pytorch/issues/60733
                for i, output in enumerate(output_tensor_list):
                    if torch.is_tensor(output) and output.grad_fn is None:
                        output_placeholders[i] = output

                # When find_unused_parameters=True, makes tensors which require grad
                # run through the DDPSink backward pass. When not all outputs are
                # used in loss, this makes those corresponding tensors receive
                # undefined gradient which the reducer then handles to ensure
                # param.grad field is not touched and we don't error out.
                passthrough_tensor_list = _DDPSink.apply(
                    self.reducer,
                    state_dict,
                    *output_tensor_list,
                )
                for i in range(len(output_placeholders)):
                    if output_placeholders[i] is None:
                        output_placeholders[i] = passthrough_tensor_list[i]

                # Reconstruct output data structure.
                output = _tree_unflatten_with_rref(
                    output_placeholders, treespec, output_is_rref
                )
        return output
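A minimal, hypothetical sketch of how this wrapper is meant to be used; it assumes a single-GPU launch via `torchrun --nproc_per_node=1` and a PyTorch version whose DDP internals match the override above (roughly 1.8 to 1.12). `ToyTask` and its step methods are made up:

```python
import torch
import torch.distributed as dist
from utils.ddp_utils import DDP

class ToyTask(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.testing = False  # flag the override checks besides self.training

    def training_step(self, batch):
        return self.layer(batch).mean()

    def validation_step(self, batch):
        return self.layer(batch).mean()

    def test_step(self, batch):
        return self.layer(batch).mean()

if __name__ == '__main__':
    dist.init_process_group(backend='nccl')
    task = ToyTask().cuda(0)
    ddp_task = DDP(task, device_ids=[0])
    batch = torch.randn(4, 8, device='cuda:0')
    loss = ddp_task(batch)  # routed to training_step(), not forward()
    loss.backward()
```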
utils/hparams.py ADDED
@@ -0,0 +1,124 @@
import argparse
import os
import subprocess

import yaml

global_print_hparams = True
hparams = {}


class Args:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            self.__setattr__(k, v)


def override_config(old_config: dict, new_config: dict):
    for k, v in new_config.items():
        if isinstance(v, dict) and k in old_config:
            override_config(old_config[k], new_config[k])
        else:
            old_config[k] = v


def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True):
    if config == '' and exp_name == '':
        parser = argparse.ArgumentParser(description='')
        parser.add_argument('--config', type=str, default='configs/config_base.yaml',
                            help='location of the data corpus')
        parser.add_argument('--exp_name', type=str, default='', help='exp_name')
        parser.add_argument('--hparams', type=str, default='',
                            help='location of the data corpus')
        parser.add_argument('--infer', action='store_true', help='infer')
        parser.add_argument('--validate', action='store_true', help='validate')
        parser.add_argument('--reset', action='store_true', help='reset hparams')
        parser.add_argument('--remove', action='store_true', help='remove old ckpt')
        parser.add_argument('--debug', action='store_true', help='debug')
        args, unknown = parser.parse_known_args()
    else:
        # `remove=False` added so the programmatic path has every attribute
        # the CLI path defines (args.remove is read below)
        args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
                    infer=False, validate=False, reset=False, debug=False, remove=False)
    global hparams
    assert args.config != '' or args.exp_name != ''

    config_chains = []
    loaded_config = set()

    def load_config(config_fn):  # depth-first
        if not os.path.exists(config_fn):
            return {}
        with open(config_fn) as f:
            hparams_ = yaml.safe_load(f)
        loaded_config.add(config_fn)
        if 'base_config' in hparams_:
            ret_hparams = {}
            if not isinstance(hparams_['base_config'], list):
                hparams_['base_config'] = [hparams_['base_config']]
            for c in hparams_['base_config']:
                if c.startswith('.'):
                    c = f'{os.path.dirname(config_fn)}/{c}'
                    c = os.path.normpath(c)
                if c not in loaded_config:
                    override_config(ret_hparams, load_config(c))
            override_config(ret_hparams, hparams_)
        else:
            ret_hparams = hparams_
        config_chains.append(config_fn)
        return ret_hparams

    saved_hparams = {}
    args_work_dir = ''
    if args.exp_name != '':
        args_work_dir = f'checkpoints/{args.exp_name}'
        ckpt_config_path = f'{args_work_dir}/config.yaml'
        if os.path.exists(ckpt_config_path):
            with open(ckpt_config_path) as f:
                saved_hparams.update(yaml.safe_load(f))
    hparams_ = {}
    if args.config != '':
        hparams_.update(load_config(args.config))
    if not args.reset:
        hparams_.update(saved_hparams)
    hparams_['work_dir'] = args_work_dir

    # --hparams="a=1,b.c=2,d=[1 1 1]"
    if args.hparams != "":
        for new_hparam in args.hparams.split(","):
            k, v = new_hparam.split("=")
            v = v.strip("\'\" ")
            config_node = hparams_
            for k_ in k.split(".")[:-1]:
                config_node = config_node[k_]
            k = k.split(".")[-1]
            if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]:
                if type(config_node[k]) == list:
                    v = v.replace(" ", ",")
                config_node[k] = eval(v)
            else:
                config_node[k] = type(config_node[k])(v)
    if args_work_dir != '' and args.remove:
        answer = input("REMOVE old checkpoint? Y/N [Default: N]: ")
        if answer.lower() == "y":
            subprocess.check_call(f'rm -rf {args_work_dir}', shell=True)
    if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
        os.makedirs(hparams_['work_dir'], exist_ok=True)
        with open(ckpt_config_path, 'w') as f:
            yaml.safe_dump(hparams_, f)

    hparams_['infer'] = args.infer
    hparams_['debug'] = args.debug
    hparams_['validate'] = args.validate
    hparams_['exp_name'] = args.exp_name
    global global_print_hparams
    if global_hparams:
        hparams.clear()
        hparams.update(hparams_)
    if print_hparams and global_print_hparams and global_hparams:
        print('| Hparams chains: ', config_chains)
        print('| Hparams: ')
        for i, (k, v) in enumerate(sorted(hparams_.items())):
            print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
        print("")
        global_print_hparams = False
    return hparams_
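A minimal sketch of programmatic use; the config path is the ProDiff config listed earlier in this repo, and the printed keys are assumed to be defined somewhere in its `base_config` chain:

```python
from utils.hparams import set_hparams, hparams

hp = set_hparams(config='modules/ProDiff/config/prodiff.yaml',
                 exp_name='ProDiff', print_hparams=False)
# base_config chains are resolved depth-first, then overridden by this file;
# the result is also mirrored into the global `hparams` dict.
print(hp['work_dir'])   # checkpoints/ProDiff
print(hp.get('lr'))     # assumed to exist in the config chain
```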
utils/indexed_datasets.py ADDED
@@ -0,0 +1,71 @@
import pickle
from copy import deepcopy

import numpy as np


class IndexedDataset:
    def __init__(self, path, num_cache=1):
        super().__init__()
        self.path = path
        self.data_file = None
        self.data_offsets = np.load(f"{path}.idx", allow_pickle=True).item()['offsets']
        self.data_file = open(f"{path}.data", 'rb', buffering=-1)
        self.cache = []
        self.num_cache = num_cache

    def check_index(self, i):
        if i < 0 or i >= len(self.data_offsets) - 1:
            raise IndexError('index out of range')

    def __del__(self):
        if self.data_file:
            self.data_file.close()

    def __getitem__(self, i):
        self.check_index(i)
        if self.num_cache > 0:
            for c in self.cache:
                if c[0] == i:
                    return c[1]
        self.data_file.seek(self.data_offsets[i])
        b = self.data_file.read(self.data_offsets[i + 1] - self.data_offsets[i])
        item = pickle.loads(b)
        if self.num_cache > 0:
            # keep at most num_cache most-recently-read items
            # (the original sliced `[:-1]`, which capped the cache at one item)
            self.cache = [(i, deepcopy(item))] + self.cache[:self.num_cache - 1]
        return item

    def __len__(self):
        return len(self.data_offsets) - 1


class IndexedDatasetBuilder:
    def __init__(self, path):
        self.path = path
        self.out_file = open(f"{path}.data", 'wb')
        self.byte_offsets = [0]

    def add_item(self, item):
        s = pickle.dumps(item)
        n_bytes = self.out_file.write(s)
        self.byte_offsets.append(self.byte_offsets[-1] + n_bytes)

    def finalize(self):
        self.out_file.close()
        np.save(open(f"{self.path}.idx", 'wb'), {'offsets': self.byte_offsets})


if __name__ == "__main__":
    import random
    from tqdm import tqdm
    ds_path = '/tmp/indexed_ds_example'
    size = 100
    items = [{"a": np.random.normal(size=[10000, 10]),
              "b": np.random.normal(size=[10000, 10])} for i in range(size)]
    builder = IndexedDatasetBuilder(ds_path)
    for i in tqdm(range(size)):
        builder.add_item(items[i])
    builder.finalize()
    ds = IndexedDataset(ds_path)
    for i in tqdm(range(10000)):
        idx = random.randint(0, size - 1)
        assert (ds[idx]['a'] == items[idx]['a']).all()
utils/multiprocess_utils.py ADDED
@@ -0,0 +1,143 @@
import os
import traceback
from functools import partial
from tqdm import tqdm


def chunked_worker(worker_id, args_queue=None, results_queue=None, init_ctx_func=None):
    ctx = init_ctx_func(worker_id) if init_ctx_func is not None else None
    while True:
        args = args_queue.get()
        if args == '<KILL>':
            return
        job_idx, map_func, arg = args
        try:
            map_func_ = partial(map_func, ctx=ctx) if ctx is not None else map_func
            if isinstance(arg, dict):
                res = map_func_(**arg)
            elif isinstance(arg, (list, tuple)):
                res = map_func_(*arg)
            else:
                res = map_func_(arg)
            results_queue.put((job_idx, res))
        except:
            traceback.print_exc()
            results_queue.put((job_idx, None))


class MultiprocessManager:
    def __init__(self, num_workers=None, init_ctx_func=None, multithread=False):
        if multithread:
            from multiprocessing.dummy import Queue, Process
        else:
            from multiprocessing import Queue, Process
        if num_workers is None:
            num_workers = int(os.getenv('N_PROC', os.cpu_count()))
        self.num_workers = num_workers
        self.results_queue = Queue(maxsize=-1)
        self.args_queue = Queue(maxsize=-1)
        self.workers = []
        self.total_jobs = 0
        for i in range(num_workers):
            p = Process(target=chunked_worker,
                        args=(i, self.args_queue, self.results_queue, init_ctx_func),
                        daemon=True)
            self.workers.append(p)
            p.start()

    def add_job(self, func, args):
        self.args_queue.put((self.total_jobs, func, args))
        self.total_jobs += 1

    def get_results(self):
        for w in range(self.num_workers):
            self.args_queue.put("<KILL>")
        self.n_finished = 0
        while self.n_finished < self.total_jobs:
            job_id, res = self.results_queue.get()
            yield job_id, res
            self.n_finished += 1
        for w in self.workers:
            w.join()

    def __len__(self):
        return self.total_jobs


def multiprocess_run_tqdm(map_func, args, num_workers=None, ordered=True, init_ctx_func=None,
                          multithread=False, desc=None):
    for i, res in tqdm(enumerate(
            multiprocess_run(map_func, args, num_workers, ordered, init_ctx_func, multithread)),
            total=len(args), desc=desc):
        yield i, res


def multiprocess_run(map_func, args, num_workers=None, ordered=True, init_ctx_func=None, multithread=False):
    """
    Multiprocessing running chunked jobs.
    Examples:
        >>> for res in tqdm(multiprocess_run(job_func, args)):
        >>>     print(res)
    :param map_func:
    :param args:
    :param num_workers:
    :param ordered:
    :param init_ctx_func:
    :param multithread:
    :return:
    """
    if num_workers is None:
        num_workers = int(os.getenv('N_PROC', os.cpu_count()))
    manager = MultiprocessManager(num_workers, init_ctx_func, multithread)
    for arg in args:
        manager.add_job(map_func, arg)
    if ordered:
        n_jobs = len(args)
        results = ['<WAIT>' for _ in range(n_jobs)]
        i_now = 0
        for job_i, res in manager.get_results():
            results[job_i] = res
            while i_now < n_jobs and (not isinstance(results[i_now], str) or results[i_now] != '<WAIT>'):
                yield results[i_now]
                i_now += 1
    else:
        for res in manager.get_results():
            yield res


def chunked_worker_list(worker_id, map_func, args, results_queue=None, init_ctx_func=None):
    # Worker over a pre-chunked list of (job_idx, arg) pairs; used by
    # chunked_multiprocess_run below, which does not go through an args queue.
    # (The original dispatched to `chunked_worker`, whose queue-based
    # signature no longer matches these arguments.)
    ctx = init_ctx_func(worker_id) if init_ctx_func is not None else None
    for job_idx, arg in args:
        try:
            map_func_ = partial(map_func, ctx=ctx) if ctx is not None else map_func
            if isinstance(arg, dict):
                res = map_func_(**arg)
            elif isinstance(arg, (list, tuple)):
                res = map_func_(*arg)
            else:
                res = map_func_(arg)
            results_queue.put((job_idx, res))
        except:
            traceback.print_exc()
            results_queue.put((job_idx, None))


def chunked_multiprocess_run(
        map_func, args, num_workers=None, ordered=True,
        init_ctx_func=None, q_max_size=1000, multithread=False):
    if multithread:
        from multiprocessing.dummy import Queue, Process
    else:
        from multiprocessing import Queue, Process
    args = zip(range(len(args)), args)
    args = list(args)
    n_jobs = len(args)
    if num_workers is None:
        num_workers = int(os.getenv('N_PROC', os.cpu_count()))
    results_queues = []
    if ordered:
        for i in range(num_workers):
            results_queues.append(Queue(maxsize=q_max_size // num_workers))
    else:
        results_queue = Queue(maxsize=q_max_size)
        for i in range(num_workers):
            results_queues.append(results_queue)
    workers = []
    for i in range(num_workers):
        args_worker = args[i::num_workers]
        p = Process(target=chunked_worker_list, args=(
            i, map_func, args_worker, results_queues[i], init_ctx_func), daemon=True)
        workers.append(p)
        p.start()
    for n_finished in range(n_jobs):
        results_queue = results_queues[n_finished % num_workers]
        job_idx, res = results_queue.get()
        assert job_idx == n_finished or not ordered, (job_idx, n_finished)
        yield res
    for w in workers:
        w.join()
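A minimal sketch; `square` is a hypothetical job function (it must be a picklable top-level function when `multithread=False`):

```python
from utils.multiprocess_utils import multiprocess_run_tqdm

def square(x):
    return x * x

if __name__ == '__main__':
    args = list(range(100))
    results = [None] * len(args)
    for i, res in multiprocess_run_tqdm(square, args, num_workers=4, desc='square'):
        results[i] = res  # ordered=True, so i follows the input order
    assert results[10] == 100
```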
utils/os_utils.py ADDED
@@ -0,0 +1,20 @@
import os
import subprocess


def link_file(from_file, to_file):
    subprocess.check_call(
        f'ln -s "`realpath --relative-to="{os.path.dirname(to_file)}" "{from_file}"`" "{to_file}"', shell=True)


def move_file(from_file, to_file):
    subprocess.check_call(f'mv "{from_file}" "{to_file}"', shell=True)


def copy_file(from_file, to_file):
    subprocess.check_call(f'cp -r "{from_file}" "{to_file}"', shell=True)


def remove_file(*fns):
    for f in fns:
        subprocess.check_call(f'rm -rf "{f}"', shell=True)
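These helpers shell out to POSIX tools (`ln`, `mv`, `cp`, `rm`), so they assume a Unix-like environment; a minimal sketch with hypothetical paths:

```python
from utils.os_utils import copy_file, remove_file

copy_file('checkpoints/ProDiff/config.yaml', '/tmp/config_backup.yaml')
remove_file('/tmp/config_backup.yaml')
```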
utils/pitch_utils.py ADDED
@@ -0,0 +1,76 @@
#########
# world
##########
import librosa
import numpy as np
import torch

gamma = 0
mcepInput = 3  # 0 for dB, 3 for magnitude
alpha = 0.45
en_floor = 10 ** (-80 / 20)
FFT_SIZE = 2048


f0_bin = 256
f0_max = 1100.0
f0_min = 50.0
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)


def f0_to_coarse(f0):
    is_torch = isinstance(f0, torch.Tensor)
    f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1

    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
    # np.int was removed in recent NumPy releases; the builtin int is equivalent here
    f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
    assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
    return f0_coarse


def norm_f0(f0, uv, hparams):
    is_torch = isinstance(f0, torch.Tensor)
    if hparams['pitch_norm'] == 'standard':
        f0 = (f0 - hparams['f0_mean']) / hparams['f0_std']
    if hparams['pitch_norm'] == 'log':
        f0 = torch.log2(f0) if is_torch else np.log2(f0)
    if uv is not None and hparams['use_uv']:
        f0[uv > 0] = 0
    return f0


def norm_interp_f0(f0, hparams):
    is_torch = isinstance(f0, torch.Tensor)
    if is_torch:
        device = f0.device
        f0 = f0.data.cpu().numpy()
    uv = f0 == 0
    f0 = norm_f0(f0, uv, hparams)
    if sum(uv) == len(f0):
        f0[uv] = 0
    elif sum(uv) > 0:
        f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv])
    uv = torch.FloatTensor(uv)
    f0 = torch.FloatTensor(f0)
    if is_torch:
        f0 = f0.to(device)
    return f0, uv


def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None):
    if hparams['pitch_norm'] == 'standard':
        f0 = f0 * hparams['f0_std'] + hparams['f0_mean']
    if hparams['pitch_norm'] == 'log':
        f0 = 2 ** f0
    if min is not None:
        f0 = f0.clamp(min=min)
    if max is not None:
        f0 = f0.clamp(max=max)
    if uv is not None and hparams['use_uv']:
        f0[uv > 0] = 0
    if pitch_padding is not None:
        f0[pitch_padding] = 0
    return f0
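A minimal sketch of the normalize/denormalize round trip; the hparams dict below is hypothetical and holds only the keys these functions read:

```python
import torch
from utils.pitch_utils import f0_to_coarse, norm_interp_f0, denorm_f0

hparams = {'pitch_norm': 'log', 'use_uv': True}
f0 = torch.tensor([0., 120., 130., 0., 140.])  # zeros mark unvoiced frames
f0_norm, uv = norm_interp_f0(f0, hparams)      # log2-normalize, interpolate uv gaps
f0_back = denorm_f0(f0_norm, uv, hparams)      # 2 ** f0, zeros restored at uv frames
coarse = f0_to_coarse(f0_back)                 # mel-scale bucket ids in [1, 255]
print(coarse)
```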
utils/pl_utils.py ADDED
@@ -0,0 +1,1618 @@
1
+ import matplotlib
2
+ from torch.nn import DataParallel
3
+ from torch.nn.parallel import DistributedDataParallel
4
+
5
+ matplotlib.use('Agg')
6
+ import glob
7
+ import itertools
8
+ import subprocess
9
+ import threading
10
+ import traceback
11
+
12
+ from pytorch_lightning.callbacks import GradientAccumulationScheduler
13
+ from pytorch_lightning.callbacks import ModelCheckpoint
14
+
15
+ from functools import wraps
16
+ from torch.cuda._utils import _get_device_index
17
+ import numpy as np
18
+ import torch.optim
19
+ import torch.utils.data
20
+ import copy
21
+ import logging
22
+ import os
23
+ import re
24
+ import sys
25
+ import torch
26
+ import torch.distributed as dist
27
+ import torch.multiprocessing as mp
28
+ import tqdm
29
+ from torch.optim.optimizer import Optimizer
30
+
31
+
32
+ def get_a_var(obj): # pragma: no cover
33
+ if isinstance(obj, torch.Tensor):
34
+ return obj
35
+
36
+ if isinstance(obj, list) or isinstance(obj, tuple):
37
+ for result in map(get_a_var, obj):
38
+ if isinstance(result, torch.Tensor):
39
+ return result
40
+ if isinstance(obj, dict):
41
+ for result in map(get_a_var, obj.items()):
42
+ if isinstance(result, torch.Tensor):
43
+ return result
44
+ return None
45
+
46
+
47
+ def data_loader(fn):
48
+ """
49
+ Decorator to make any fx with this use the lazy property
50
+ :param fn:
51
+ :return:
52
+ """
53
+
54
+ wraps(fn)
55
+ attr_name = '_lazy_' + fn.__name__
56
+
57
+ def _get_data_loader(self):
58
+ try:
59
+ value = getattr(self, attr_name)
60
+ except AttributeError:
61
+ try:
62
+ value = fn(self) # Lazy evaluation, done only once.
63
+ if (
64
+ value is not None and
65
+ not isinstance(value, list) and
66
+ fn.__name__ in ['test_dataloader', 'val_dataloader']
67
+ ):
68
+ value = [value]
69
+ except AttributeError as e:
70
+ # Guard against AttributeError suppression. (Issue #142)
71
+ traceback.print_exc()
72
+ error = f'{fn.__name__}: An AttributeError was encountered: ' + str(e)
73
+ raise RuntimeError(error) from e
74
+ setattr(self, attr_name, value) # Memoize evaluation.
75
+ return value
76
+
77
+ return _get_data_loader
78
+
79
+
80
+ def parallel_apply(modules, inputs, kwargs_tup=None, devices=None): # pragma: no cover
81
+ r"""Applies each `module` in :attr:`modules` in parallel on arguments
82
+ contained in :attr:`inputs` (positional) and :attr:`kwargs_tup` (keyword)
83
+ on each of :attr:`devices`.
84
+
85
+ Args:
86
+ modules (Module): modules to be parallelized
87
+ inputs (tensor): inputs to the modules
88
+ devices (list of int or torch.device): CUDA devices
89
+
90
+ :attr:`modules`, :attr:`inputs`, :attr:`kwargs_tup` (if given), and
91
+ :attr:`devices` (if given) should all have same length. Moreover, each
92
+ element of :attr:`inputs` can either be a single object as the only argument
93
+ to a module, or a collection of positional arguments.
94
+ """
95
+ assert len(modules) == len(inputs)
96
+ if kwargs_tup is not None:
97
+ assert len(modules) == len(kwargs_tup)
98
+ else:
99
+ kwargs_tup = ({},) * len(modules)
100
+ if devices is not None:
101
+ assert len(modules) == len(devices)
102
+ else:
103
+ devices = [None] * len(modules)
104
+ devices = list(map(lambda x: _get_device_index(x, True), devices))
105
+ lock = threading.Lock()
106
+ results = {}
107
+ grad_enabled = torch.is_grad_enabled()
108
+
109
+ def _worker(i, module, input, kwargs, device=None):
110
+ torch.set_grad_enabled(grad_enabled)
111
+ if device is None:
112
+ device = get_a_var(input).get_device()
113
+ try:
114
+ with torch.cuda.device(device):
115
+ # this also avoids accidental slicing of `input` if it is a Tensor
116
+ if not isinstance(input, (list, tuple)):
117
+ input = (input,)
118
+
119
+ # ---------------
120
+ # CHANGE
121
+ if module.training:
122
+ output = module.training_step(*input, **kwargs)
123
+
124
+ elif module.testing:
125
+ output = module.test_step(*input, **kwargs)
126
+
127
+ else:
128
+ output = module.validation_step(*input, **kwargs)
129
+ # ---------------
130
+
131
+ with lock:
132
+ results[i] = output
133
+ except Exception as e:
134
+ with lock:
135
+ results[i] = e
136
+
137
+ # make sure each module knows what training state it's in...
138
+ # fixes weird bug where copies are out of sync
139
+ root_m = modules[0]
140
+ for m in modules[1:]:
141
+ m.training = root_m.training
142
+ m.testing = root_m.testing
143
+
144
+ if len(modules) > 1:
145
+ threads = [threading.Thread(target=_worker,
146
+ args=(i, module, input, kwargs, device))
147
+ for i, (module, input, kwargs, device) in
148
+ enumerate(zip(modules, inputs, kwargs_tup, devices))]
149
+
150
+ for thread in threads:
151
+ thread.start()
152
+ for thread in threads:
153
+ thread.join()
154
+ else:
155
+ _worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0])
156
+
157
+ outputs = []
158
+ for i in range(len(inputs)):
159
+ output = results[i]
160
+ if isinstance(output, Exception):
161
+ raise output
162
+ outputs.append(output)
163
+ return outputs
164
+
165
+
166
+ def _find_tensors(obj): # pragma: no cover
167
+ r"""
168
+ Recursively find all tensors contained in the specified object.
169
+ """
170
+ if isinstance(obj, torch.Tensor):
171
+ return [obj]
172
+ if isinstance(obj, (list, tuple)):
173
+ return itertools.chain(*map(_find_tensors, obj))
174
+ if isinstance(obj, dict):
175
+ return itertools.chain(*map(_find_tensors, obj.values()))
176
+ return []
177
+
178
+
179
+ class DDP(DistributedDataParallel):
180
+ """
181
+ Override the forward call in lightning so it goes to training and validation step respectively
182
+ """
183
+
184
+ def parallel_apply(self, replicas, inputs, kwargs):
185
+ return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
186
+
187
+ def forward(self, *inputs, **kwargs): # pragma: no cover
188
+ self._sync_params()
189
+ if self.device_ids:
190
+ inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
191
+ if len(self.device_ids) == 1:
192
+ # --------------
193
+ # LIGHTNING MOD
194
+ # --------------
195
+ # normal
196
+ # output = self.module(*inputs[0], **kwargs[0])
197
+ # lightning
198
+ if self.module.training:
199
+ output = self.module.training_step(*inputs[0], **kwargs[0])
200
+ elif self.module.testing:
201
+ output = self.module.test_step(*inputs[0], **kwargs[0])
202
+ else:
203
+ output = self.module.validation_step(*inputs[0], **kwargs[0])
204
+ else:
205
+ outputs = self.parallel_apply(self._module_copies[:len(inputs)], inputs, kwargs)
206
+ output = self.gather(outputs, self.output_device)
207
+ else:
208
+ # normal
209
+ output = self.module(*inputs, **kwargs)
210
+
211
+ if torch.is_grad_enabled():
212
+ # We'll return the output object verbatim since it is a freeform
213
+ # object. We need to find any tensors in this object, though,
214
+ # because we need to figure out which parameters were used during
215
+ # this forward pass, to ensure we short circuit reduction for any
216
+ # unused parameters. Only if `find_unused_parameters` is set.
217
+ if self.find_unused_parameters:
218
+ self.reducer.prepare_for_backward(list(_find_tensors(output)))
219
+ else:
220
+ self.reducer.prepare_for_backward([])
221
+ return output
222
+
223
+
224
+ class DP(DataParallel):
225
+ """
226
+ Override the forward call in lightning so it goes to training and validation step respectively
227
+ """
228
+
229
+ def forward(self, *inputs, **kwargs):
230
+ if not self.device_ids:
231
+ return self.module(*inputs, **kwargs)
232
+
233
+ for t in itertools.chain(self.module.parameters(), self.module.buffers()):
234
+ if t.device != self.src_device_obj:
235
+ raise RuntimeError("module must have its parameters and buffers "
236
+ "on device {} (device_ids[0]) but found one of "
237
+ "them on device: {}".format(self.src_device_obj, t.device))
238
+
239
+ inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
240
+ if len(self.device_ids) == 1:
241
+ # lightning
242
+ if self.module.training:
243
+ return self.module.training_step(*inputs[0], **kwargs[0])
244
+ elif self.module.testing:
245
+ return self.module.test_step(*inputs[0], **kwargs[0])
246
+ else:
247
+ return self.module.validation_step(*inputs[0], **kwargs[0])
248
+
249
+ replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
250
+ outputs = self.parallel_apply(replicas, inputs, kwargs)
251
+ return self.gather(outputs, self.output_device)
252
+
253
+ def parallel_apply(self, replicas, inputs, kwargs):
254
+ return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
255
+
256
+
257
+ class GradientAccumulationScheduler:
258
+ def __init__(self, scheduling: dict):
259
+ if scheduling == {}: # empty dict error
260
+ raise TypeError("Empty dict cannot be interpreted correct")
261
+
262
+ for key in scheduling.keys():
263
+ if not isinstance(key, int) or not isinstance(scheduling[key], int):
264
+ raise TypeError("All epoches and accumulation factor must be integers")
265
+
266
+ minimal_epoch = min(scheduling.keys())
267
+ if minimal_epoch < 1:
268
+ msg = f"Epochs indexing from 1, epoch {minimal_epoch} cannot be interpreted correct"
269
+ raise IndexError(msg)
270
+ elif minimal_epoch != 1: # if user didnt define first epoch accumulation factor
271
+ scheduling.update({1: 1})
272
+
273
+ self.scheduling = scheduling
274
+ self.epochs = sorted(scheduling.keys())
275
+
276
+ def on_epoch_begin(self, epoch, trainer):
277
+ epoch += 1 # indexing epochs from 1
278
+ for i in reversed(range(len(self.epochs))):
279
+ if epoch >= self.epochs[i]:
280
+ trainer.accumulate_grad_batches = self.scheduling.get(self.epochs[i])
281
+ break
282
+
283
+
284
+ class LatestModelCheckpoint(ModelCheckpoint):
285
+ def __init__(self, filepath, monitor='val_loss', verbose=0, num_ckpt_keep=5,
286
+ save_weights_only=False, mode='auto', period=1, prefix='model', save_best=True):
287
+ super(ModelCheckpoint, self).__init__()
288
+ self.monitor = monitor
289
+ self.verbose = verbose
290
+ self.filepath = filepath
291
+ os.makedirs(filepath, exist_ok=True)
292
+ self.num_ckpt_keep = num_ckpt_keep
293
+ self.save_best = save_best
294
+ self.save_weights_only = save_weights_only
295
+ self.period = period
296
+ self.epochs_since_last_check = 0
297
+ self.prefix = prefix
298
+ self.best_k_models = {}
299
+ # {filename: monitor}
300
+ self.kth_best_model = ''
301
+ self.save_top_k = 1
302
+ self.task = None
303
+ if mode == 'min':
304
+ self.monitor_op = np.less
305
+ self.best = np.Inf
306
+ self.mode = 'min'
307
+ elif mode == 'max':
308
+ self.monitor_op = np.greater
309
+ self.best = -np.Inf
310
+ self.mode = 'max'
311
+ else:
312
+ if 'acc' in self.monitor or self.monitor.startswith('fmeasure'):
313
+ self.monitor_op = np.greater
314
+ self.best = -np.Inf
315
+ self.mode = 'max'
316
+ else:
317
+ self.monitor_op = np.less
318
+ self.best = np.Inf
319
+ self.mode = 'min'
320
+ if os.path.exists(f'{self.filepath}/best_valid.npy'):
321
+ self.best = np.load(f'{self.filepath}/best_valid.npy')[0]
322
+
323
+ def get_all_ckpts(self):
324
+ return sorted(glob.glob(f'{self.filepath}/{self.prefix}_ckpt_steps_*.ckpt'),
325
+ key=lambda x: -int(re.findall('.*steps\_(\d+)\.ckpt', x)[0]))
326
+
327
+ def on_epoch_end(self, epoch, logs=None):
328
+ logs = logs or {}
329
+ self.epochs_since_last_check += 1
330
+ best_filepath = f'{self.filepath}/{self.prefix}_ckpt_best.pt'
331
+ if self.epochs_since_last_check >= self.period:
332
+ self.epochs_since_last_check = 0
333
+ filepath = f'{self.filepath}/{self.prefix}_ckpt_steps_{self.task.global_step}.ckpt'
334
+ if self.verbose > 0:
335
+ logging.info(f'Epoch {epoch:05d}@{self.task.global_step}: saving model to {filepath}')
336
+ self._save_model(filepath)
337
+ for old_ckpt in self.get_all_ckpts()[self.num_ckpt_keep:]:
338
+ subprocess.check_call(f'rm -rf "{old_ckpt}"', shell=True)
339
+ if self.verbose > 0:
340
+ logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}')
341
+ current = logs.get(self.monitor)
342
+ if current is not None and self.save_best:
343
+ if self.monitor_op(current, self.best):
344
+ self.best = current
345
+ if self.verbose > 0:
346
+ logging.info(
347
+ f'Epoch {epoch:05d}@{self.task.global_step}: {self.monitor} reached'
348
+ f' {current:0.5f} (best {self.best:0.5f}), saving model to'
349
+ f' {best_filepath} as top 1')
350
+ self._save_model(best_filepath)
351
+ np.save(f'{self.filepath}/best_valid.npy', [self.best])
352
+
353
+
354
+ class BaseTrainer:
355
+ def __init__(
356
+ self,
357
+ logger=True,
358
+ checkpoint_callback=True,
359
+ default_save_path=None,
360
+ gradient_clip_val=0,
361
+ process_position=0,
362
+ gpus=-1,
363
+ log_gpu_memory=None,
364
+ show_progress_bar=True,
365
+ track_grad_norm=-1,
366
+ check_val_every_n_epoch=1,
367
+ accumulate_grad_batches=1,
368
+ max_updates=1000,
369
+ min_epochs=1,
370
+ val_check_interval=1.0,
371
+ log_save_interval=100,
372
+ row_log_interval=10,
373
+ print_nan_grads=False,
374
+ weights_summary='full',
375
+ num_sanity_val_steps=5,
376
+ resume_from_checkpoint=None,
377
+ ):
378
+ self.log_gpu_memory = log_gpu_memory
379
+ self.gradient_clip_val = gradient_clip_val
380
+ self.check_val_every_n_epoch = check_val_every_n_epoch
381
+ self.track_grad_norm = track_grad_norm
382
+ self.on_gpu = True if (gpus and torch.cuda.is_available()) else False
383
+ self.process_position = process_position
384
+ self.weights_summary = weights_summary
385
+ self.max_updates = max_updates
386
+ self.min_epochs = min_epochs
387
+ self.num_sanity_val_steps = num_sanity_val_steps
388
+ self.print_nan_grads = print_nan_grads
389
+ self.resume_from_checkpoint = resume_from_checkpoint
390
+ self.default_save_path = default_save_path
391
+
392
+ # training bookeeping
393
+ self.total_batch_idx = 0
394
+ self.running_loss = []
395
+ self.avg_loss = 0
396
+ self.batch_idx = 0
397
+ self.tqdm_metrics = {}
398
+ self.callback_metrics = {}
399
+ self.num_val_batches = 0
400
+ self.num_training_batches = 0
401
+ self.num_test_batches = 0
402
+ self.get_train_dataloader = None
403
+ self.get_test_dataloaders = None
404
+ self.get_val_dataloaders = None
405
+ self.is_iterable_train_dataloader = False
406
+
407
+ # training state
408
+ self.model = None
409
+ self.testing = False
410
+ self.disable_validation = False
411
+ self.lr_schedulers = []
412
+ self.optimizers = None
413
+ self.global_step = 0
414
+ self.current_epoch = 0
415
+ self.total_batches = 0
416
+
417
+ # configure checkpoint callback
418
+ self.checkpoint_callback = checkpoint_callback
419
+ self.checkpoint_callback.save_function = self.save_checkpoint
420
+ self.weights_save_path = self.checkpoint_callback.filepath
421
+
422
+ # accumulated grads
423
+ self.configure_accumulated_gradients(accumulate_grad_batches)
424
+
425
+ # allow int, string and gpu list
426
+ self.data_parallel_device_ids = [
427
+ int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != '']
428
+ if len(self.data_parallel_device_ids) == 0:
429
+ self.root_gpu = None
430
+ self.on_gpu = False
431
+ else:
432
+ self.root_gpu = self.data_parallel_device_ids[0]
433
+ self.on_gpu = True
434
+
435
+ # distributed backend choice
436
+ self.use_ddp = False
437
+ self.use_dp = False
438
+ self.single_gpu = False
439
+ self.distributed_backend = 'ddp' if self.num_gpus > 0 else 'dp'
440
+ self.set_distributed_mode(self.distributed_backend)
441
+
442
+ self.proc_rank = 0
443
+ self.world_size = 1
444
+ self.node_rank = 0
445
+
446
+ # can't init progress bar here because starting a new process
447
+ # means the progress_bar won't survive pickling
448
+ self.show_progress_bar = show_progress_bar
449
+
450
+ # logging
451
+ self.log_save_interval = log_save_interval
452
+ self.val_check_interval = val_check_interval
453
+ self.logger = logger
454
+ self.logger.rank = 0
455
+ self.row_log_interval = row_log_interval
456
+
457
+ @property
458
+ def num_gpus(self):
459
+ gpus = self.data_parallel_device_ids
460
+ if gpus is None:
461
+ return 0
462
+ else:
463
+ return len(gpus)
464
+
465
+ @property
466
+ def data_parallel(self):
467
+ return self.use_dp or self.use_ddp
468
+
469
+ def get_model(self):
470
+ is_dp_module = isinstance(self.model, (DDP, DP))
471
+ model = self.model.module if is_dp_module else self.model
472
+ return model
473
+
474
+ # -----------------------------
475
+ # MODEL TRAINING
476
+ # -----------------------------
477
+ def fit(self, model):
478
+ if self.use_ddp:
479
+ mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
480
+ else:
481
+ model.model = model.build_model()
482
+ if not self.testing:
483
+ self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
484
+ if self.use_dp:
485
+ model.cuda(self.root_gpu)
486
+ model = DP(model, device_ids=self.data_parallel_device_ids)
487
+ elif self.single_gpu:
488
+ model.cuda(self.root_gpu)
489
+ self.run_pretrain_routine(model)
490
+ return 1
491
+
492
+ def init_optimizers(self, optimizers):
493
+
494
+ # single optimizer
495
+ if isinstance(optimizers, Optimizer):
496
+ return [optimizers], []
497
+
498
+ # two lists
499
+ elif len(optimizers) == 2 and isinstance(optimizers[0], list):
500
+ optimizers, lr_schedulers = optimizers
501
+ return optimizers, lr_schedulers
502
+
503
+ # single list or tuple
504
+ elif isinstance(optimizers, list) or isinstance(optimizers, tuple):
505
+ return optimizers, []
506
+
507
+ def run_pretrain_routine(self, model):
508
+ """Sanity check a few things before starting actual training.
509
+
510
+ :param model:
511
+ """
512
+ ref_model = model
513
+ if self.data_parallel:
514
+ ref_model = model.module
515
+
516
+ # give model convenience properties
517
+ ref_model.trainer = self
518
+
519
+ # set local properties on the model
520
+ self.copy_trainer_model_properties(ref_model)
521
+
522
+ # link up experiment object
523
+ if self.logger is not None:
524
+ ref_model.logger = self.logger
525
+ self.logger.save()
526
+
527
+ if self.use_ddp:
528
+ dist.barrier()
529
+
530
+ # set up checkpoint callback
531
+ # self.configure_checkpoint_callback()
532
+
533
+ # transfer data loaders from model
534
+ self.get_dataloaders(ref_model)
535
+
536
+ # track model now.
537
+ # if cluster resets state, the model will update with the saved weights
538
+ self.model = model
539
+
540
+ # restore training and model before hpc call
541
+ self.restore_weights(model)
542
+
543
+ # when testing requested only run test and return
544
+ if self.testing:
545
+ self.run_evaluation(test=True)
546
+ return
547
+
548
+ # check if we should run validation during training
549
+ self.disable_validation = self.num_val_batches == 0
550
+
551
+ # run tiny validation (if validation defined)
552
+ # to make sure program won't crash during val
553
+ ref_model.on_sanity_check_start()
554
+ ref_model.on_train_start()
555
+ if not self.disable_validation and self.num_sanity_val_steps > 0:
556
+ # init progress bars for validation sanity check
557
+ pbar = tqdm.tqdm(desc='Validation sanity check',
558
+ total=self.num_sanity_val_steps * len(self.get_val_dataloaders()),
559
+ leave=False, position=2 * self.process_position,
560
+ disable=not self.show_progress_bar, dynamic_ncols=True, unit='batch')
561
+ self.main_progress_bar = pbar
562
+ # dummy validation progress bar
563
+ self.val_progress_bar = tqdm.tqdm(disable=True)
564
+
565
+ self.evaluate(model, self.get_val_dataloaders(), self.num_sanity_val_steps, self.testing)
566
+
567
+ # close progress bars
568
+ self.main_progress_bar.close()
569
+ self.val_progress_bar.close()
570
+
571
+ # init progress bar
572
+ pbar = tqdm.tqdm(leave=True, position=2 * self.process_position,
573
+ disable=not self.show_progress_bar, dynamic_ncols=True, unit='batch',
574
+ file=sys.stdout)
575
+ self.main_progress_bar = pbar
576
+
577
+ # clear cache before training
578
+ if self.on_gpu:
579
+ torch.cuda.empty_cache()
580
+
581
+ # CORE TRAINING LOOP
582
+ self.train()
583
+
584
+ def test(self, model):
585
+ self.testing = True
586
+ self.fit(model)
587
+
588
+ @property
589
+ def training_tqdm_dict(self):
590
+ tqdm_dict = {
591
+ 'step': '{}'.format(self.global_step),
592
+ }
593
+ tqdm_dict.update(self.tqdm_metrics)
594
+ return tqdm_dict
595
+
596
+ # --------------------
597
+ # restore ckpt
598
+ # --------------------
599
+ def restore_weights(self, model):
600
+ """
601
+ To restore weights we have two cases.
602
+ First, attempt to restore hpc weights. If successful, don't restore
603
+ other weights.
604
+
605
+ Otherwise, try to restore actual weights
606
+ :param model:
607
+ :return:
608
+ """
609
+ # clear cache before restore
610
+ if self.on_gpu:
611
+ torch.cuda.empty_cache()
612
+
613
+ if self.resume_from_checkpoint is not None:
614
+ self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu)
615
+ else:
616
+ # restore weights if same exp version
617
+ self.restore_state_if_checkpoint_exists(model)
618
+
619
+ # wait for all models to restore weights
620
+ if self.use_ddp:
621
+ # wait for all processes to catch up
622
+ dist.barrier()
623
+
624
+ # clear cache after restore
625
+ if self.on_gpu:
626
+ torch.cuda.empty_cache()
627
+
628
+ def restore_state_if_checkpoint_exists(self, model):
629
+ did_restore = False
630
+
631
+ # do nothing if there's no dir or callback
632
+ no_ckpt_callback = (self.checkpoint_callback is None) or (not self.checkpoint_callback)
633
+ if no_ckpt_callback or not os.path.exists(self.checkpoint_callback.filepath):
634
+ return did_restore
635
+
636
+ # restore trainer state and model if there is a weight for this experiment
637
+ last_steps = -1
638
+ last_ckpt_name = None
639
+
640
+ # find last epoch
641
+ checkpoints = os.listdir(self.checkpoint_callback.filepath)
642
+ for name in checkpoints:
643
+ if '.ckpt' in name and not name.endswith('part'):
644
+ if 'steps_' in name:
645
+ steps = name.split('steps_')[1]
646
+ steps = int(re.sub('[^0-9]', '', steps))
647
+
648
+ if steps > last_steps:
649
+ last_steps = steps
650
+ last_ckpt_name = name
651
+
652
+ # restore last checkpoint
653
+ if last_ckpt_name is not None:
654
+ last_ckpt_path = os.path.join(self.checkpoint_callback.filepath, last_ckpt_name)
655
+ self.restore(last_ckpt_path, self.on_gpu)
656
+ logging.info(f'model and trainer restored from checkpoint: {last_ckpt_path}')
657
+ did_restore = True
658
+
659
+ return did_restore
660
+
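The step count used to pick the latest checkpoint is parsed straight out of the filename. A standalone illustration of that parsing (the filename below is a hypothetical example of the `model_ckpt_steps_*.ckpt` naming):

```python
import re

name = "model_ckpt_steps_160000.ckpt"
# same extraction as restore_state_if_checkpoint_exists above
steps = int(re.sub('[^0-9]', '', name.split('steps_')[1]))
assert steps == 160000
```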
661
+ def restore(self, checkpoint_path, on_gpu):
662
+ checkpoint = torch.load(checkpoint_path, map_location='cpu')
663
+
664
+ # load model state
665
+ model = self.get_model()
666
+
667
+ # load the state_dict on the model automatically
668
+ model.load_state_dict(checkpoint['state_dict'], strict=False)
669
+ if on_gpu:
670
+ model.cuda(self.root_gpu)
671
+ # load training state (affects trainer only)
672
+ self.restore_training_state(checkpoint)
673
+ model.global_step = self.global_step
674
+ del checkpoint
675
+
676
+ try:
677
+ if dist.is_initialized() and dist.get_rank() > 0:
678
+ return
679
+ except Exception as e:
680
+ print(e)
681
+ return
682
+
683
+ def restore_training_state(self, checkpoint):
684
+ """
685
+ Restore trainer state.
686
+ Model will get its chance to update.
687
+ :param checkpoint:
688
+ :return:
689
+ """
690
+ if self.checkpoint_callback is not None and self.checkpoint_callback is not False:
691
+ self.checkpoint_callback.best = checkpoint['checkpoint_callback_best']
692
+
693
+ self.global_step = checkpoint['global_step']
694
+ self.current_epoch = checkpoint['epoch']
695
+
696
+ if self.testing:
697
+ return
698
+
699
+ # restore the optimizers
700
+ optimizer_states = checkpoint['optimizer_states']
701
+ for optimizer, opt_state in zip(self.optimizers, optimizer_states):
702
+ if optimizer is None:
703
+ return
704
+ optimizer.load_state_dict(opt_state)
705
+
706
+ # move optimizer to GPU 1 weight at a time
707
+ # avoids OOM
708
+ if self.root_gpu is not None:
709
+ for state in optimizer.state.values():
710
+ for k, v in state.items():
711
+ if isinstance(v, torch.Tensor):
712
+ state[k] = v.cuda(self.root_gpu)
713
+
714
+ # restore the lr schedulers
715
+ lr_schedulers = checkpoint['lr_schedulers']
716
+ for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
717
+ scheduler.load_state_dict(lrs_state)
718
+
719
+ # --------------------
720
+ # MODEL SAVE CHECKPOINT
721
+ # --------------------
722
+ def _atomic_save(self, checkpoint, filepath):
723
+ """Saves a checkpoint atomically, avoiding the creation of incomplete checkpoints.
724
+
725
+ This will create a temporary checkpoint with a suffix of ``.part``, then atomically rename it to the final
+ location once saving is finished.
727
+
728
+ Args:
729
+ checkpoint (object): The object to save.
730
+ Built to be used with the ``dump_checkpoint`` method, but can deal with anything which ``torch.save``
731
+ accepts.
732
+ filepath (str|pathlib.Path): The path to which the checkpoint will be saved.
733
+ This points to the file that the checkpoint will be stored in.
734
+ """
735
+ tmp_path = str(filepath) + ".part"
736
+ torch.save(checkpoint, tmp_path)
737
+ os.replace(tmp_path, filepath)
738
+
739
+ def save_checkpoint(self, filepath):
740
+ checkpoint = self.dump_checkpoint()
741
+ self._atomic_save(checkpoint, filepath)
742
+
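A self-contained sketch of the same write-then-rename pattern `_atomic_save` uses; `os.replace` is an atomic rename, so a crash mid-save leaves only a stray `.part` file, never a truncated checkpoint:

```python
import os
import torch

def atomic_save(obj, filepath):
    tmp_path = str(filepath) + ".part"  # write everything to a temp file first
    torch.save(obj, tmp_path)
    os.replace(tmp_path, filepath)      # atomically rename into place

atomic_save({"global_step": 0, "epoch": 0}, "demo.ckpt")
```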
743
+ def dump_checkpoint(self):
744
+
745
+ checkpoint = {
746
+ 'epoch': self.current_epoch,
747
+ 'global_step': self.global_step
748
+ }
749
+
750
+ if self.checkpoint_callback is not None and self.checkpoint_callback is not False:
751
+ checkpoint['checkpoint_callback_best'] = self.checkpoint_callback.best
752
+
753
+ # save optimizers
754
+ optimizer_states = []
755
+ for i, optimizer in enumerate(self.optimizers):
756
+ if optimizer is not None:
757
+ optimizer_states.append(optimizer.state_dict())
758
+
759
+ checkpoint['optimizer_states'] = optimizer_states
760
+
761
+ # save lr schedulers
762
+ lr_schedulers = []
763
+ for i, scheduler in enumerate(self.lr_schedulers):
764
+ lr_schedulers.append(scheduler.state_dict())
765
+
766
+ checkpoint['lr_schedulers'] = lr_schedulers
767
+
768
+ # add the hparams and state_dict from the model
769
+ model = self.get_model()
770
+ checkpoint['state_dict'] = model.state_dict()
771
+ # give the model a chance to add a few things
772
+ model.on_save_checkpoint(checkpoint)
773
+
774
+ return checkpoint
775
+
776
+ def copy_trainer_model_properties(self, model):
777
+ if isinstance(model, DP):
778
+ ref_model = model.module
779
+ elif isinstance(model, DDP):
780
+ ref_model = model.module
781
+ else:
782
+ ref_model = model
783
+
784
+ for m in [model, ref_model]:
785
+ m.trainer = self
786
+ m.on_gpu = self.on_gpu
787
+ m.use_dp = self.use_dp
788
+ m.use_ddp = self.use_ddp
789
+ m.testing = self.testing
790
+ m.single_gpu = self.single_gpu
791
+
792
+ def transfer_batch_to_gpu(self, batch, gpu_id):
793
+ # base case: object can be directly moved using `cuda` or `to`
794
+ if callable(getattr(batch, 'cuda', None)):
795
+ return batch.cuda(gpu_id, non_blocking=True)
796
+
797
+ elif callable(getattr(batch, 'to', None)):
798
+ return batch.to(torch.device('cuda', gpu_id), non_blocking=True)
799
+
800
+ # when list
801
+ elif isinstance(batch, list):
802
+ for i, x in enumerate(batch):
803
+ batch[i] = self.transfer_batch_to_gpu(x, gpu_id)
804
+ return batch
805
+
806
+ # when tuple
807
+ elif isinstance(batch, tuple):
808
+ batch = list(batch)
809
+ for i, x in enumerate(batch):
810
+ batch[i] = self.transfer_batch_to_gpu(x, gpu_id)
811
+ return tuple(batch)
812
+
813
+ # when dict
814
+ elif isinstance(batch, dict):
815
+ for k, v in batch.items():
816
+ batch[k] = self.transfer_batch_to_gpu(v, gpu_id)
817
+
818
+ return batch
819
+
820
+ # nothing matches, return the value as is without transform
821
+ return batch
822
+
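A minimal standalone re-implementation of the recursion above, runnable on CPU by swapping `.cuda(...)` for a generic `.to(device)` (the sample batch is illustrative):

```python
import torch

def to_device(batch, device):
    # same dispatch order as transfer_batch_to_gpu: tensor-like, list, tuple, dict
    if callable(getattr(batch, 'to', None)):
        return batch.to(device)
    if isinstance(batch, list):
        return [to_device(x, device) for x in batch]
    if isinstance(batch, tuple):
        return tuple(to_device(x, device) for x in batch)
    if isinstance(batch, dict):
        return {k: to_device(v, device) for k, v in batch.items()}
    return batch  # e.g. strings or ints pass through unchanged

batch = {"mel": torch.zeros(2, 80), "names": ["a", "b"], "lens": (3, 5)}
print(to_device(batch, torch.device("cpu")))
```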
823
+ def set_distributed_mode(self, distributed_backend):
824
+ # skip for CPU
825
+ if self.num_gpus == 0:
826
+ return
827
+
828
+ # single GPU case
829
+ # in single gpu case we allow ddp so we can train on multiple
830
+ # nodes, 1 gpu per node
831
+ elif self.num_gpus == 1:
832
+ self.single_gpu = True
833
+ self.use_dp = False
834
+ self.use_ddp = False
835
+ self.root_gpu = 0
836
+ self.data_parallel_device_ids = [0]
837
+ else:
838
+ if distributed_backend is not None:
839
+ self.use_dp = distributed_backend == 'dp'
840
+ self.use_ddp = distributed_backend == 'ddp'
841
+ elif distributed_backend is None:
842
+ self.use_dp = True
843
+ self.use_ddp = False
844
+
845
+ logging.info(f'gpu available: {torch.cuda.is_available()}, used: {self.on_gpu}')
846
+
847
+ def ddp_train(self, gpu_idx, model):
848
+ """
849
+ Entry point into a DDP process.
850
+ :param gpu_idx:
851
+ :param model:
852
+ :return:
854
+ """
855
+ # otherwise default to node rank 0
856
+ self.node_rank = 0
857
+
858
+ # show progress bar only on process rank 0
859
+ self.show_progress_bar = self.show_progress_bar and self.node_rank == 0 and gpu_idx == 0
860
+
861
+ # determine which process we are and world size
862
+ if self.use_ddp:
863
+ self.proc_rank = self.node_rank * self.num_gpus + gpu_idx
864
+ self.world_size = self.num_gpus
865
+
866
+ # let the exp know the rank to avoid overwriting logs
867
+ if self.logger is not None:
868
+ self.logger.rank = self.proc_rank
869
+
870
+ # set up server using proc 0's ip address
871
+ # try to init for 20 times at max in case ports are taken
872
+ # where to store ip_table
873
+ model.trainer = self
874
+ model.init_ddp_connection(self.proc_rank, self.world_size)
875
+
876
+ # CHOOSE OPTIMIZER
877
+ # allow for lr schedulers as well
878
+ model.model = model.build_model()
879
+ if not self.testing:
880
+ self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
881
+
882
+ # MODEL
883
+ # copy model to each gpu
884
+ if self.distributed_backend == 'ddp':
885
+ torch.cuda.set_device(gpu_idx)
886
+ model.cuda(gpu_idx)
887
+
888
+ # set model properties before going into wrapper
889
+ self.copy_trainer_model_properties(model)
890
+
891
+ # override root GPU
892
+ self.root_gpu = gpu_idx
893
+
894
+ if self.distributed_backend == 'ddp':
895
+ device_ids = [gpu_idx]
896
+ else:
897
+ device_ids = None
898
+
899
+ # allow user to configure ddp
900
+ model = model.configure_ddp(model, device_ids)
901
+
902
+ # continue training routine
903
+ self.run_pretrain_routine(model)
904
+
905
+ def resolve_root_node_address(self, root_node):
906
+ if '[' in root_node:
907
+ name = root_node.split('[')[0]
908
+ number = root_node.split(',')[0]
909
+ if '-' in number:
910
+ number = number.split('-')[0]
911
+
912
+ number = re.sub('[^0-9]', '', number)
913
+ root_node = name + number
914
+
915
+ return root_node
916
+
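`resolve_root_node_address` collapses a SLURM-style nodelist to its first host. A standalone copy of the same logic with illustrative inputs:

```python
import re

def resolve(root_node):
    # mirrors resolve_root_node_address above
    if '[' in root_node:
        name = root_node.split('[')[0]
        number = root_node.split(',')[0]
        if '-' in number:
            number = number.split('-')[0]
        number = re.sub('[^0-9]', '', number)
        root_node = name + number
    return root_node

assert resolve('node[015-032,040]') == 'node015'
assert resolve('gpu7') == 'gpu7'  # plain hostnames pass through
```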
917
+ def log_metrics(self, metrics, grad_norm_dic, step=None):
918
+ """Logs the metric dict passed in.
919
+
920
+ :param metrics:
921
+ :param grad_norm_dic:
922
+ """
923
+ # added metrics by Lightning for convenience
924
+ metrics['epoch'] = self.current_epoch
925
+
926
+ # add norms
927
+ metrics.update(grad_norm_dic)
928
+
929
+ # turn all tensors to scalars
930
+ scalar_metrics = self.metrics_to_scalars(metrics)
931
+
932
+ step = step if step is not None else self.global_step
933
+ # log actual metrics
934
+ if self.proc_rank == 0 and self.logger is not None:
935
+ self.logger.log_metrics(scalar_metrics, step=step)
936
+ self.logger.save()
937
+
938
+ def add_tqdm_metrics(self, metrics):
939
+ for k, v in metrics.items():
940
+ if type(v) is torch.Tensor:
941
+ v = v.item()
942
+
943
+ self.tqdm_metrics[k] = v
944
+
945
+ def metrics_to_scalars(self, metrics):
946
+ new_metrics = {}
947
+ for k, v in metrics.items():
948
+ if isinstance(v, torch.Tensor):
949
+ v = v.item()
950
+
951
+ if type(v) is dict:
952
+ v = self.metrics_to_scalars(v)
953
+
954
+ new_metrics[k] = v
955
+
956
+ return new_metrics
957
+
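A runnable sketch of the recursion in `metrics_to_scalars`, which turns every tensor leaf (including inside nested dicts) into a plain Python number:

```python
import torch

def to_scalars(metrics):
    # same recursion as metrics_to_scalars above
    out = {}
    for k, v in metrics.items():
        if isinstance(v, torch.Tensor):
            v = v.item()
        if isinstance(v, dict):
            v = to_scalars(v)
        out[k] = v
    return out

print(to_scalars({"loss": torch.tensor(0.25), "sub": {"acc": torch.tensor(0.9)}}))
```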
958
+ def process_output(self, output, train=False):
959
+ """Reduces output according to the training mode.
960
+
961
+ Separates loss from logging and tqdm metrics
962
+ :param output:
963
+ :return:
964
+ """
965
+ # ---------------
966
+ # EXTRACT CALLBACK KEYS
967
+ # ---------------
968
+ # all keys not progress_bar or log are candidates for callbacks
969
+ callback_metrics = {}
970
+ for k, v in output.items():
971
+ if k not in ['progress_bar', 'log', 'hiddens']:
972
+ callback_metrics[k] = v
973
+
974
+ if train and self.use_dp:
975
+ num_gpus = self.num_gpus
976
+ callback_metrics = self.reduce_distributed_output(callback_metrics, num_gpus)
977
+
978
+ for k, v in callback_metrics.items():
979
+ if isinstance(v, torch.Tensor):
980
+ callback_metrics[k] = v.item()
981
+
982
+ # ---------------
983
+ # EXTRACT PROGRESS BAR KEYS
984
+ # ---------------
985
+ try:
986
+ progress_output = output['progress_bar']
987
+
988
+ # reduce progress metrics for tqdm when using dp
989
+ if train and self.use_dp:
990
+ num_gpus = self.num_gpus
991
+ progress_output = self.reduce_distributed_output(progress_output, num_gpus)
992
+
993
+ progress_bar_metrics = progress_output
994
+ except Exception:
995
+ progress_bar_metrics = {}
996
+
997
+ # ---------------
998
+ # EXTRACT LOGGING KEYS
999
+ # ---------------
1000
+ # extract metrics to log to experiment
1001
+ try:
1002
+ log_output = output['log']
1003
+
1004
+ # reduce progress metrics for tqdm when using dp
1005
+ if train and self.use_dp:
1006
+ num_gpus = self.num_gpus
1007
+ log_output = self.reduce_distributed_output(log_output, num_gpus)
1008
+
1009
+ log_metrics = log_output
1010
+ except Exception:
1011
+ log_metrics = {}
1012
+
1013
+ # ---------------
1014
+ # EXTRACT LOSS
1015
+ # ---------------
1016
+ # if output dict doesn't have the keyword loss
1017
+ # then assume the output=loss if scalar
1018
+ loss = None
1019
+ if train:
1020
+ try:
1021
+ loss = output['loss']
1022
+ except Exception:
1023
+ if type(output) is torch.Tensor:
1024
+ loss = output
1025
+ else:
1026
+ raise RuntimeError(
1027
+ 'No `loss` value in the dictionary returned from `model.training_step()`.'
1028
+ )
1029
+
1030
+ # when using dp need to reduce the loss
1031
+ if self.use_dp:
1032
+ loss = self.reduce_distributed_output(loss, self.num_gpus)
1033
+
1034
+ # ---------------
1035
+ # EXTRACT HIDDEN
1036
+ # ---------------
1037
+ hiddens = output.get('hiddens')
1038
+
1039
+ # use every metric passed in as a candidate for callback
1040
+ callback_metrics.update(progress_bar_metrics)
1041
+ callback_metrics.update(log_metrics)
1042
+
1043
+ # convert tensors to numpy
1044
+ for k, v in callback_metrics.items():
1045
+ if isinstance(v, torch.Tensor):
1046
+ callback_metrics[k] = v.item()
1047
+
1048
+ return loss, progress_bar_metrics, log_metrics, callback_metrics, hiddens
1049
+
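For reference, a sketch of the dict shape `process_output` expects from `training_step` (keys and values here are illustrative): `loss` feeds the backward pass, `progress_bar` feeds tqdm, `log` goes to the logger, and remaining keys become callback metrics.

```python
import torch

output = {
    "loss": torch.tensor(1.23, requires_grad=True),  # consumed by the backward pass
    "progress_bar": {"mel_loss": 0.9},               # shown in the tqdm postfix
    "log": {"lr": 2e-4},                             # sent to the logger
    "accuracy": 0.71,                                # picked up as a callback metric
}
```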
1050
+ def reduce_distributed_output(self, output, num_gpus):
1051
+ if num_gpus <= 1:
1052
+ return output
1053
+
1054
+ # when using DP, we get one output per gpu
1055
+ # average outputs and return
1056
+ if type(output) is torch.Tensor:
1057
+ return output.mean()
1058
+
1059
+ for k, v in output.items():
1060
+ # recurse on nested dicts
1061
+ if isinstance(output[k], dict):
1062
+ output[k] = self.reduce_distributed_output(output[k], num_gpus)
1063
+
1064
+ # do nothing when there's a scalar
1065
+ elif isinstance(output[k], torch.Tensor) and output[k].dim() == 0:
1066
+ pass
1067
+
1068
+ # reduce only metrics that have the same number of gpus
1069
+ elif output[k].size(0) == num_gpus:
1070
+ reduced = torch.mean(output[k])
1071
+ output[k] = reduced
1072
+ return output
1073
+
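Under DP each metric arrives as one value per GPU; the reduction above simply averages them. A tiny illustration:

```python
import torch

per_gpu = {"loss": torch.tensor([0.2, 0.4])}  # one entry per GPU (2 GPUs here)
reduced = {k: v.mean() for k, v in per_gpu.items()}
assert torch.isclose(reduced["loss"], torch.tensor(0.3))
```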
1074
+ def clip_gradients(self):
1075
+ if self.gradient_clip_val > 0:
1076
+ model = self.get_model()
1077
+ torch.nn.utils.clip_grad_norm_(model.parameters(), self.gradient_clip_val)
1078
+
1079
+ def print_nan_gradients(self):
1080
+ model = self.get_model()
1081
+ for param in model.parameters():
1082
+ if (param.grad is not None) and torch.isnan(param.grad.float()).any():
1083
+ logging.info(f'{param}: {param.grad}')
1084
+
1085
+ def configure_accumulated_gradients(self, accumulate_grad_batches):
1086
+ self.accumulate_grad_batches = None
1087
+
1088
+ if isinstance(accumulate_grad_batches, dict):
1089
+ self.accumulation_scheduler = GradientAccumulationScheduler(accumulate_grad_batches)
1090
+ elif isinstance(accumulate_grad_batches, int):
1091
+ schedule = {1: accumulate_grad_batches}
1092
+ self.accumulation_scheduler = GradientAccumulationScheduler(schedule)
1093
+ else:
1094
+ raise TypeError("Gradient accumulation supports only int and dict types")
1095
+
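An int is shorthand for the schedule `{1: N}`; combined with the training loop below, `optimizer.step()` then only fires every N batches. A quick check of that cadence (values illustrative):

```python
accumulate_grad_batches = 4
# mirrors the condition in run_training_batch:
# (batch_idx + 1) % accumulate_grad_batches == 0
step_batches = [b for b in range(12) if (b + 1) % accumulate_grad_batches == 0]
assert step_batches == [3, 7, 11]
```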
1096
+ def get_dataloaders(self, model):
1097
+ if not self.testing:
1098
+ self.init_train_dataloader(model)
1099
+ self.init_val_dataloader(model)
1100
+ else:
1101
+ self.init_test_dataloader(model)
1102
+
1103
+ if self.use_ddp:
1104
+ dist.barrier()
1105
+ if not self.testing:
1106
+ self.get_train_dataloader()
1107
+ self.get_val_dataloaders()
1108
+ else:
1109
+ self.get_test_dataloaders()
1110
+
1111
+ def init_train_dataloader(self, model):
1112
+ self.first_epoch = True
1113
+ self.get_train_dataloader = model.train_dataloader
1114
+ if isinstance(self.get_train_dataloader(), torch.utils.data.DataLoader):
1115
+ self.num_training_batches = len(self.get_train_dataloader())
1116
+ self.num_training_batches = int(self.num_training_batches)
1117
+ else:
1118
+ self.num_training_batches = float('inf')
1119
+ self.is_iterable_train_dataloader = True
1120
+ if isinstance(self.val_check_interval, int):
1121
+ self.val_check_batch = self.val_check_interval
1122
+ else:
1123
+ self._percent_range_check('val_check_interval')
1124
+ self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
1125
+ self.val_check_batch = max(1, self.val_check_batch)
1126
+
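When `val_check_interval` is a float it is interpreted as a fraction of an epoch. A worked example of the computation above (numbers illustrative):

```python
num_training_batches = 1000
val_check_interval = 0.25  # validate four times per epoch
val_check_batch = max(1, int(num_training_batches * val_check_interval))
assert val_check_batch == 250
```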
1127
+ def init_val_dataloader(self, model):
1128
+ self.get_val_dataloaders = model.val_dataloader
1129
+ self.num_val_batches = 0
1130
+ if self.get_val_dataloaders() is not None:
1131
+ if isinstance(self.get_val_dataloaders()[0], torch.utils.data.DataLoader):
1132
+ self.num_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())
1133
+ self.num_val_batches = int(self.num_val_batches)
1134
+ else:
1135
+ self.num_val_batches = float('inf')
1136
+
1137
+ def init_test_dataloader(self, model):
1138
+ self.get_test_dataloaders = model.test_dataloader
1139
+ if self.get_test_dataloaders() is not None:
1140
+ if isinstance(self.get_test_dataloaders()[0], torch.utils.data.DataLoader):
1141
+ self.num_test_batches = sum(len(dataloader) for dataloader in self.get_test_dataloaders())
1142
+ self.num_test_batches = int(self.num_test_batches)
1143
+ else:
1144
+ self.num_test_batches = float('inf')
1145
+
1146
+ def evaluate(self, model, dataloaders, max_batches, test=False):
1147
+ """Run evaluation code.
1148
+
1149
+ :param model: PT model
1150
+ :param dataloaders: list of PT dataloaders
1151
+ :param max_batches: Scalar
1152
+ :param test: boolean
1153
+ :return:
1154
+ """
1155
+ # enable eval mode
1156
+ model.zero_grad()
1157
+ model.eval()
1158
+
1159
+ # copy properties for forward overrides
1160
+ self.copy_trainer_model_properties(model)
1161
+
1162
+ # disable gradients to save memory
1163
+ torch.set_grad_enabled(False)
1164
+
1165
+ if test:
1166
+ self.get_model().test_start()
1167
+ # bookkeeping
1168
+ outputs = []
1169
+
1170
+ # run evaluation over all dataloaders
1171
+ for dataloader_idx, dataloader in enumerate(dataloaders):
1172
+ dl_outputs = []
1173
+ for batch_idx, batch in enumerate(dataloader):
1174
+
1175
+ if batch is None: # pragma: no cover
1176
+ continue
1177
+
1178
+ # stop short when on fast_dev_run (sets max_batch=1)
1179
+ if batch_idx >= max_batches:
1180
+ break
1181
+
1182
+ # -----------------
1183
+ # RUN EVALUATION STEP
1184
+ # -----------------
1185
+ output = self.evaluation_forward(model,
1186
+ batch,
1187
+ batch_idx,
1188
+ dataloader_idx,
1189
+ test)
1190
+
1191
+ # track outputs for collation
1192
+ dl_outputs.append(output)
1193
+
1194
+ # batch done
1195
+ if test:
1196
+ self.test_progress_bar.update(1)
1197
+ else:
1198
+ self.val_progress_bar.update(1)
1199
+ outputs.append(dl_outputs)
1200
+
1201
+ # with a single dataloader don't pass an array
1202
+ if len(dataloaders) == 1:
1203
+ outputs = outputs[0]
1204
+
1205
+ # give model a chance to do something with the outputs (and method defined)
1206
+ model = self.get_model()
1207
+ if test:
1208
+ eval_results_ = model.test_end(outputs)
1209
+ else:
1210
+ eval_results_ = model.validation_end(outputs)
1211
+ eval_results = eval_results_
1212
+
1213
+ # enable train mode again
1214
+ model.train()
1215
+
1216
+ # re-enable gradients after evaluation
1217
+ torch.set_grad_enabled(True)
1218
+
1219
+ return eval_results
1220
+
1221
+ def run_evaluation(self, test=False):
1222
+ # when testing make sure user defined a test step
1223
+ model = self.get_model()
1224
+ model.on_pre_performance_check()
1225
+
1226
+ # select dataloaders
1227
+ if test:
1228
+ dataloaders = self.get_test_dataloaders()
1229
+ max_batches = self.num_test_batches
1230
+ else:
1231
+ # val
1232
+ dataloaders = self.get_val_dataloaders()
1233
+ max_batches = self.num_val_batches
1234
+
1235
+ # init validation or test progress bar
1236
+ # main progress bar will already be closed when testing so initial position is free
1237
+ position = 2 * self.process_position + (not test)
1238
+ desc = 'Testing' if test else 'Validating'
1239
+ pbar = tqdm.tqdm(desc=desc, total=max_batches, leave=test, position=position,
1240
+ disable=not self.show_progress_bar, dynamic_ncols=True,
1241
+ unit='batch', file=sys.stdout)
1242
+ setattr(self, f'{"test" if test else "val"}_progress_bar', pbar)
1243
+
1244
+ # run evaluation
1245
+ eval_results = self.evaluate(self.model,
1246
+ dataloaders,
1247
+ max_batches,
1248
+ test)
1249
+ if eval_results is not None:
1250
+ _, prog_bar_metrics, log_metrics, callback_metrics, _ = self.process_output(
1251
+ eval_results)
1252
+
1253
+ # add metrics to prog bar
1254
+ self.add_tqdm_metrics(prog_bar_metrics)
1255
+
1256
+ # log metrics
1257
+ self.log_metrics(log_metrics, {})
1258
+
1259
+ # track metrics for callbacks
1260
+ self.callback_metrics.update(callback_metrics)
1261
+
1262
+ # hook
1263
+ model.on_post_performance_check()
1264
+
1265
+ # add model specific metrics
1266
+ tqdm_metrics = self.training_tqdm_dict
1267
+ if not test:
1268
+ self.main_progress_bar.set_postfix(**tqdm_metrics)
1269
+
1270
+ # close progress bar
1271
+ if test:
1272
+ self.test_progress_bar.close()
1273
+ else:
1274
+ self.val_progress_bar.close()
1275
+
1276
+ # model checkpointing
1277
+ if self.proc_rank == 0 and self.checkpoint_callback is not None and not test:
1278
+ self.checkpoint_callback.on_epoch_end(epoch=self.current_epoch,
1279
+ logs=self.callback_metrics)
1280
+
1281
+ def evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test=False):
1282
+ # make dataloader_idx arg in validation_step optional
1283
+ args = [batch, batch_idx]
1284
+
1285
+ if test and len(self.get_test_dataloaders()) > 1:
1286
+ args.append(dataloader_idx)
1287
+
1288
+ elif not test and len(self.get_val_dataloaders()) > 1:
1289
+ args.append(dataloader_idx)
1290
+
1291
+ # handle DP, DDP forward
1292
+ if self.use_ddp or self.use_dp:
1293
+ output = model(*args)
1294
+ return output
1295
+
1296
+ # single GPU
1297
+ if self.single_gpu:
1298
+ # for single GPU put inputs on gpu manually
1299
+ root_gpu = 0
1300
+ if isinstance(self.data_parallel_device_ids, list):
1301
+ root_gpu = self.data_parallel_device_ids[0]
1302
+ batch = self.transfer_batch_to_gpu(batch, root_gpu)
1303
+ args[0] = batch
1304
+
1305
+ # CPU
1306
+ if test:
1307
+ output = model.test_step(*args)
1308
+ else:
1309
+ output = model.validation_step(*args)
1310
+
1311
+ return output
1312
+
1313
+ def train(self):
1314
+ model = self.get_model()
1315
+ # run all epochs
1316
+ for epoch in range(self.current_epoch, 1000000):
1317
+ # set seed for distributed sampler (enables shuffling for each epoch)
1318
+ if self.use_ddp and hasattr(self.get_train_dataloader().sampler, 'set_epoch'):
1319
+ self.get_train_dataloader().sampler.set_epoch(epoch)
1320
+
1321
+ # get model
1322
+ model = self.get_model()
1323
+
1324
+ # update training progress in trainer and model
1325
+ model.current_epoch = epoch
1326
+ self.current_epoch = epoch
1327
+
1328
+ total_val_batches = 0
1329
+ if not self.disable_validation:
1330
+ # val can be checked multiple times in epoch
1331
+ is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0
1332
+ val_checks_per_epoch = self.num_training_batches // self.val_check_batch
1333
+ val_checks_per_epoch = val_checks_per_epoch if is_val_epoch else 0
1334
+ total_val_batches = self.num_val_batches * val_checks_per_epoch
1335
+
1336
+ # total batches includes multiple val checks
1337
+ self.total_batches = self.num_training_batches + total_val_batches
1338
+ self.batch_loss_value = 0 # accumulated grads
1339
+
1340
+ if self.is_iterable_train_dataloader:
1341
+ # for iterable train loader, the progress bar never ends
1342
+ num_iterations = None
1343
+ else:
1344
+ num_iterations = self.total_batches
1345
+
1346
+ # reset progress bar
1347
+ # .reset() doesn't work on disabled progress bar so we should check
1348
+ desc = f'Epoch {epoch + 1}' if not self.is_iterable_train_dataloader else ''
1349
+ self.main_progress_bar.set_description(desc)
1350
+
1351
+ # update gradient accumulation according to the accumulation_scheduler
1352
+ self.accumulation_scheduler.on_epoch_begin(epoch, self)
1353
+
1354
+ # -----------------
1355
+ # RUN TNG EPOCH
1356
+ # -----------------
1357
+ self.run_training_epoch()
1358
+
1359
+ # update LR schedulers
1360
+ if self.lr_schedulers is not None:
1361
+ for lr_scheduler in self.lr_schedulers:
1362
+ lr_scheduler.step(epoch=self.current_epoch)
1363
+
1364
+ self.main_progress_bar.close()
1365
+
1366
+ model.on_train_end()
1367
+
1368
+ if self.logger is not None:
1369
+ self.logger.finalize("success")
1370
+
1371
+ def run_training_epoch(self):
1372
+ # before epoch hook
1373
+ if self.is_function_implemented('on_epoch_start'):
1374
+ model = self.get_model()
1375
+ model.on_epoch_start()
1376
+
1377
+ # run epoch
1378
+ for batch_idx, batch in enumerate(self.get_train_dataloader()):
1379
+ # stop epoch if we limited the number of training batches
1380
+ if batch_idx >= self.num_training_batches:
1381
+ break
1382
+
1383
+ self.batch_idx = batch_idx
1384
+
1385
+ model = self.get_model()
1386
+ model.global_step = self.global_step
1387
+
1388
+ # ---------------
1389
+ # RUN TRAIN STEP
1390
+ # ---------------
1391
+ output = self.run_training_batch(batch, batch_idx)
1392
+ batch_result, grad_norm_dic, batch_step_metrics = output
1393
+
1394
+ # when returning -1 from train_step, we end epoch early
1395
+ early_stop_epoch = batch_result == -1
1396
+
1397
+ # ---------------
1398
+ # RUN VAL STEP
1399
+ # ---------------
1400
+ should_check_val = (
1401
+ not self.disable_validation and self.global_step % self.val_check_batch == 0 and not self.first_epoch)
1402
+ self.first_epoch = False
1403
+
1404
+ if should_check_val:
1405
+ self.run_evaluation(test=self.testing)
1406
+
1407
+ # when logs should be saved
1408
+ should_save_log = (batch_idx + 1) % self.log_save_interval == 0 or early_stop_epoch
1409
+ if should_save_log:
1410
+ if self.proc_rank == 0 and self.logger is not None:
1411
+ self.logger.save()
1412
+
1413
+ # when metrics should be logged
1414
+ should_log_metrics = batch_idx % self.row_log_interval == 0 or early_stop_epoch
1415
+ if should_log_metrics:
1416
+ # logs user requested information to logger
1417
+ self.log_metrics(batch_step_metrics, grad_norm_dic)
1418
+
1419
+ self.global_step += 1
1420
+ self.total_batch_idx += 1
1421
+
1422
+ # end epoch early
1423
+ # stop when the flag is changed or we've gone past the amount
1424
+ # requested in the batches
1425
+ if early_stop_epoch:
1426
+ break
1427
+ if self.global_step > self.max_updates:
1428
+ print("| Training end..")
1429
+ exit()
1430
+
1431
+ # epoch end hook
1432
+ if self.is_function_implemented('on_epoch_end'):
1433
+ model = self.get_model()
1434
+ model.on_epoch_end()
1435
+
1436
+ def run_training_batch(self, batch, batch_idx):
1437
+ # track grad norms
1438
+ grad_norm_dic = {}
1439
+
1440
+ # track all metrics for callbacks
1441
+ all_callback_metrics = []
1442
+
1443
+ # track metrics to log
1444
+ all_log_metrics = []
1445
+
1446
+ if batch is None:
1447
+ return 0, grad_norm_dic, {}
1448
+
1449
+ # hook
1450
+ if self.is_function_implemented('on_batch_start'):
1451
+ model_ref = self.get_model()
1452
+ response = model_ref.on_batch_start(batch)
1453
+
1454
+ if response == -1:
1455
+ return -1, grad_norm_dic, {}
1456
+
1457
+ splits = [batch]
1458
+ self.hiddens = None
1459
+ for split_idx, split_batch in enumerate(splits):
1460
+ self.split_idx = split_idx
1461
+
1462
+ # call training_step once per optimizer
1463
+ for opt_idx, optimizer in enumerate(self.optimizers):
1464
+ if optimizer is None:
1465
+ continue
1466
+ # make sure only the gradients of the current optimizer's parameters are calculated
1467
+ # in the training step to prevent dangling gradients in multiple-optimizer setup.
1468
+ if len(self.optimizers) > 1:
1469
+ for param in self.get_model().parameters():
1470
+ param.requires_grad = False
1471
+ for group in optimizer.param_groups:
1472
+ for param in group['params']:
1473
+ param.requires_grad = True
1474
+
1475
+ # wrap the forward step in a closure so second order methods work
1476
+ def optimizer_closure():
1477
+ # forward pass
1478
+ output = self.training_forward(
1479
+ split_batch, batch_idx, opt_idx, self.hiddens)
1480
+
1481
+ closure_loss = output[0]
1482
+ progress_bar_metrics = output[1]
1483
+ log_metrics = output[2]
1484
+ callback_metrics = output[3]
1485
+ self.hiddens = output[4]
1486
+ if closure_loss is None:
1487
+ return None
1488
+
1489
+ # accumulate loss
1490
+ # (if accumulate_grad_batches = 1 no effect)
1491
+ closure_loss = closure_loss / self.accumulate_grad_batches
1492
+
1493
+ # backward pass
1494
+ model_ref = self.get_model()
1495
+ if closure_loss.requires_grad:
1496
+ model_ref.backward(closure_loss, optimizer)
1497
+
1498
+ # track metrics for callbacks
1499
+ all_callback_metrics.append(callback_metrics)
1500
+
1501
+ # track progress bar metrics
1502
+ self.add_tqdm_metrics(progress_bar_metrics)
1503
+ all_log_metrics.append(log_metrics)
1504
+
1505
+ # insert after step hook
1506
+ if self.is_function_implemented('on_after_backward'):
1507
+ model_ref = self.get_model()
1508
+ model_ref.on_after_backward()
1509
+
1510
+ return closure_loss
1511
+
1512
+ # calculate loss
1513
+ loss = optimizer_closure()
1514
+ if loss is None:
1515
+ continue
1516
+
1517
+ # nan grads
1518
+ if self.print_nan_grads:
1519
+ self.print_nan_gradients()
1520
+
1521
+ # track total loss for logging (avoid mem leaks)
1522
+ self.batch_loss_value += loss.item()
1523
+
1524
+ # gradient update with accumulated gradients
1525
+ if (self.batch_idx + 1) % self.accumulate_grad_batches == 0:
1526
+
1527
+ # track gradient norms when requested
1528
+ if batch_idx % self.row_log_interval == 0:
1529
+ if self.track_grad_norm > 0:
1530
+ model = self.get_model()
1531
+ grad_norm_dic = model.grad_norm(
1532
+ self.track_grad_norm)
1533
+
1534
+ # clip gradients
1535
+ self.clip_gradients()
1536
+
1537
+ # calls .step(), .zero_grad()
1538
+ # override function to modify this behavior
1539
+ model = self.get_model()
1540
+ model.optimizer_step(self.current_epoch, batch_idx, optimizer, opt_idx)
1541
+
1542
+ # calculate running loss for display
1543
+ self.running_loss.append(self.batch_loss_value)
1544
+ self.batch_loss_value = 0
1545
+ self.avg_loss = np.mean(self.running_loss[-100:])
1546
+
1547
+ # activate batch end hook
1548
+ if self.is_function_implemented('on_batch_end'):
1549
+ model = self.get_model()
1550
+ model.on_batch_end()
1551
+
1552
+ # update progress bar
1553
+ self.main_progress_bar.update(1)
1554
+ self.main_progress_bar.set_postfix(**self.training_tqdm_dict)
1555
+
1556
+ # collapse all metrics into one dict
1557
+ all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}
1558
+
1559
+ # track all metrics for callbacks
1560
+ self.callback_metrics.update({k: v for d in all_callback_metrics for k, v in d.items()})
1561
+
1562
+ return 0, grad_norm_dic, all_log_metrics
1563
+
1564
+ def training_forward(self, batch, batch_idx, opt_idx, hiddens):
1565
+ """
1566
+ Handle forward for each training case (distributed, single gpu, etc...)
1567
+ :param batch:
1568
+ :param batch_idx:
1569
+ :return:
1570
+ """
1571
+ # ---------------
1572
+ # FORWARD
1573
+ # ---------------
1574
+ # enable not needing to add opt_idx to training_step
1575
+ args = [batch, batch_idx, opt_idx]
1576
+
1577
+ # distributed forward
1578
+ if self.use_ddp or self.use_dp:
1579
+ output = self.model(*args)
1580
+ # single GPU forward
1581
+ elif self.single_gpu:
1582
+ gpu_id = 0
1583
+ if isinstance(self.data_parallel_device_ids, list):
1584
+ gpu_id = self.data_parallel_device_ids[0]
1585
+ batch = self.transfer_batch_to_gpu(copy.copy(batch), gpu_id)
1586
+ args[0] = batch
1587
+ output = self.model.training_step(*args)
1588
+ # CPU forward
1589
+ else:
1590
+ output = self.model.training_step(*args)
1591
+
1592
+ # allow any mode to define training_end
1593
+ model_ref = self.get_model()
1594
+ output_ = model_ref.training_end(output)
1595
+ if output_ is not None:
1596
+ output = output_
1597
+
1598
+ # format and reduce outputs accordingly
1599
+ output = self.process_output(output, train=True)
1600
+
1601
+ return output
1602
+
1603
+ # ---------------
1604
+ # Utils
1605
+ # ---------------
1606
+ def is_function_implemented(self, f_name):
1607
+ model = self.get_model()
1608
+ f_op = getattr(model, f_name, None)
1609
+ return callable(f_op)
1610
+
1611
+ def _percent_range_check(self, name):
1612
+ value = getattr(self, name)
1613
+ msg = f"`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}."
1614
+ if name == "val_check_interval":
1615
+ msg += " If you want to disable validation set `val_percent_check` to 0.0 instead."
1616
+
1617
+ if not 0. <= value <= 1.:
1618
+ raise ValueError(msg)
utils/plot.py ADDED
@@ -0,0 +1,56 @@
1
+ import matplotlib.pyplot as plt
2
+ import numpy as np
3
+ import torch
4
+
5
+ LINE_COLORS = ['w', 'r', 'y', 'cyan', 'm', 'b', 'lime']
6
+
7
+
8
+ def spec_to_figure(spec, vmin=None, vmax=None):
9
+ if isinstance(spec, torch.Tensor):
10
+ spec = spec.cpu().numpy()
11
+ fig = plt.figure(figsize=(12, 6))
12
+ plt.pcolor(spec.T, vmin=vmin, vmax=vmax)
13
+ return fig
14
+
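A possible usage sketch, assuming this module is importable as `utils.plot` from the repo root (the random spectrogram is a stand-in for a real (frames, bins) mel):

```python
import numpy as np
from utils.plot import spec_to_figure

mel = np.random.rand(120, 80)  # (frames, mel bins); plotted transposed
fig = spec_to_figure(mel, vmin=0, vmax=1)
fig.savefig("mel.png")
```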
15
+
16
+ def spec_f0_to_figure(spec, f0s, figsize=None):
17
+ max_y = spec.shape[1]
18
+ if isinstance(spec, torch.Tensor):
19
+ spec = spec.detach().cpu().numpy()
20
+ f0s = {k: f0.detach().cpu().numpy() for k, f0 in f0s.items()}
21
+ f0s = {k: f0 / 10 for k, f0 in f0s.items()}
22
+ fig = plt.figure(figsize=(12, 6) if figsize is None else figsize)
23
+ plt.pcolor(spec.T)
24
+ for i, (k, f0) in enumerate(f0s.items()):
25
+ plt.plot(f0.clip(0, max_y), label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.8)
26
+ plt.legend()
27
+ return fig
28
+
29
+
30
+ def dur_to_figure(dur_gt, dur_pred, txt):
31
+ dur_gt = dur_gt.long().cpu().numpy()
32
+ dur_pred = dur_pred.long().cpu().numpy()
33
+ dur_gt = np.cumsum(dur_gt)
34
+ dur_pred = np.cumsum(dur_pred)
35
+ fig = plt.figure(figsize=(12, 6))
36
+ for i in range(len(dur_gt)):
37
+ shift = (i % 8) + 1
38
+ plt.text(dur_gt[i], shift, txt[i])
39
+ plt.text(dur_pred[i], 10 + shift, txt[i])
40
+ plt.vlines(dur_gt[i], 0, 10, colors='b') # blue is gt
41
+ plt.vlines(dur_pred[i], 10, 20, colors='r') # red is pred
42
+ return fig
43
+
44
+
45
+ def f0_to_figure(f0_gt, f0_cwt=None, f0_pred=None):
46
+ fig = plt.figure()
47
+ f0_gt = f0_gt.cpu().numpy()
48
+ plt.plot(f0_gt, color='r', label='gt')
49
+ if f0_cwt is not None:
50
+ f0_cwt = f0_cwt.cpu().numpy()
51
+ plt.plot(f0_cwt, color='b', label='cwt')
52
+ if f0_pred is not None:
53
+ f0_pred = f0_pred.cpu().numpy()
54
+ plt.plot(f0_pred, color='green', label='pred')
55
+ plt.legend()
56
+ return fig
utils/rnnoise.py ADDED
@@ -0,0 +1,48 @@
1
+ # rnnoise.py, requirements: ffmpeg, sox, rnnoise, python
2
+ import os
3
+ import subprocess
4
+
5
+ INSTALL_STR = """
6
+ RNNoise library not found. Please install RNNoise (https://github.com/xiph/rnnoise) to $REPO/rnnoise:
7
+ sudo apt-get install -y autoconf automake libtool ffmpeg sox
8
+ git clone https://github.com/xiph/rnnoise.git
9
+ rm -rf rnnoise/.git
10
+ cd rnnoise
11
+ ./autogen.sh && ./configure && make
12
+ cd ..
13
+ """
14
+
15
+
16
+ def rnnoise(filename, out_fn=None, verbose=False, out_sample_rate=22050):
17
+ assert os.path.exists('./rnnoise/examples/rnnoise_demo'), INSTALL_STR
18
+ if out_fn is None:
19
+ out_fn = f"{filename[:-4]}.denoised.wav"
20
+ out_48k_fn = f"{out_fn}.48000.wav"
21
+ tmp0_fn = f"{out_fn}.0.wav"
22
+ tmp1_fn = f"{out_fn}.1.wav"
23
+ tmp2_fn = f"{out_fn}.2.raw"
24
+ tmp3_fn = f"{out_fn}.3.raw"
25
+ if verbose:
26
+ print("Pre-processing audio...") # wav to pcm raw
27
+ subprocess.check_call(
28
+ f'sox "{filename}" -G -r48000 "{tmp0_fn}"', shell=True, stdin=subprocess.PIPE) # convert to raw
29
+ subprocess.check_call(
30
+ f'sox -v 0.95 "{tmp0_fn}" "{tmp1_fn}"', shell=True, stdin=subprocess.PIPE) # scale volume to avoid clipping
31
+ subprocess.check_call(
32
+ f'ffmpeg -y -i "{tmp1_fn}" -loglevel quiet -f s16le -ac 1 -ar 48000 "{tmp2_fn}"',
33
+ shell=True, stdin=subprocess.PIPE) # convert to raw
34
+ if verbose:
35
+ print("Applying rnnoise algorithm to audio...") # rnnoise
36
+ subprocess.check_call(
37
+ f'./rnnoise/examples/rnnoise_demo "{tmp2_fn}" "{tmp3_fn}"', shell=True)
38
+
39
+ if verbose:
40
+ print("Post-processing audio...") # pcm raw to wav
41
+ if filename == out_fn:
42
+ subprocess.check_call(f'rm -f "{out_fn}"', shell=True)
43
+ subprocess.check_call(
44
+ f'sox -t raw -r 48000 -b 16 -e signed-integer -c 1 "{tmp3_fn}" "{out_48k_fn}"', shell=True)
45
+ subprocess.check_call(f'sox "{out_48k_fn}" -G -r{out_sample_rate} "{out_fn}"', shell=True)
46
+ subprocess.check_call(f'rm -f "{tmp0_fn}" "{tmp1_fn}" "{tmp2_fn}" "{tmp3_fn}" "{out_48k_fn}"', shell=True)
47
+ if verbose:
48
+ print("Audio-filtering completed!")
utils/text_encoder.py ADDED
@@ -0,0 +1,304 @@
1
+ import re
2
+ import six
3
+ from six.moves import range # pylint: disable=redefined-builtin
4
+
5
+ PAD = "<pad>"
6
+ EOS = "<EOS>"
7
+ UNK = "<UNK>"
8
+ SEG = "|"
9
+ RESERVED_TOKENS = [PAD, EOS, UNK]
10
+ NUM_RESERVED_TOKENS = len(RESERVED_TOKENS)
11
+ PAD_ID = RESERVED_TOKENS.index(PAD) # Normally 0
12
+ EOS_ID = RESERVED_TOKENS.index(EOS) # Normally 1
13
+ UNK_ID = RESERVED_TOKENS.index(UNK) # Normally 2
14
+
15
+ if six.PY2:
16
+ RESERVED_TOKENS_BYTES = RESERVED_TOKENS
17
+ else:
18
+ RESERVED_TOKENS_BYTES = [bytes(t, "ascii") for t in RESERVED_TOKENS]  # include <UNK> so all reserved ids decode
19
+
20
+ # Regular expression for unescaping token strings.
21
+ # '\u' is converted to '_'
22
+ # '\\' is converted to '\'
23
+ # '\213;' is converted to unichr(213)
24
+ _UNESCAPE_REGEX = re.compile(r"\\u|\\\\|\\([0-9]+);")
25
+ _ESCAPE_CHARS = set(u"\\_u;0123456789")
26
+
27
+
28
+ def strip_ids(ids, ids_to_strip):
29
+ """Strip ids_to_strip from the end ids."""
30
+ ids = list(ids)
31
+ while ids and ids[-1] in ids_to_strip:
32
+ ids.pop()
33
+ return ids
34
+
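A quick illustration, assuming the module is importable as `utils.text_encoder`: trailing reserved ids (PAD=0, EOS=1) are stripped, interior ones are kept:

```python
from utils.text_encoder import strip_ids

assert strip_ids([5, 0, 6, 1, 0, 0], ids_to_strip=[0, 1]) == [5, 0, 6]
```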
35
+
36
+ class TextEncoder(object):
37
+ """Base class for converting from ints to/from human readable strings."""
38
+
39
+ def __init__(self, num_reserved_ids=NUM_RESERVED_TOKENS):
40
+ self._num_reserved_ids = num_reserved_ids
41
+
42
+ @property
43
+ def num_reserved_ids(self):
44
+ return self._num_reserved_ids
45
+
46
+ def encode(self, s):
47
+ """Transform a human-readable string into a sequence of int ids.
48
+
49
+ The ids should be in the range [num_reserved_ids, vocab_size). Ids [0,
50
+ num_reserved_ids) are reserved.
51
+
52
+ EOS is not appended.
53
+
54
+ Args:
55
+ s: human-readable string to be converted.
56
+
57
+ Returns:
58
+ ids: list of integers
59
+ """
60
+ return [int(w) + self._num_reserved_ids for w in s.split()]
61
+
62
+ def decode(self, ids, strip_extraneous=False):
63
+ """Transform a sequence of int ids into a human-readable string.
64
+
65
+ EOS is not expected in ids.
66
+
67
+ Args:
68
+ ids: list of integers to be converted.
69
+ strip_extraneous: bool, whether to strip off extraneous tokens
70
+ (EOS and PAD).
71
+
72
+ Returns:
73
+ s: human-readable string.
74
+ """
75
+ if strip_extraneous:
76
+ ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
77
+ return " ".join(self.decode_list(ids))
78
+
79
+ def decode_list(self, ids):
80
+ """Transform a sequence of int ids into a their string versions.
81
+
82
+ This method supports transforming individual input/output ids to their
83
+ string versions so that sequence to/from text conversions can be visualized
84
+ in a human readable format.
85
+
86
+ Args:
87
+ ids: list of integers to be converted.
88
+
89
+ Returns:
90
+ strs: list of human-readable string.
91
+ """
92
+ decoded_ids = []
93
+ for id_ in ids:
94
+ if 0 <= id_ < self._num_reserved_ids:
95
+ decoded_ids.append(RESERVED_TOKENS[int(id_)])
96
+ else:
97
+ decoded_ids.append(id_ - self._num_reserved_ids)
98
+ return [str(d) for d in decoded_ids]
99
+
100
+ @property
101
+ def vocab_size(self):
102
+ raise NotImplementedError()
103
+
104
+
105
+ class ByteTextEncoder(TextEncoder):
106
+ """Encodes each byte to an id. For 8-bit strings only."""
107
+
108
+ def encode(self, s):
109
+ numres = self._num_reserved_ids
110
+ if six.PY2:
111
+ if isinstance(s, unicode):
112
+ s = s.encode("utf-8")
113
+ return [ord(c) + numres for c in s]
114
+ # Python3: explicitly convert to UTF-8
115
+ return [c + numres for c in s.encode("utf-8")]
116
+
117
+ def decode(self, ids, strip_extraneous=False):
118
+ if strip_extraneous:
119
+ ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
120
+ numres = self._num_reserved_ids
121
+ decoded_ids = []
122
+ int2byte = six.int2byte
123
+ for id_ in ids:
124
+ if 0 <= id_ < numres:
125
+ decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
126
+ else:
127
+ decoded_ids.append(int2byte(id_ - numres))
128
+ if six.PY2:
129
+ return "".join(decoded_ids)
130
+ # Python3: join byte arrays and then decode string
131
+ return b"".join(decoded_ids).decode("utf-8", "replace")
132
+
133
+ def decode_list(self, ids):
134
+ numres = self._num_reserved_ids
135
+ decoded_ids = []
136
+ int2byte = six.int2byte
137
+ for id_ in ids:
138
+ if 0 <= id_ < numres:
139
+ decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
140
+ else:
141
+ decoded_ids.append(int2byte(id_ - numres))
142
+ # Python3: return the list of byte tokens without joining
143
+ return decoded_ids
144
+
145
+ @property
146
+ def vocab_size(self):
147
+ return 2**8 + self._num_reserved_ids
148
+
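An encode/decode round trip, assuming the module is importable as `utils.text_encoder`; with the default three reserved ids, byte values are shifted up by 3:

```python
from utils.text_encoder import ByteTextEncoder

enc = ByteTextEncoder()
ids = enc.encode("hi")        # ord('h') + 3, ord('i') + 3
assert ids == [107, 108]
assert enc.decode(ids) == "hi"
```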
149
+
150
+ class ByteTextEncoderWithEos(ByteTextEncoder):
151
+ """Encodes each byte to an id and appends the EOS token."""
152
+
153
+ def encode(self, s):
154
+ return super(ByteTextEncoderWithEos, self).encode(s) + [EOS_ID]
155
+
156
+
157
+ class TokenTextEncoder(TextEncoder):
158
+ """Encoder based on a user-supplied vocabulary (file or list)."""
159
+
160
+ def __init__(self,
161
+ vocab_filename,
162
+ reverse=False,
163
+ vocab_list=None,
164
+ replace_oov=None,
165
+ num_reserved_ids=NUM_RESERVED_TOKENS):
166
+ """Initialize from a file or list, one token per line.
167
+
168
+ Handling of reserved tokens works as follows:
169
+ - When initializing from a list, we add reserved tokens to the vocab.
170
+ - When initializing from a file, we do not add reserved tokens to the vocab.
171
+ - When saving vocab files, we save reserved tokens to the file.
172
+
173
+ Args:
174
+ vocab_filename: If not None, the full filename to read vocab from. If this
175
+ is not None, then vocab_list should be None.
176
+ reverse: Boolean indicating if tokens should be reversed during encoding
177
+ and decoding.
178
+ vocab_list: If not None, a list of elements of the vocabulary. If this is
179
+ not None, then vocab_filename should be None.
180
+ replace_oov: If not None, every out-of-vocabulary token seen when
181
+ encoding will be replaced by this string (which must be in vocab).
182
+ num_reserved_ids: Number of IDs to save for reserved tokens like <EOS>.
183
+ """
184
+ super(TokenTextEncoder, self).__init__(num_reserved_ids=num_reserved_ids)
185
+ self._reverse = reverse
186
+ self._replace_oov = replace_oov
187
+ if vocab_filename:
188
+ self._init_vocab_from_file(vocab_filename)
189
+ else:
190
+ assert vocab_list is not None
191
+ self._init_vocab_from_list(vocab_list)
192
+ self.pad_index = self._token_to_id[PAD]
193
+ self.eos_index = self._token_to_id[EOS]
194
+ self.unk_index = self._token_to_id[UNK]
195
+ self.seg_index = self._token_to_id[SEG] if SEG in self._token_to_id else self.eos_index
196
+
197
+ def encode(self, s):
198
+ """Converts a space-separated string of tokens to a list of ids."""
199
+ sentence = s
200
+ tokens = sentence.strip().split()
201
+ if self._replace_oov is not None:
202
+ tokens = [t if t in self._token_to_id else self._replace_oov
203
+ for t in tokens]
204
+ ret = [self._token_to_id[tok] for tok in tokens]
205
+ return ret[::-1] if self._reverse else ret
206
+
207
+ def decode(self, ids, strip_eos=False, strip_padding=False):
208
+ if strip_padding and self.pad() in list(ids):
209
+ pad_pos = list(ids).index(self.pad())
210
+ ids = ids[:pad_pos]
211
+ if strip_eos and self.eos() in list(ids):
212
+ eos_pos = list(ids).index(self.eos())
213
+ ids = ids[:eos_pos]
214
+ return " ".join(self.decode_list(ids))
215
+
216
+ def decode_list(self, ids):
217
+ seq = reversed(ids) if self._reverse else ids
218
+ return [self._safe_id_to_token(i) for i in seq]
219
+
220
+ @property
221
+ def vocab_size(self):
222
+ return len(self._id_to_token)
223
+
224
+ def __len__(self):
225
+ return self.vocab_size
226
+
227
+ def _safe_id_to_token(self, idx):
228
+ return self._id_to_token.get(idx, "ID_%d" % idx)
229
+
230
+ def _init_vocab_from_file(self, filename):
231
+ """Load vocab from a file.
232
+
233
+ Args:
234
+ filename: The file to load vocabulary from.
235
+ """
236
+ with open(filename) as f:
237
+ tokens = [token.strip() for token in f.readlines()]
238
+
239
+ def token_gen():
240
+ for token in tokens:
241
+ yield token
242
+
243
+ self._init_vocab(token_gen(), add_reserved_tokens=False)
244
+
245
+ def _init_vocab_from_list(self, vocab_list):
246
+ """Initialize tokens from a list of tokens.
247
+
248
+ It is ok if reserved tokens appear in the vocab list. They will be
249
+ removed. The set of tokens in vocab_list should be unique.
250
+
251
+ Args:
252
+ vocab_list: A list of tokens.
253
+ """
254
+ def token_gen():
255
+ for token in vocab_list:
256
+ if token not in RESERVED_TOKENS:
257
+ yield token
258
+
259
+ self._init_vocab(token_gen())
260
+
261
+ def _init_vocab(self, token_generator, add_reserved_tokens=True):
262
+ """Initialize vocabulary with tokens from token_generator."""
263
+
264
+ self._id_to_token = {}
265
+ non_reserved_start_index = 0
266
+
267
+ if add_reserved_tokens:
268
+ self._id_to_token.update(enumerate(RESERVED_TOKENS))
269
+ non_reserved_start_index = len(RESERVED_TOKENS)
270
+
271
+ self._id_to_token.update(
272
+ enumerate(token_generator, start=non_reserved_start_index))
273
+
274
+ # _token_to_id is the reverse of _id_to_token
275
+ self._token_to_id = dict((v, k)
276
+ for k, v in six.iteritems(self._id_to_token))
277
+
278
+ def pad(self):
279
+ return self.pad_index
280
+
281
+ def eos(self):
282
+ return self.eos_index
283
+
284
+ def unk(self):
285
+ return self.unk_index
286
+
287
+ def seg(self):
288
+ return self.seg_index
289
+
290
+ def store_to_file(self, filename):
291
+ """Write vocab file to disk.
292
+
293
+ Vocab files have one token per line. The file ends in a newline. Reserved
294
+ tokens are written to the vocab file as well.
295
+
296
+ Args:
297
+ filename: Full path of the file to store the vocab to.
298
+ """
299
+ with open(filename, "w") as f:
300
+ for i in range(len(self._id_to_token)):
301
+ f.write(self._id_to_token[i] + "\n")
302
+
303
+ def sil_phonemes(self):
304
+ return [p for p in self._id_to_token.values() if not p[0].isalpha()]
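A small round trip with a list-initialized vocabulary, assuming the module is importable as `utils.text_encoder` (the toy phoneme vocab is illustrative); reserved tokens occupy ids 0-2, so user tokens start at 3:

```python
from utils.text_encoder import TokenTextEncoder

enc = TokenTextEncoder(None, vocab_list=["AH0", "B", "|"])
ids = enc.encode("B AH0")
assert ids == [4, 3]
assert enc.decode(ids) == "B AH0"
```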
utils/text_norm.py ADDED
@@ -0,0 +1,790 @@
1
+ # coding=utf-8
2
+ # Authors:
3
+ # 2019.5 Zhiyang Zhou (https://github.com/Joee1995/chn_text_norm.git)
4
+ # 2019.9 Jiayu DU
5
+ #
6
+ # requirements:
7
+ # - python 3.X
8
+ # notes: python 2.X WILL fail or produce misleading results
9
+
10
+ import sys, os, argparse, codecs, string, re
11
+
12
+ # ================================================================================ #
13
+ # basic constant
14
+ # ================================================================================ #
15
+ CHINESE_DIGIS = u'零一二三四五六七八九'
16
+ BIG_CHINESE_DIGIS_SIMPLIFIED = u'零壹贰叁肆伍陆柒捌玖'
17
+ BIG_CHINESE_DIGIS_TRADITIONAL = u'零壹貳參肆伍陸柒捌玖'
18
+ SMALLER_BIG_CHINESE_UNITS_SIMPLIFIED = u'十百千万'
19
+ SMALLER_BIG_CHINESE_UNITS_TRADITIONAL = u'拾佰仟萬'
20
+ LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'亿兆京垓秭穰沟涧正载'
21
+ LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'億兆京垓秭穰溝澗正載'
22
+ SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'十百千万'
23
+ SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'拾佰仟萬'
24
+
25
+ ZERO_ALT = u'〇'
26
+ ONE_ALT = u'幺'
27
+ TWO_ALTS = [u'两', u'兩']
28
+
29
+ POSITIVE = [u'正', u'正']
30
+ NEGATIVE = [u'负', u'負']
31
+ POINT = [u'点', u'點']
32
+ # PLUS = [u'加', u'加']
33
+ # SIL = [u'杠', u'槓']
34
+
35
+ # Chinese numbering system types
36
+ NUMBERING_TYPES = ['low', 'mid', 'high']
37
+
38
+ CURRENCY_NAMES = '(人民币|美元|日元|英镑|欧元|马克|法郎|加拿大元|澳元|港币|先令|芬兰马克|爱尔兰镑|' \
39
+ '里拉|荷兰盾|埃斯库多|比塞塔|印尼盾|林吉特|新西兰元|比索|卢布|新加坡元|韩元|泰铢)'
40
+ CURRENCY_UNITS = '((亿|千万|百万|万|千|百)|(亿|千万|百万|万|千|百|)元|(亿|千万|百万|万|千|百|)块|角|毛|分)'
41
+ COM_QUANTIFIERS = '(匹|张|座|回|场|尾|条|个|首|阙|阵|网|炮|顶|丘|棵|只|支|袭|辆|挑|担|颗|壳|窠|曲|墙|群|腔|' \
42
+ '砣|座|客|贯|扎|捆|刀|令|打|手|罗|坡|山|岭|江|溪|钟|队|单|双|对|出|口|头|脚|板|跳|枝|件|贴|' \
43
+ '针|线|管|名|位|身|堂|课|本|页|家|户|层|丝|毫|厘|分|钱|两|斤|担|铢|石|钧|锱|忽|(千|毫|微)克|' \
44
+ '毫|厘|分|寸|尺|丈|里|寻|常|铺|程|(千|分|厘|毫|微)米|撮|勺|合|升|斗|石|盘|碗|碟|叠|桶|笼|盆|' \
45
+ '盒|杯|钟|斛|锅|簋|篮|盘|桶|罐|瓶|壶|卮|盏|箩|箱|煲|啖|袋|钵|年|月|日|季|刻|时|周|天|秒|分|旬|' \
46
+ '纪|岁|世|更|夜|春|夏|秋|冬|代|伏|辈|丸|泡|粒|颗|幢|堆|条|根|支|道|面|片|张|颗|块)'
47
+
48
+ # punctuation information are based on Zhon project (https://github.com/tsroten/zhon.git)
49
+ CHINESE_PUNC_STOP = '!?。。'
50
+ CHINESE_PUNC_NON_STOP = '"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏'
51
+ CHINESE_PUNC_LIST = CHINESE_PUNC_STOP + CHINESE_PUNC_NON_STOP
52
+
53
+
54
+ # ================================================================================ #
55
+ # basic class
56
+ # ================================================================================ #
57
+ class ChineseChar(object):
58
+ """
59
+ A Chinese character.
+ Each character has a simplified and a traditional form,
+ e.g. simplified = '负', traditional = '負',
+ and can be converted to either form.
63
+ """
64
+
65
+ def __init__(self, simplified, traditional):
66
+ self.simplified = simplified
67
+ self.traditional = traditional
68
+ # self.__repr__ = self.__str__
69
+
70
+ def __str__(self):
71
+ return self.simplified or self.traditional or None
72
+
73
+ def __repr__(self):
74
+ return self.__str__()
75
+
76
+
77
+ class ChineseNumberUnit(ChineseChar):
78
+ """
79
+ A Chinese number-unit character.
+ Besides the simplified/traditional forms, each character has an extra "big" (formal) form,
+ e.g. '陆' and '陸'.
82
+ """
83
+
84
+ def __init__(self, power, simplified, traditional, big_s, big_t):
85
+ super(ChineseNumberUnit, self).__init__(simplified, traditional)
86
+ self.power = power
87
+ self.big_s = big_s
88
+ self.big_t = big_t
89
+
90
+ def __str__(self):
91
+ return '10^{}'.format(self.power)
92
+
93
+ @classmethod
94
+ def create(cls, index, value, numbering_type=NUMBERING_TYPES[1], small_unit=False):
95
+
96
+ if small_unit:
97
+ return ChineseNumberUnit(power=index + 1,
98
+ simplified=value[0], traditional=value[1], big_s=value[1], big_t=value[1])
99
+ elif numbering_type == NUMBERING_TYPES[0]:
100
+ return ChineseNumberUnit(power=index + 8,
101
+ simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
102
+ elif numbering_type == NUMBERING_TYPES[1]:
103
+ return ChineseNumberUnit(power=(index + 2) * 4,
104
+ simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
105
+ elif numbering_type == NUMBERING_TYPES[2]:
106
+ return ChineseNumberUnit(power=pow(2, index + 3),
107
+ simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1])
108
+ else:
109
+ raise ValueError(
110
+ 'Counting type should be in {0} ({1} provided).'.format(NUMBERING_TYPES, numbering_type))
111
+
112
+
113
+ class ChineseNumberDigit(ChineseChar):
114
+ """
115
+ A Chinese digit character.
116
+ """
117
+
118
+ def __init__(self, value, simplified, traditional, big_s, big_t, alt_s=None, alt_t=None):
119
+ super(ChineseNumberDigit, self).__init__(simplified, traditional)
120
+ self.value = value
121
+ self.big_s = big_s
122
+ self.big_t = big_t
123
+ self.alt_s = alt_s
124
+ self.alt_t = alt_t
125
+
126
+ def __str__(self):
127
+ return str(self.value)
128
+
129
+ @classmethod
130
+ def create(cls, i, v):
131
+ return ChineseNumberDigit(i, v[0], v[1], v[2], v[3])
132
+
133
+
134
+ class ChineseMath(ChineseChar):
135
+ """
136
+ A Chinese math-symbol character.
137
+ """
138
+
139
+ def __init__(self, simplified, traditional, symbol, expression=None):
140
+ super(ChineseMath, self).__init__(simplified, traditional)
141
+ self.symbol = symbol
142
+ self.expression = expression
143
+ self.big_s = simplified
144
+ self.big_t = traditional
145
+
146
+
147
+ CC, CNU, CND, CM = ChineseChar, ChineseNumberUnit, ChineseNumberDigit, ChineseMath
148
+
149
+
150
+ class NumberSystem(object):
151
+ """
152
+ The Chinese numbering system.
153
+ """
154
+ pass
155
+
156
+
157
+ class MathSymbol(object):
158
+ """
159
+ Math symbols used by the Chinese numbering system (simplified/traditional), e.g.
160
+ positive = ['正', '正']
161
+ negative = ['负', '負']
162
+ point = ['点', '點']
163
+ """
164
+
165
+ def __init__(self, positive, negative, point):
166
+ self.positive = positive
167
+ self.negative = negative
168
+ self.point = point
169
+
170
+ def __iter__(self):
171
+ for v in self.__dict__.values():
172
+ yield v
173
+
174
+
175
+ # class OtherSymbol(object):
176
+ # """
177
+ # Other symbols
178
+ # """
179
+ #
180
+ # def __init__(self, sil):
181
+ # self.sil = sil
182
+ #
183
+ # def __iter__(self):
184
+ # for v in self.__dict__.values():
185
+ # yield v
186
+
187
+
188
+ # ================================================================================ #
189
+ # basic utils
190
+ # ================================================================================ #
191
+ def create_system(numbering_type=NUMBERING_TYPES[1]):
192
+ """
193
+ 根据数字系统类型返回创建相应的数字系统,默认为 mid
194
+ NUMBERING_TYPES = ['low', 'mid', 'high']: 中文数字系统类型
195
+ low: '兆' = '亿' * '十' = $10^{9}$, '京' = '兆' * '十', etc.
196
+ mid: '兆' = '亿' * '万' = $10^{12}$, '京' = '兆' * '万', etc.
197
+ high: '兆' = '亿' * '亿' = $10^{16}$, '京' = '兆' * '兆', etc.
198
+ Returns the corresponding number system.
199
+ """
200
+
201
+ # chinese number units of '亿' and larger
202
+ all_larger_units = zip(
203
+ LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED, LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL)
204
+ larger_units = [CNU.create(i, v, numbering_type, False)
205
+ for i, v in enumerate(all_larger_units)]
206
+ # chinese number units of '十, 百, 千, 万'
207
+ all_smaller_units = zip(
208
+ SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED, SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL)
209
+ smaller_units = [CNU.create(i, v, small_unit=True)
210
+ for i, v in enumerate(all_smaller_units)]
211
+ # digits
212
+ chinese_digis = zip(CHINESE_DIGIS, CHINESE_DIGIS,
213
+ BIG_CHINESE_DIGIS_SIMPLIFIED, BIG_CHINESE_DIGIS_TRADITIONAL)
214
+ digits = [CND.create(i, v) for i, v in enumerate(chinese_digis)]
215
+ digits[0].alt_s, digits[0].alt_t = ZERO_ALT, ZERO_ALT
216
+ digits[1].alt_s, digits[1].alt_t = ONE_ALT, ONE_ALT
217
+ digits[2].alt_s, digits[2].alt_t = TWO_ALTS[0], TWO_ALTS[1]
218
+
219
+ # symbols
220
+ positive_cn = CM(POSITIVE[0], POSITIVE[1], '+', lambda x: x)
221
+ negative_cn = CM(NEGATIVE[0], NEGATIVE[1], '-', lambda x: -x)
222
+ point_cn = CM(POINT[0], POINT[1], '.', lambda x,
223
+ y: float(str(x) + '.' + str(y)))
224
+ # sil_cn = CM(SIL[0], SIL[1], '-', lambda x, y: float(str(x) + '-' + str(y)))
225
+ system = NumberSystem()
226
+ system.units = smaller_units + larger_units
227
+ system.digits = digits
228
+ system.math = MathSymbol(positive_cn, negative_cn, point_cn)
229
+ # system.symbols = OtherSymbol(sil_cn)
230
+ return system
231
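+ # Minimal usage sketch (hand-derived, assuming NUMBERING_TYPES = ['low', 'mid', 'high']):
+ # >>> system = create_system('mid')
+ # >>> str(system.units[3]), str(system.units[4])
+ # ('10^4', '10^8')    # i.e. 万 and 亿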
+
232
+
233
+ def chn2num(chinese_string, numbering_type=NUMBERING_TYPES[1]):
234
+ def get_symbol(char, system):
235
+ for u in system.units:
236
+ if char in [u.traditional, u.simplified, u.big_s, u.big_t]:
237
+ return u
238
+ for d in system.digits:
239
+ if char in [d.traditional, d.simplified, d.big_s, d.big_t, d.alt_s, d.alt_t]:
240
+ return d
241
+ for m in system.math:
242
+ if char in [m.traditional, m.simplified]:
243
+ return m
244
+
245
+ def string2symbols(chinese_string, system):
246
+ int_string, dec_string = chinese_string, ''
247
+ for p in [system.math.point.simplified, system.math.point.traditional]:
248
+ if p in chinese_string:
249
+ int_string, dec_string = chinese_string.split(p)
250
+ break
251
+ return [get_symbol(c, system) for c in int_string], \
252
+ [get_symbol(c, system) for c in dec_string]
253
+
254
+ def correct_symbols(integer_symbols, system):
255
+ """
256
+ Complete elided units: 一百八 to 一百八十
+ Split compound units: 一亿一千三百万 to 一亿 一千万 三百万
258
+ """
259
+
260
+ if integer_symbols and isinstance(integer_symbols[0], CNU):
261
+ if integer_symbols[0].power == 1:
262
+ integer_symbols = [system.digits[1]] + integer_symbols
263
+
264
+ if len(integer_symbols) > 1:
265
+ if isinstance(integer_symbols[-1], CND) and isinstance(integer_symbols[-2], CNU):
266
+ integer_symbols.append(
267
+ CNU(integer_symbols[-2].power - 1, None, None, None, None))
268
+
269
+ result = []
270
+ unit_count = 0
271
+ for s in integer_symbols:
272
+ if isinstance(s, CND):
273
+ result.append(s)
274
+ unit_count = 0
275
+ elif isinstance(s, CNU):
276
+ current_unit = CNU(s.power, None, None, None, None)
277
+ unit_count += 1
278
+
279
+ if unit_count == 1:
280
+ result.append(current_unit)
281
+ elif unit_count > 1:
282
+ for i in range(len(result)):
283
+ if isinstance(result[-i - 1], CNU) and result[-i - 1].power < current_unit.power:
284
+ result[-i - 1] = CNU(result[-i - 1].power +
285
+ current_unit.power, None, None, None, None)
286
+ return result
287
+
288
+ def compute_value(integer_symbols):
289
+ """
290
+ Compute the value.
291
+ When the current unit is larger than the previous one, all previously accumulated values are multiplied by the current unit.
292
+ e.g. '两千万' = 2000 * 10000 not 2000 + 10000
293
+ """
294
+ value = [0]
295
+ last_power = 0
296
+ for s in integer_symbols:
297
+ if isinstance(s, CND):
298
+ value[-1] = s.value
299
+ elif isinstance(s, CNU):
300
+ value[-1] *= pow(10, s.power)
301
+ if s.power > last_power:
302
+ value[:-1] = list(map(lambda v: v *
303
+ pow(10, s.power), value[:-1]))
304
+ last_power = s.power
305
+ value.append(0)
306
+ return sum(value)
307
+
308
+ system = create_system(numbering_type)
309
+ int_part, dec_part = string2symbols(chinese_string, system)
310
+ int_part = correct_symbols(int_part, system)
311
+ int_str = str(compute_value(int_part))
312
+ dec_str = ''.join([str(d.value) for d in dec_part])
313
+ if dec_part:
314
+ return '{0}.{1}'.format(int_str, dec_str)
315
+ else:
316
+ return int_str
317
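+ # Usage sketch for chn2num (values hand-checked against compute_value above):
+ # >>> chn2num('两千万')    # 2000 * 10000, per the compute_value docstring
+ # '20000000'
+ # >>> chn2num('一百八')    # correct_symbols completes the elided 十
+ # '180'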
+
318
+
319
+ def num2chn(number_string, numbering_type=NUMBERING_TYPES[1], big=False,
320
+ traditional=False, alt_zero=False, alt_one=False, alt_two=True,
321
+ use_zeros=True, use_units=True):
322
+ def get_value(value_string, use_zeros=True):
323
+
324
+ striped_string = value_string.lstrip('0')
325
+
326
+ # record nothing if all zeros
327
+ if not striped_string:
328
+ return []
329
+
330
+ # record one digits
331
+ elif len(striped_string) == 1:
332
+ if use_zeros and len(value_string) != len(striped_string):
333
+ return [system.digits[0], system.digits[int(striped_string)]]
334
+ else:
335
+ return [system.digits[int(striped_string)]]
336
+
337
+ # recursively record multiple digits
338
+ else:
339
+ result_unit = next(u for u in reversed(
340
+ system.units) if u.power < len(striped_string))
341
+ result_string = value_string[:-result_unit.power]
342
+ return get_value(result_string) + [result_unit] + get_value(striped_string[-result_unit.power:])
343
+
344
+ system = create_system(numbering_type)
345
+
346
+ int_dec = number_string.split('.')
347
+ if len(int_dec) == 1:
348
+ int_string = int_dec[0]
349
+ dec_string = ""
350
+ elif len(int_dec) == 2:
351
+ int_string = int_dec[0]
352
+ dec_string = int_dec[1]
353
+ else:
354
+ raise ValueError(
355
+ "invalid input num string with more than one dot: {}".format(number_string))
356
+
357
+ if use_units and len(int_string) > 1:
358
+ result_symbols = get_value(int_string)
359
+ else:
360
+ result_symbols = [system.digits[int(c)] for c in int_string]
361
+ dec_symbols = [system.digits[int(c)] for c in dec_string]
362
+ if dec_string:
363
+ result_symbols += [system.math.point] + dec_symbols
364
+
365
+ if alt_two:
366
+ liang = CND(2, system.digits[2].alt_s, system.digits[2].alt_t,
367
+ system.digits[2].big_s, system.digits[2].big_t)
368
+ for i, v in enumerate(result_symbols):
369
+ if isinstance(v, CND) and v.value == 2:
370
+ next_symbol = result_symbols[i +
371
+ 1] if i < len(result_symbols) - 1 else None
372
+ previous_symbol = result_symbols[i - 1] if i > 0 else None
373
+ if isinstance(next_symbol, CNU) and isinstance(previous_symbol, (CNU, type(None))):
374
+ if next_symbol.power != 1 and ((previous_symbol is None) or (previous_symbol.power != 1)):
375
+ result_symbols[i] = liang
376
+
377
+ # if big is True, '两' will not be used and `alt_two` has no impact on output
378
+ if big:
379
+ attr_name = 'big_'
380
+ if traditional:
381
+ attr_name += 't'
382
+ else:
383
+ attr_name += 's'
384
+ else:
385
+ if traditional:
386
+ attr_name = 'traditional'
387
+ else:
388
+ attr_name = 'simplified'
389
+
390
+ result = ''.join([getattr(s, attr_name) for s in result_symbols])
391
+
392
+ # if not use_zeros:
393
+ # result = result.strip(getattr(system.digits[0], attr_name))
394
+
395
+ if alt_zero:
396
+ result = result.replace(
397
+ getattr(system.digits[0], attr_name), system.digits[0].alt_s)
398
+
399
+ if alt_one:
400
+ result = result.replace(
401
+ getattr(system.digits[1], attr_name), system.digits[1].alt_s)
402
+
403
+ for i, p in enumerate(POINT):
404
+ if result.startswith(p):
405
+ return CHINESE_DIGIS[0] + result
406
+
407
+ # ^10, 11, .., 19
408
+ if len(result) >= 2 and result[1] in [SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED[0],
409
+ SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL[0]] and \
410
+ result[0] in [CHINESE_DIGIS[1], BIG_CHINESE_DIGIS_SIMPLIFIED[1], BIG_CHINESE_DIGIS_TRADITIONAL[1]]:
411
+ result = result[1:]
412
+
413
+ return result
414
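+ # Usage sketch for num2chn (hand-derived, illustrative):
+ # >>> num2chn('20000000')    # alt_two=True rewrites the leading 二 as 两
+ # '两千万'
+ # >>> num2chn('10')          # the leading 一 of 一十 is stripped
+ # '十'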
+
415
+
416
+ # ================================================================================ #
417
+ # different types of rewriters
418
+ # ================================================================================ #
419
+ class Cardinal:
420
+ """
421
+ CARDINAL (cardinal numbers)
422
+ """
423
+
424
+ def __init__(self, cardinal=None, chntext=None):
425
+ self.cardinal = cardinal
426
+ self.chntext = chntext
427
+
428
+ def chntext2cardinal(self):
429
+ return chn2num(self.chntext)
430
+
431
+ def cardinal2chntext(self):
432
+ return num2chn(self.cardinal)
433
+
434
+
435
+ class Digit:
436
+ """
437
+ DIGIT (digit-by-digit numbers)
438
+ """
439
+
440
+ def __init__(self, digit=None, chntext=None):
441
+ self.digit = digit
442
+ self.chntext = chntext
443
+
444
+ # def chntext2digit(self):
445
+ # return chn2num(self.chntext)
446
+
447
+ def digit2chntext(self):
448
+ return num2chn(self.digit, alt_two=False, use_units=False)
449
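+ # e.g. '938' read as a CARDINAL is 九百三十八, while as a DIGIT string it is
+ # read digit by digit as 九三八 (hand-derived illustration).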
+
450
+
451
+ class TelePhone:
452
+ """
453
+ TELEPHONE (phone numbers)
454
+ """
455
+
456
+ def __init__(self, telephone=None, raw_chntext=None, chntext=None):
457
+ self.telephone = telephone
458
+ self.raw_chntext = raw_chntext
459
+ self.chntext = chntext
460
+
461
+ # def chntext2telephone(self):
462
+ # sil_parts = self.raw_chntext.split('<SIL>')
463
+ # self.telephone = '-'.join([
464
+ # str(chn2num(p)) for p in sil_parts
465
+ # ])
466
+ # return self.telephone
467
+
468
+ def telephone2chntext(self, fixed=False):
469
+
470
+ if fixed:
471
+ sil_parts = self.telephone.split('-')
472
+ self.raw_chntext = '<SIL>'.join([
473
+ num2chn(part, alt_two=False, use_units=False) for part in sil_parts
474
+ ])
475
+ self.chntext = self.raw_chntext.replace('<SIL>', '')
476
+ else:
477
+ sp_parts = self.telephone.strip('+').split()
478
+ self.raw_chntext = '<SP>'.join([
479
+ num2chn(part, alt_two=False, use_units=False) for part in sp_parts
480
+ ])
481
+ self.chntext = self.raw_chntext.replace('<SP>', '')
482
+ return self.chntext
483
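+ # e.g. TelePhone(telephone='0595-23865596').telephone2chntext(fixed=True) reads
+ # each dash-separated group digit by digit, giving 零五九五二三八六五五九六
+ # (hand-derived; see nsw_test below for more cases).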
+
484
+
485
+ class Fraction:
486
+ """
487
+ FRACTION (fractions)
488
+ """
489
+
490
+ def __init__(self, fraction=None, chntext=None):
491
+ self.fraction = fraction
492
+ self.chntext = chntext
493
+
494
+ def chntext2fraction(self):
495
+ denominator, numerator = self.chntext.split('分之')
496
+ return chn2num(numerator) + '/' + chn2num(denominator)
497
+
498
+ def fraction2chntext(self):
499
+ numerator, denominator = self.fraction.split('/')
500
+ return num2chn(denominator) + '分之' + num2chn(numerator)
501
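+ # e.g. Fraction(fraction='3/4').fraction2chntext() gives 四分之三, and
+ # Fraction(chntext='四分之三').chntext2fraction() gives '3/4' (illustrative).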
+
502
+
503
+ class Date:
504
+ """
505
+ DATE (dates)
506
+ """
507
+
508
+ def __init__(self, date=None, chntext=None):
509
+ self.date = date
510
+ self.chntext = chntext
511
+
512
+ # def chntext2date(self):
513
+ # chntext = self.chntext
514
+ # try:
515
+ # year, other = chntext.strip().split('年', maxsplit=1)
516
+ # year = Digit(chntext=year).digit2chntext() + '年'
517
+ # except ValueError:
518
+ # other = chntext
519
+ # year = ''
520
+ # if other:
521
+ # try:
522
+ # month, day = other.strip().split('月', maxsplit=1)
523
+ # month = Cardinal(chntext=month).chntext2cardinal() + '月'
524
+ # except ValueError:
525
+ # day = chntext
526
+ # month = ''
527
+ # if day:
528
+ # day = Cardinal(chntext=day[:-1]).chntext2cardinal() + day[-1]
529
+ # else:
530
+ # month = ''
531
+ # day = ''
532
+ # date = year + month + day
533
+ # self.date = date
534
+ # return self.date
535
+
536
+ def date2chntext(self):
537
+ date = self.date
538
+ try:
539
+ year, other = date.strip().split('年', 1)
540
+ year = Digit(digit=year).digit2chntext() + '年'
541
+ except ValueError:
542
+ other = date
543
+ year = ''
544
+ if other:
545
+ try:
546
+ month, day = other.strip().split('月', 1)
547
+ month = Cardinal(cardinal=month).cardinal2chntext() + '月'
548
+ except ValueError:
549
+ day = other  # fall back to the remainder, not the full date string
550
+ month = ''
551
+ if day:
552
+ day = Cardinal(cardinal=day[:-1]).cardinal2chntext() + day[-1]
553
+ else:
554
+ month = ''
555
+ day = ''
556
+ chntext = year + month + day
557
+ self.chntext = chntext
558
+ return self.chntext
559
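+ # e.g. Date(date='1999年2月20日').date2chntext() yields 一九九九年二月二十日:
+ # the year is read digit by digit, month and day as cardinals (hand-derived).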
+
560
+
561
+ class Money:
562
+ """
563
+ MONEY (money amounts)
564
+ """
565
+
566
+ def __init__(self, money=None, chntext=None):
567
+ self.money = money
568
+ self.chntext = chntext
569
+
570
+ # def chntext2money(self):
571
+ # return self.money
572
+
573
+ def money2chntext(self):
574
+ money = self.money
575
+ pattern = re.compile(r'(\d+(\.\d+)?)')
576
+ matchers = pattern.findall(money)
577
+ if matchers:
578
+ for matcher in matchers:
579
+ money = money.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext())
580
+ self.chntext = money
581
+ return self.chntext
582
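+ # e.g. Money(money='12块5').money2chntext() rewrites each numeric span in
+ # place, giving 十二块五; currency words are left untouched (hand-derived).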
+
583
+
584
+ class Percentage:
585
+ """
586
+ PERCENTAGE (percentages)
587
+ """
588
+
589
+ def __init__(self, percentage=None, chntext=None):
590
+ self.percentage = percentage
591
+ self.chntext = chntext
592
+
593
+ def chntext2percentage(self):
594
+ return chn2num(self.chntext.strip().strip('百分之')) + '%'
595
+
596
+ def percentage2chntext(self):
597
+ return '百分之' + num2chn(self.percentage.strip().strip('%'))
598
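+ # e.g. Percentage(percentage='80.03%').percentage2chntext() gives
+ # 百分之八十点零三 (hand-derived from num2chn above).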
+
599
+
600
+ # ================================================================================ #
601
+ # NSW Normalizer
602
+ # ================================================================================ #
603
+ class NSWNormalizer:
604
+ def __init__(self, raw_text):
605
+ self.raw_text = '^' + raw_text + '$'
606
+ self.norm_text = ''
607
+
608
+ def _particular(self):
609
+ text = self.norm_text
610
+ pattern = re.compile(r"(([a-zA-Z]+)二([a-zA-Z]+))")
611
+ matchers = pattern.findall(text)
612
+ if matchers:
613
+ # print('particular')
614
+ for matcher in matchers:
615
+ text = text.replace(matcher[0], matcher[1] + '2' + matcher[2], 1)
616
+ self.norm_text = text
617
+ return self.norm_text
618
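+ # e.g. after digit normalization 'B2C' has become 'B二C'; _particular restores
+ # the ASCII '2' between Latin letters, yielding 'B2C' again.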
+
619
+ def normalize(self, remove_punc=True):
620
+ text = self.raw_text
621
+
622
+ # Normalize dates
623
+ pattern = re.compile(r"\D+((([089]\d|(19|20)\d{2})年)?(\d{1,2}月(\d{1,2}[日号])?)?)")
624
+ matchers = pattern.findall(text)
625
+ if matchers:
626
+ # print('date')
627
+ for matcher in matchers:
628
+ text = text.replace(matcher[0], Date(date=matcher[0]).date2chntext(), 1)
629
+
630
+ # Normalize money expressions
631
+ pattern = re.compile(r"\D+((\d+(\.\d+)?)[多余几]?" + CURRENCY_UNITS + r"(\d" + CURRENCY_UNITS + r"?)?)")
632
+ matchers = pattern.findall(text)
633
+ if matchers:
634
+ # print('money')
635
+ for matcher in matchers:
636
+ text = text.replace(matcher[0], Money(money=matcher[0]).money2chntext(), 1)
637
+
638
+ # Normalize landline/mobile phone numbers
+ # Mobile
+ # http://www.jihaoba.com/news/show/13680
+ # China Mobile: 139, 138, 137, 136, 135, 134, 159, 158, 157, 150, 151, 152, 188, 187, 182, 183, 184, 178, 198
+ # China Unicom: 130, 131, 132, 156, 155, 186, 185, 176
+ # China Telecom: 133, 153, 189, 180, 181, 177
644
+ pattern = re.compile(r"\D((\+?86 ?)?1([38]\d|5[0-35-9]|7[678]|9[89])\d{8})\D")
645
+ matchers = pattern.findall(text)
646
+ if matchers:
647
+ # print('telephone')
648
+ for matcher in matchers:
649
+ text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(), 1)
650
+ # Landline
651
+ pattern = re.compile(r"\D((0(10|2[1-3]|[3-9]\d{2})-?)?[1-9]\d{6,7})\D")
652
+ matchers = pattern.findall(text)
653
+ if matchers:
654
+ # print('fixed telephone')
655
+ for matcher in matchers:
656
+ text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(fixed=True), 1)
657
+
658
+ # Normalize fractions
659
+ pattern = re.compile(r"(\d+/\d+)")
660
+ matchers = pattern.findall(text)
661
+ if matchers:
662
+ # print('fraction')
663
+ for matcher in matchers:
664
+ text = text.replace(matcher, Fraction(fraction=matcher).fraction2chntext(), 1)
665
+
666
+ # Normalize percentages
667
+ text = text.replace('％', '%')  # fullwidth to ASCII percent sign
668
+ pattern = re.compile(r"(\d+(\.\d+)?%)")
669
+ matchers = pattern.findall(text)
670
+ if matchers:
671
+ # print('percentage')
672
+ for matcher in matchers:
673
+ text = text.replace(matcher[0], Percentage(percentage=matcher[0]).percentage2chntext(), 1)
674
+
675
+ # Normalize plain numbers + quantifiers
676
+ pattern = re.compile(r"(\d+(\.\d+)?)[多余几]?" + COM_QUANTIFIERS)
677
+ matchers = pattern.findall(text)
678
+ if matchers:
679
+ # print('cardinal+quantifier')
680
+ for matcher in matchers:
681
+ text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1)
682
+
683
+ # Normalize digit ID sequences
684
+ pattern = re.compile(r"(\d{4,32})")
685
+ matchers = pattern.findall(text)
686
+ if matchers:
687
+ # print('digit')
688
+ for matcher in matchers:
689
+ text = text.replace(matcher, Digit(digit=matcher).digit2chntext(), 1)
690
+
691
+ # Normalize plain numbers
692
+ pattern = re.compile(r"(\d+(\.\d+)?)")
693
+ matchers = pattern.findall(text)
694
+ if matchers:
695
+ # print('cardinal')
696
+ for matcher in matchers:
697
+ text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1)
698
+
699
+ self.norm_text = text
700
+ self._particular()
701
+
702
+ text = self.norm_text.lstrip('^').rstrip('$')
703
+ if remove_punc:
704
+ # Punctuations removal
705
+ old_chars = CHINESE_PUNC_LIST + string.punctuation  # all Chinese and English punctuation
706
+ new_chars = ' ' * len(old_chars)
707
+ del_chars = ''
708
+ text = text.translate(str.maketrans(old_chars, new_chars, del_chars))
709
+ return text
710
+
711
+
712
+ def nsw_test_case(raw_text):
713
+ print('I:' + raw_text)
714
+ print('O:' + NSWNormalizer(raw_text).normalize())
715
+ print('')
716
+
717
+
718
+ def nsw_test():
719
+ nsw_test_case('固话:0595-23865596或23880880。')
721
+ nsw_test_case('手机:+86 19859213959或15659451527。')
722
+ nsw_test_case('分数:32477/76391。')
723
+ nsw_test_case('百分数:80.03%。')
724
+ nsw_test_case('编号:31520181154418。')
725
+ nsw_test_case('纯数:2983.07克或12345.60米。')
726
+ nsw_test_case('日期:1999年2月20日或09年3月15号。')
727
+ nsw_test_case('金钱:12块5,34.5元,20.1万')
728
+ nsw_test_case('特殊:O2O或B2C。')
729
+ nsw_test_case('3456万吨')
730
+ nsw_test_case('2938个')
731
+ nsw_test_case('938')
732
+ nsw_test_case('今天吃了115个小笼包231个馒头')
733
+ nsw_test_case('有62%的概率')
734
+
735
+
736
+ if __name__ == '__main__':
737
+ # nsw_test()
738
+
739
+ p = argparse.ArgumentParser()
740
+ p.add_argument('ifile', help='input filename, assume utf-8 encoding')
741
+ p.add_argument('ofile', help='output filename')
742
+ p.add_argument('--to_upper', action='store_true', help='convert to upper case')
743
+ p.add_argument('--to_lower', action='store_true', help='convert to lower case')
744
+ p.add_argument('--has_key', action='store_true', help="input text has Kaldi's key as first field.")
745
+ p.add_argument('--log_interval', type=int, default=10000, help='log interval in number of processed lines')
746
+ args = p.parse_args()
747
+
748
+ ifile = codecs.open(args.ifile, 'r', 'utf8')
749
+ ofile = codecs.open(args.ofile, 'w+', 'utf8')
750
+
751
+ n = 0
752
+ for l in ifile:
753
+ key = ''
754
+ text = ''
755
+ if args.has_key:
756
+ cols = l.split(maxsplit=1)
757
+ key = cols[0]
758
+ if len(cols) == 2:
759
+ text = cols[1]
760
+ else:
761
+ text = ''
762
+ else:
763
+ text = l
764
+
765
+ # cases
766
+ if args.to_upper and args.to_lower:
767
+ sys.stderr.write('text norm: to_upper OR to_lower?')
768
+ exit(1)
769
+ if args.to_upper:
770
+ text = text.upper()
771
+ if args.to_lower:
772
+ text = text.lower()
773
+
774
+ # NSW(Non-Standard-Word) normalization
775
+ text = NSWNormalizer(text).normalize()
776
+
777
+ # write output
778
+ if args.has_key:
779
+ ofile.write(key + '\t' + text)
780
+ else:
781
+ ofile.write(text)
782
+
783
+ n += 1
784
+ if n % args.log_interval == 0:
785
+ sys.stderr.write("text norm: {} lines done.\n".format(n))
786
+
787
+ sys.stderr.write("text norm: {} lines done in total.\n".format(n))
788
+
789
+ ifile.close()
790
+ ofile.close()
utils/trainer.py ADDED
@@ -0,0 +1,518 @@
+ import random
2
+ from torch.cuda.amp import GradScaler, autocast
3
+ from utils import move_to_cuda
4
+ import subprocess
5
+ import numpy as np
6
+ import torch.optim
7
+ import torch.utils.data
8
+ import copy
9
+ import logging
10
+ import os
11
+ import re
12
+ import sys
13
+ import torch
14
+ import torch.distributed as dist
15
+ import torch.multiprocessing as mp
16
+ import tqdm
17
+
18
+ from utils.ckpt_utils import get_last_checkpoint, get_all_ckpts
19
+ from utils.ddp_utils import DDP
20
+ from utils.hparams import hparams
21
+
22
+
23
+ class Trainer:
24
+ def __init__(
25
+ self,
26
+ work_dir,
27
+ default_save_path=None,
28
+ accumulate_grad_batches=1,
29
+ max_updates=160000,
30
+ print_nan_grads=False,
31
+ val_check_interval=2000,
32
+ num_sanity_val_steps=5,
33
+ amp=False,
34
+ # tb logger
35
+ log_save_interval=100,
36
+ tb_log_interval=10,
37
+ # checkpoint
38
+ monitor_key='val_loss',
39
+ monitor_mode='min',
40
+ num_ckpt_keep=5,
41
+ save_best=True,
42
+ resume_from_checkpoint=0,
43
+ seed=1234,
44
+ debug=False,
45
+ ):
46
+ os.makedirs(work_dir, exist_ok=True)
47
+ self.work_dir = work_dir
48
+ self.accumulate_grad_batches = accumulate_grad_batches
49
+ self.max_updates = max_updates
50
+ self.num_sanity_val_steps = num_sanity_val_steps
51
+ self.print_nan_grads = print_nan_grads
52
+ self.default_save_path = default_save_path
53
+ self.resume_from_checkpoint = resume_from_checkpoint if resume_from_checkpoint > 0 else None
54
+ self.seed = seed
55
+ self.debug = debug
56
+ # model and optm
57
+ self.task = None
58
+ self.optimizers = []
59
+
60
+ # trainer state
61
+ self.testing = False
62
+ self.global_step = 0
63
+ self.current_epoch = 0
64
+ self.total_batches = 0
65
+
66
+ # configure checkpoint
67
+ self.monitor_key = monitor_key
68
+ self.num_ckpt_keep = num_ckpt_keep
69
+ self.save_best = save_best
70
+ self.monitor_op = np.less if monitor_mode == 'min' else np.greater
71
+ self.best_val_results = np.inf if monitor_mode == 'min' else -np.inf
72
+ self.mode = monitor_mode
73
+
74
+ # allow int, string and gpu list
75
+ self.all_gpu_ids = [
76
+ int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != '']
77
+ self.num_gpus = len(self.all_gpu_ids)
78
+ self.on_gpu = self.num_gpus > 0
79
+ self.root_gpu = 0
80
+ logging.info(f'GPU available: {torch.cuda.is_available()}, GPU used: {self.all_gpu_ids}')
81
+ self.use_ddp = self.num_gpus > 1
82
+ self.proc_rank = 0
83
+ # Tensorboard logging
84
+ self.log_save_interval = log_save_interval
85
+ self.val_check_interval = val_check_interval
86
+ self.tb_log_interval = tb_log_interval
87
+ self.amp = amp
88
+ self.amp_scalar = GradScaler()
89
+
90
+ def test(self, task_cls):
91
+ self.testing = True
92
+ self.fit(task_cls)
93
+
94
+ def fit(self, task_cls):
95
+ if len(self.all_gpu_ids) > 1:
96
+ mp.spawn(self.ddp_run, nprocs=self.num_gpus, args=(task_cls, copy.deepcopy(hparams)))
97
+ else:
98
+ self.task = task_cls()
99
+ self.task.trainer = self
100
+ self.run_single_process(self.task)
101
+ return 1
102
+
103
+ def ddp_run(self, gpu_idx, task_cls, hparams_):
104
+ hparams.update(hparams_)
105
+ task = task_cls()
106
+ self.ddp_init(gpu_idx, task)
107
+ self.run_single_process(task)
108
+
109
+ def run_single_process(self, task):
110
+ """Sanity check a few things before starting actual training.
111
+
112
+ :param task:
113
+ """
114
+ # build model, optm and load checkpoint
115
+ model = task.build_model()
116
+ if model is not None:
117
+ task.model = model
118
+ checkpoint, _ = get_last_checkpoint(self.work_dir, self.resume_from_checkpoint)
119
+ if checkpoint is not None:
120
+ self.restore_weights(checkpoint)
121
+ elif self.on_gpu:
122
+ task.cuda(self.root_gpu)
123
+ if not self.testing:
124
+ self.optimizers = task.configure_optimizers()
125
+ self.first_epoch = True
126
+ if checkpoint is not None:
127
+ self.restore_opt_state(checkpoint)
128
+ del checkpoint
129
+ # clear cache after restore
130
+ if self.on_gpu:
131
+ torch.cuda.empty_cache()
132
+
133
+ if self.use_ddp:
134
+ self.task = self.configure_ddp(self.task)
135
+ dist.barrier()
136
+
137
+ task_ref = self.get_task_ref()
138
+ task_ref.trainer = self
139
+ task_ref.testing = self.testing
140
+ # link up experiment object
141
+ if self.proc_rank == 0:
142
+ task_ref.build_tensorboard(save_dir=self.work_dir, name='lightning_logs', version='latest')
143
+ else:
144
+ os.makedirs('tmp', exist_ok=True)
145
+ task_ref.build_tensorboard(save_dir='tmp', name='tb_tmp', version='latest')
146
+ self.logger = task_ref.logger
147
+ try:
148
+ if self.testing:
149
+ self.run_evaluation(test=True)
150
+ else:
151
+ self.train()
152
+ except KeyboardInterrupt as e:
153
+ task_ref.on_keyboard_interrupt()
154
+
155
+ ####################
156
+ # valid and test
157
+ ####################
158
+ def run_evaluation(self, test=False):
159
+ eval_results = self.evaluate(self.task, test, tqdm_desc='Valid' if not test else 'test')
160
+ if eval_results is not None and 'tb_log' in eval_results:
161
+ tb_log_output = eval_results['tb_log']
162
+ self.log_metrics_to_tb(tb_log_output)
163
+ if self.proc_rank == 0 and not test:
164
+ self.save_checkpoint(epoch=self.current_epoch, logs=eval_results)
165
+
166
+ def evaluate(self, task, test=False, tqdm_desc='Valid', max_batches=None):
167
+ # enable eval mode
168
+ task.zero_grad()
169
+ task.eval()
170
+ torch.set_grad_enabled(False)
171
+
172
+ task_ref = self.get_task_ref()
173
+ if test:
174
+ ret = task_ref.test_start()
175
+ if ret == 'EXIT':
176
+ return
177
+
178
+ outputs = []
179
+ dataloader = task_ref.test_dataloader() if test else task_ref.val_dataloader()
180
+ pbar = tqdm.tqdm(dataloader, desc=tqdm_desc, total=max_batches, dynamic_ncols=True, unit='step',
181
+ disable=self.root_gpu > 0)
182
+ for batch_idx, batch in enumerate(pbar):
183
+ if batch is None: # pragma: no cover
184
+ continue
185
+ # stop short when on fast_dev_run (sets max_batch=1)
186
+ if max_batches is not None and batch_idx >= max_batches:
187
+ break
188
+
189
+ # make dataloader_idx arg in validation_step optional
190
+ if self.on_gpu:
191
+ batch = move_to_cuda(batch, self.root_gpu)
192
+ args = [batch, batch_idx]
193
+ if self.use_ddp:
194
+ output = task(*args)
195
+ else:
196
+ if test:
197
+ output = task_ref.test_step(*args)
198
+ else:
199
+ output = task_ref.validation_step(*args)
200
+ # track outputs for collation
201
+ outputs.append(output)
202
+ # give model a chance to do something with the outputs (and method defined)
203
+ if test:
204
+ eval_results = task_ref.test_end(outputs)
205
+ else:
206
+ eval_results = task_ref.validation_end(outputs)
207
+ # enable train mode again
208
+ task.train()
209
+ torch.set_grad_enabled(True)
210
+ return eval_results
211
+
212
+ ####################
213
+ # train
214
+ ####################
215
+ def train(self):
216
+ task_ref = self.get_task_ref()
217
+ task_ref.on_train_start()
218
+ if self.num_sanity_val_steps > 0:
219
+ # run tiny validation (if validation defined) to make sure program won't crash during val
220
+ self.evaluate(self.task, False, 'Sanity Val', max_batches=self.num_sanity_val_steps)
221
+ # clear cache before training
222
+ if self.on_gpu:
223
+ torch.cuda.empty_cache()
224
+ dataloader = task_ref.train_dataloader()
225
+ epoch = self.current_epoch
226
+ # run all epochs
227
+ while True:
228
+ # set seed for distributed sampler (enables shuffling for each epoch)
229
+ if self.use_ddp and hasattr(dataloader.sampler, 'set_epoch'):
230
+ dataloader.sampler.set_epoch(epoch)
231
+ # update training progress in trainer and model
232
+ task_ref.current_epoch = epoch
233
+ self.current_epoch = epoch
234
+ # total batches includes multiple val checks
235
+ self.batch_loss_value = 0 # accumulated grads
236
+ # before epoch hook
237
+ task_ref.on_epoch_start()
238
+
239
+ # run epoch
240
+ train_pbar = tqdm.tqdm(dataloader, initial=self.global_step, total=float('inf'),
241
+ dynamic_ncols=True, unit='step', disable=self.root_gpu > 0)
242
+ for batch_idx, batch in enumerate(train_pbar):
243
+ pbar_metrics, tb_metrics = self.run_training_batch(batch_idx, batch)
244
+ train_pbar.set_postfix(**pbar_metrics)
245
+ should_check_val = (self.global_step % self.val_check_interval == 0
246
+ and not self.first_epoch)
247
+ if should_check_val:
248
+ self.run_evaluation()
249
+ self.first_epoch = False
250
+ # when metrics should be logged
251
+ if (self.global_step + 1) % self.tb_log_interval == 0:
252
+ # logs user requested information to logger
253
+ self.log_metrics_to_tb(tb_metrics)
254
+
255
+ self.global_step += 1
256
+ task_ref.global_step = self.global_step
257
+ if self.global_step > self.max_updates:
258
+ print("| Training end..")
259
+ break
260
+ # epoch end hook
261
+ task_ref.on_epoch_end()
262
+ epoch += 1
263
+ if self.global_step > self.max_updates:
264
+ break
265
+ task_ref.on_train_end()
266
+
267
+ def run_training_batch(self, batch_idx, batch):
268
+ if batch is None:
269
+ return {}, {}  # keep the caller's two-value unpacking valid
270
+ all_progress_bar_metrics = []
271
+ all_log_metrics = []
272
+ task_ref = self.get_task_ref()
273
+ for opt_idx, optimizer in enumerate(self.optimizers):
274
+ if optimizer is None:
275
+ continue
276
+ # make sure only the gradients of the current optimizer's parameters are calculated
277
+ # in the training step to prevent dangling gradients in multiple-optimizer setup.
278
+ if len(self.optimizers) > 1:
279
+ for param in task_ref.parameters():
280
+ param.requires_grad = False
281
+ for group in optimizer.param_groups:
282
+ for param in group['params']:
283
+ param.requires_grad = True
284
+
285
+ # forward pass
286
+ with autocast(enabled=self.amp):
287
+ if self.on_gpu:
288
+ batch = move_to_cuda(copy.copy(batch), self.root_gpu)
289
+ args = [batch, batch_idx, opt_idx]
290
+ if self.use_ddp:
291
+ output = self.task(*args)
292
+ else:
293
+ output = task_ref.training_step(*args)
294
+ loss = output['loss']
295
+ if loss is None:
296
+ continue
297
+ progress_bar_metrics = output['progress_bar']
298
+ log_metrics = output['tb_log']
299
+ # accumulate loss
300
+ loss = loss / self.accumulate_grad_batches
301
+
302
+ # backward pass
303
+ if loss.requires_grad:
304
+ if self.amp:
305
+ self.amp_scalar.scale(loss).backward()
306
+ else:
307
+ loss.backward()
308
+
309
+ # track progress bar metrics
310
+ all_log_metrics.append(log_metrics)
311
+ all_progress_bar_metrics.append(progress_bar_metrics)
312
+
313
+ if loss is None:
314
+ continue
315
+
316
+ # nan grads
317
+ if self.print_nan_grads:
318
+ has_nan_grad = False
319
+ for name, param in task_ref.named_parameters():
320
+ if (param.grad is not None) and torch.isnan(param.grad.float()).any():
321
+ print("| NaN params: ", name, param, param.grad)
322
+ has_nan_grad = True
323
+ if has_nan_grad:
324
+ exit(0)
325
+
326
+ # gradient update with accumulated gradients
327
+ if (self.global_step + 1) % self.accumulate_grad_batches == 0:
328
+ task_ref.on_before_optimization(opt_idx)
329
+ if self.amp:
330
+ self.amp_scalar.step(optimizer)
331
+ self.amp_scalar.update()
332
+ else:
333
+ optimizer.step()
334
+ optimizer.zero_grad()
335
+ task_ref.on_after_optimization(self.current_epoch, batch_idx, optimizer, opt_idx)
336
+
337
+ # collapse all metrics into one dict
338
+ all_progress_bar_metrics = {k: v for d in all_progress_bar_metrics for k, v in d.items()}
339
+ all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}
340
+ return all_progress_bar_metrics, all_log_metrics
341
+
342
+ ####################
343
+ # load and save checkpoint
344
+ ####################
345
+ def restore_weights(self, checkpoint):
346
+ # load model state
347
+ task_ref = self.get_task_ref()
348
+
349
+ if len([k for k in checkpoint['state_dict'].keys() if '.' in k]) > 0:
350
+ task_ref.load_state_dict(checkpoint['state_dict'])
351
+ else:
352
+ for k, v in checkpoint['state_dict'].items():
353
+ getattr(task_ref, k).load_state_dict(v)
354
+
355
+ if self.on_gpu:
356
+ task_ref.cuda(self.root_gpu)
357
+ # load training state (affects trainer only)
358
+ self.best_val_results = checkpoint['checkpoint_callback_best']
359
+ self.global_step = checkpoint['global_step']
360
+ self.current_epoch = checkpoint['epoch']
361
+ task_ref.global_step = self.global_step
362
+
363
+ # wait for all models to restore weights
364
+ if self.use_ddp:
365
+ # wait for all processes to catch up
366
+ dist.barrier()
367
+
368
+ def restore_opt_state(self, checkpoint):
369
+ if self.testing:
370
+ return
371
+ # restore the optimizers
372
+ optimizer_states = checkpoint['optimizer_states']
373
+ for optimizer, opt_state in zip(self.optimizers, optimizer_states):
374
+ if optimizer is None:
375
+ return
376
+ try:
377
+ optimizer.load_state_dict(opt_state)
378
+ # move optimizer state to GPU one tensor at a time
379
+ if self.on_gpu:
380
+ for state in optimizer.state.values():
381
+ for k, v in state.items():
382
+ if isinstance(v, torch.Tensor):
383
+ state[k] = v.cuda(self.root_gpu)
384
+ except ValueError:
385
+ print("| WARMING: optimizer parameters not match !!!")
386
+ try:
387
+ if dist.is_initialized() and dist.get_rank() > 0:
388
+ return
389
+ except Exception as e:
390
+ print(e)
391
+ return
392
+ did_restore = True
393
+ return did_restore
394
+
395
+ def save_checkpoint(self, epoch, logs=None):
396
+ monitor_op = self.monitor_op  # respect monitor_mode rather than hardcoding np.less
397
+ ckpt_path = f'{self.work_dir}/model_ckpt_steps_{self.global_step}.ckpt'
398
+ logging.info(f'Epoch {epoch:05d}@{self.global_step}: saving model to {ckpt_path}')
399
+ self._atomic_save(ckpt_path)
400
+ for old_ckpt in get_all_ckpts(self.work_dir)[self.num_ckpt_keep:]:
401
+ subprocess.check_call(f'rm -rf "{old_ckpt}"', shell=True)
402
+ logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}')
403
+ current = None
404
+ if logs is not None and self.monitor_key in logs:
405
+ current = logs[self.monitor_key]
406
+ if current is not None and self.save_best:
407
+ if monitor_op(current, self.best_val_results):
408
+ best_filepath = f'{self.work_dir}/model_ckpt_best.pt'
409
+ self.best_val_results = current
410
+ logging.info(
411
+ f'Epoch {epoch:05d}@{self.global_step}: {self.monitor_key} reached {current:0.5f}. '
412
+ f'Saving model to {best_filepath}')
413
+ self._atomic_save(best_filepath)
414
+
415
+ def _atomic_save(self, filepath):
416
+ checkpoint = self.dump_checkpoint()
417
+ tmp_path = str(filepath) + ".part"
418
+ torch.save(checkpoint, tmp_path, _use_new_zipfile_serialization=False)
419
+ os.replace(tmp_path, filepath)
420
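+ # Writing to a '.part' file and os.replace()-ing it makes the save atomic:
+ # a crash mid-write never leaves a truncated checkpoint at filepath.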
+
421
+ def dump_checkpoint(self):
422
+ checkpoint = {'epoch': self.current_epoch, 'global_step': self.global_step,
423
+ 'checkpoint_callback_best': self.best_val_results}
424
+ # save optimizers
425
+ optimizer_states = []
426
+ for optimizer in self.optimizers:
427
+ if optimizer is not None:
428
+ optimizer_states.append(optimizer.state_dict())
429
+
430
+ checkpoint['optimizer_states'] = optimizer_states
431
+ task_ref = self.get_task_ref()
432
+ checkpoint['state_dict'] = {
433
+ k: v.state_dict() for k, v in task_ref.named_children() if len(list(v.parameters())) > 0}
434
+ return checkpoint
435
+
436
+ ####################
437
+ # DDP
438
+ ####################
439
+ def ddp_init(self, gpu_idx, task):
440
+ # determine which process we are and world size
441
+ self.proc_rank = gpu_idx
442
+ task.trainer = self
443
+ self.init_ddp_connection(self.proc_rank, self.num_gpus)
444
+
445
+ # copy model to each gpu
446
+ torch.cuda.set_device(gpu_idx)
447
+ # override root GPU
448
+ self.root_gpu = gpu_idx
449
+ self.task = task
450
+
451
+ def configure_ddp(self, task):
452
+ task = DDP(task, device_ids=[self.root_gpu], find_unused_parameters=True)
453
+ if dist.get_rank() != 0 and not self.debug:
454
+ sys.stdout = open(os.devnull, "w")
455
+ sys.stderr = open(os.devnull, "w")
456
+ random.seed(self.seed)
457
+ np.random.seed(self.seed)
458
+ return task
459
+
460
+ def init_ddp_connection(self, proc_rank, world_size):
461
+ root_node = '127.0.0.1'
462
+ root_node = self.resolve_root_node_address(root_node)
463
+ os.environ['MASTER_ADDR'] = root_node
464
+ dist.init_process_group('nccl', rank=proc_rank, world_size=world_size)
465
+
466
+ def resolve_root_node_address(self, root_node):
467
+ if '[' in root_node:
468
+ name = root_node.split('[')[0]
469
+ number = root_node.split(',')[0]
470
+ if '-' in number:
471
+ number = number.split('-')[0]
472
+ number = re.sub('[^0-9]', '', number)
473
+ root_node = name + number
474
+ return root_node
475
+
476
+ ####################
477
+ # utils
478
+ ####################
479
+ def get_task_ref(self):
480
+ from tasks.base_task import BaseTask
481
+ task: BaseTask = self.task.module if isinstance(self.task, DDP) else self.task
482
+ return task
483
+
484
+ def log_metrics_to_tb(self, metrics, step=None):
485
+ """Logs the metric dict passed in.
486
+
487
+ :param metrics:
488
+ """
489
+ # added metrics by Lightning for convenience
490
+ metrics['epoch'] = self.current_epoch
491
+
492
+ # turn all tensors to scalars
493
+ scalar_metrics = self.metrics_to_scalars(metrics)
494
+
495
+ step = step if step is not None else self.global_step
496
+ # log actual metrics
497
+ if self.proc_rank == 0:
498
+ self.log_metrics(self.logger, scalar_metrics, step=step)
499
+
500
+ @staticmethod
501
+ def log_metrics(logger, metrics, step=None):
502
+ for k, v in metrics.items():
503
+ if isinstance(v, torch.Tensor):
504
+ v = v.item()
505
+ logger.add_scalar(k, v, step)
506
+
507
+ def metrics_to_scalars(self, metrics):
508
+ new_metrics = {}
509
+ for k, v in metrics.items():
510
+ if isinstance(v, torch.Tensor):
511
+ v = v.item()
512
+
513
+ if type(v) is dict:
514
+ v = self.metrics_to_scalars(v)
515
+
516
+ new_metrics[k] = v
517
+
518
+ return new_metrics
utils/training_utils.py ADDED
@@ -0,0 +1,27 @@
+ from utils.hparams import hparams
2
+
3
+
4
+ class RSQRTSchedule(object):
5
+ def __init__(self, optimizer):
6
+ super().__init__()
7
+ self.optimizer = optimizer
8
+ self.constant_lr = hparams['lr']
9
+ self.warmup_updates = hparams['warmup_updates']
10
+ self.hidden_size = hparams['hidden_size']
11
+ self.lr = hparams['lr']
12
+ for param_group in optimizer.param_groups:
13
+ param_group['lr'] = self.lr
14
+ self.step(0)
15
+
16
+ def step(self, num_updates):
17
+ constant_lr = self.constant_lr
18
+ warmup = min(num_updates / self.warmup_updates, 1.0)
19
+ rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5
20
+ rsqrt_hidden = self.hidden_size ** -0.5
21
+ self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7)
22
+ for param_group in self.optimizer.param_groups:
23
+ param_group['lr'] = self.lr
24
+ return self.lr
25
+
26
+ def get_lr(self):
27
+ return self.optimizer.param_groups[0]['lr']
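+ # Illustrative numbers (the hparams values here are assumptions): with lr=2.0,
+ # warmup_updates=4000 and hidden_size=256, the rate ramps linearly to a peak of
+ # 2.0 * 4000**-0.5 * 256**-0.5 ≈ 1.98e-3 at step 4000, then decays as
+ # num_updates**-0.5, never falling below the 1e-7 floor.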
utils/tts_utils.py ADDED
@@ -0,0 +1,371 @@
+ from collections import defaultdict
2
+ import torch
3
+ import torch.nn.functional as F
4
+
5
+
6
+ def make_positions(tensor, padding_idx):
7
+ """Replace non-padding symbols with their position numbers.
8
+
9
+ Position numbers begin at padding_idx+1. Padding symbols are ignored.
10
+ """
11
+ # The series of casts and type-conversions here are carefully
12
+ # balanced to both work with ONNX export and XLA. In particular XLA
13
+ # prefers ints, cumsum defaults to output longs, and ONNX doesn't know
14
+ # how to handle the dtype kwarg in cumsum.
15
+ mask = tensor.ne(padding_idx).int()
16
+ return (
17
+ torch.cumsum(mask, dim=1).type_as(mask) * mask
18
+ ).long() + padding_idx
19
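+ # e.g. make_positions(torch.tensor([[7, 7, 0]]), padding_idx=0)
+ # -> tensor([[1, 2, 0]]): positions start at padding_idx + 1, pads stay put.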
+
20
+
21
+ def softmax(x, dim):
22
+ return F.softmax(x, dim=dim, dtype=torch.float32)
23
+
24
+
25
+ def sequence_mask(lengths, maxlen, dtype=torch.bool):
26
+ if maxlen is None:
27
+ maxlen = lengths.max()
28
+ mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t()
29
+ return mask.type(dtype)  # .type() is not in-place; return its result
31
+
32
+
33
+ INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0)
34
+
35
+
36
+ def _get_full_incremental_state_key(module_instance, key):
37
+ module_name = module_instance.__class__.__name__
38
+
39
+ # assign a unique ID to each module instance, so that incremental state is
40
+ # not shared across module instances
41
+ if not hasattr(module_instance, '_instance_id'):
42
+ INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1
43
+ module_instance._instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name]
44
+
45
+ return '{}.{}.{}'.format(module_name, module_instance._instance_id, key)
46
+
47
+
48
+ def get_incremental_state(module, incremental_state, key):
49
+ """Helper for getting incremental state for an nn.Module."""
50
+ full_key = _get_full_incremental_state_key(module, key)
51
+ if incremental_state is None or full_key not in incremental_state:
52
+ return None
53
+ return incremental_state[full_key]
54
+
55
+
56
+ def set_incremental_state(module, incremental_state, key, value):
57
+ """Helper for setting incremental state for an nn.Module."""
58
+ if incremental_state is not None:
59
+ full_key = _get_full_incremental_state_key(module, key)
60
+ incremental_state[full_key] = value
61
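+ # Sketch: a decoder module can cache per-instance state across steps, e.g.
+ # set_incremental_state(self, state, 'prev_key', k) at step t and
+ # get_incremental_state(self, state, 'prev_key') at step t + 1
+ # ('prev_key' is an illustrative key, not one used elsewhere in this file).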
+
62
+
63
+ def fill_with_neg_inf(t):
64
+ """FP16-compatible function that fills a tensor with -inf."""
65
+ return t.float().fill_(float('-inf')).type_as(t)
66
+
67
+
68
+ def fill_with_neg_inf2(t):
69
+ """FP16-compatible function that fills a tensor with -inf."""
70
+ return t.float().fill_(-1e8).type_as(t)
71
+
72
+
73
+ def get_focus_rate(attn, src_padding_mask=None, tgt_padding_mask=None):
74
+ '''
75
+ attn: bs x L_t x L_s
76
+ '''
77
+ if src_padding_mask is not None:
78
+ attn = attn * (1 - src_padding_mask.float())[:, None, :]
79
+
80
+ if tgt_padding_mask is not None:
81
+ attn = attn * (1 - tgt_padding_mask.float())[:, :, None]
82
+
83
+ focus_rate = attn.max(-1).values.sum(-1)
84
+ focus_rate = focus_rate / attn.sum(-1).sum(-1)
85
+ return focus_rate
86
+
87
+
88
+ def get_phone_coverage_rate(attn, src_padding_mask=None, src_seg_mask=None, tgt_padding_mask=None):
89
+ '''
90
+ attn: bs x L_t x L_s
91
+ '''
92
+ src_mask = attn.new(attn.size(0), attn.size(-1)).bool().fill_(False)
93
+ if src_padding_mask is not None:
94
+ src_mask |= src_padding_mask
95
+ if src_seg_mask is not None:
96
+ src_mask |= src_seg_mask
97
+
98
+ attn = attn * (1 - src_mask.float())[:, None, :]
99
+ if tgt_padding_mask is not None:
100
+ attn = attn * (1 - tgt_padding_mask.float())[:, :, None]
101
+
102
+ phone_coverage_rate = attn.max(1).values.sum(-1)
103
+ # phone_coverage_rate = phone_coverage_rate / attn.sum(-1).sum(-1)
104
+ phone_coverage_rate = phone_coverage_rate / (1 - src_mask.float()).sum(-1)
105
+ return phone_coverage_rate
106
+
107
+
108
+ def get_diagonal_focus_rate(attn, attn_ks, target_len, src_padding_mask=None, tgt_padding_mask=None,
109
+ band_mask_factor=5, band_width=50):
110
+ '''
111
+ attn: bs x L_t x L_s
+ attn_ks: tensor of shape [batch_size], equal to input_lens / output_lens
113
+
114
+ diagonal: y=k*x (k=attn_ks, x:output, y:input)
115
+ 1 0 0
116
+ 0 1 0
117
+ 0 0 1
118
+ y>=k*(x-width) and y<=k*(x+width):1
119
+ else:0
120
+ '''
121
+ # width = min(target_len/band_mask_factor, 50)
122
+ width1 = target_len / band_mask_factor
123
+ width2 = target_len.new(target_len.size()).fill_(band_width)
124
+ width = torch.where(width1 < width2, width1, width2).float()
125
+ base = torch.ones(attn.size()).to(attn.device)
126
+ zero = torch.zeros(attn.size()).to(attn.device)
127
+ x = torch.arange(0, attn.size(1)).to(attn.device)[None, :, None].float() * base
128
+ y = torch.arange(0, attn.size(2)).to(attn.device)[None, None, :].float() * base
129
+ cond = (y - attn_ks[:, None, None] * x)
130
+ cond1 = cond + attn_ks[:, None, None] * width[:, None, None]
131
+ cond2 = cond - attn_ks[:, None, None] * width[:, None, None]
132
+ mask1 = torch.where(cond1 < 0, zero, base)
133
+ mask2 = torch.where(cond2 > 0, zero, base)
134
+ mask = mask1 * mask2
135
+
136
+ if src_padding_mask is not None:
137
+ attn = attn * (1 - src_padding_mask.float())[:, None, :]
138
+ if tgt_padding_mask is not None:
139
+ attn = attn * (1 - tgt_padding_mask.float())[:, :, None]
140
+
141
+ diagonal_attn = attn * mask
142
+ diagonal_focus_rate = diagonal_attn.sum(-1).sum(-1) / attn.sum(-1).sum(-1)
143
+ return diagonal_focus_rate, mask
144
+
145
+
146
+ def select_attn(attn_logits, type='best'):
147
+ """
148
+
149
+ :param attn_logits: [n_layers, B, n_head, T_sp, T_txt]
150
+ :return:
151
+ """
152
+ encdec_attn = torch.stack(attn_logits, 0).transpose(1, 2)
153
+ # [n_layers * n_head, B, T_sp, T_txt]
154
+ encdec_attn = (encdec_attn.reshape([-1, *encdec_attn.shape[2:]])).softmax(-1)
155
+ if type == 'best':
156
+ indices = encdec_attn.max(-1).values.sum(-1).argmax(0)
157
+ encdec_attn = encdec_attn.gather(
158
+ 0, indices[None, :, None, None].repeat(1, 1, encdec_attn.size(-2), encdec_attn.size(-1)))[0]
159
+ return encdec_attn
160
+ elif type == 'mean':
161
+ return encdec_attn.mean(0)
162
+
163
+
164
+ def make_pad_mask(lengths, xs=None, length_dim=-1):
165
+ """Make mask tensor containing indices of padded part.
166
+ Args:
167
+ lengths (LongTensor or List): Batch of lengths (B,).
168
+ xs (Tensor, optional): The reference tensor.
169
+ If set, masks will be the same shape as this tensor.
170
+ length_dim (int, optional): Dimension indicator of the above tensor.
171
+ See the example.
172
+ Returns:
173
+ Tensor: Mask tensor containing indices of padded part.
174
+ dtype=torch.uint8 in PyTorch 1.2-
175
+ dtype=torch.bool in PyTorch 1.2+ (including 1.2)
176
+ Examples:
177
+ With only lengths.
178
+ >>> lengths = [5, 3, 2]
179
+ >>> make_non_pad_mask(lengths)
180
+ masks = [[0, 0, 0, 0 ,0],
181
+ [0, 0, 0, 1, 1],
182
+ [0, 0, 1, 1, 1]]
183
+ With the reference tensor.
184
+ >>> xs = torch.zeros((3, 2, 4))
185
+ >>> make_pad_mask(lengths, xs)
186
+ tensor([[[0, 0, 0, 0],
187
+ [0, 0, 0, 0]],
188
+ [[0, 0, 0, 1],
189
+ [0, 0, 0, 1]],
190
+ [[0, 0, 1, 1],
191
+ [0, 0, 1, 1]]], dtype=torch.uint8)
192
+ >>> xs = torch.zeros((3, 2, 6))
193
+ >>> make_pad_mask(lengths, xs)
194
+ tensor([[[0, 0, 0, 0, 0, 1],
195
+ [0, 0, 0, 0, 0, 1]],
196
+ [[0, 0, 0, 1, 1, 1],
197
+ [0, 0, 0, 1, 1, 1]],
198
+ [[0, 0, 1, 1, 1, 1],
199
+ [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
200
+ With the reference tensor and dimension indicator.
201
+ >>> xs = torch.zeros((3, 6, 6))
202
+ >>> make_pad_mask(lengths, xs, 1)
203
+ tensor([[[0, 0, 0, 0, 0, 0],
204
+ [0, 0, 0, 0, 0, 0],
205
+ [0, 0, 0, 0, 0, 0],
206
+ [0, 0, 0, 0, 0, 0],
207
+ [0, 0, 0, 0, 0, 0],
208
+ [1, 1, 1, 1, 1, 1]],
209
+ [[0, 0, 0, 0, 0, 0],
210
+ [0, 0, 0, 0, 0, 0],
211
+ [0, 0, 0, 0, 0, 0],
212
+ [1, 1, 1, 1, 1, 1],
213
+ [1, 1, 1, 1, 1, 1],
214
+ [1, 1, 1, 1, 1, 1]],
215
+ [[0, 0, 0, 0, 0, 0],
216
+ [0, 0, 0, 0, 0, 0],
217
+ [1, 1, 1, 1, 1, 1],
218
+ [1, 1, 1, 1, 1, 1],
219
+ [1, 1, 1, 1, 1, 1],
220
+ [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8)
221
+ >>> make_pad_mask(lengths, xs, 2)
222
+ tensor([[[0, 0, 0, 0, 0, 1],
223
+ [0, 0, 0, 0, 0, 1],
224
+ [0, 0, 0, 0, 0, 1],
225
+ [0, 0, 0, 0, 0, 1],
226
+ [0, 0, 0, 0, 0, 1],
227
+ [0, 0, 0, 0, 0, 1]],
228
+ [[0, 0, 0, 1, 1, 1],
229
+ [0, 0, 0, 1, 1, 1],
230
+ [0, 0, 0, 1, 1, 1],
231
+ [0, 0, 0, 1, 1, 1],
232
+ [0, 0, 0, 1, 1, 1],
233
+ [0, 0, 0, 1, 1, 1]],
234
+ [[0, 0, 1, 1, 1, 1],
235
+ [0, 0, 1, 1, 1, 1],
236
+ [0, 0, 1, 1, 1, 1],
237
+ [0, 0, 1, 1, 1, 1],
238
+ [0, 0, 1, 1, 1, 1],
239
+ [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8)
240
+ """
241
+ if length_dim == 0:
242
+ raise ValueError("length_dim cannot be 0: {}".format(length_dim))
243
+
244
+ if not isinstance(lengths, list):
245
+ lengths = lengths.tolist()
246
+ bs = int(len(lengths))
247
+ if xs is None:
248
+ maxlen = int(max(lengths))
249
+ else:
250
+ maxlen = xs.size(length_dim)
251
+
252
+ seq_range = torch.arange(0, maxlen, dtype=torch.int64)
253
+ seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen)
254
+ seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1)
255
+ mask = seq_range_expand >= seq_length_expand
256
+
257
+ if xs is not None:
258
+ assert xs.size(0) == bs, (xs.size(0), bs)
259
+
260
+ if length_dim < 0:
261
+ length_dim = xs.dim() + length_dim
262
+ # ind = (:, None, ..., None, :, , None, ..., None)
263
+ ind = tuple(
264
+ slice(None) if i in (0, length_dim) else None for i in range(xs.dim())
265
+ )
266
+ mask = mask[ind].expand_as(xs).to(xs.device)
267
+ return mask
268
+
269
+
270
+ def make_non_pad_mask(lengths, xs=None, length_dim=-1):
271
+ """Make mask tensor containing indices of non-padded part.
272
+ Args:
273
+ lengths (LongTensor or List): Batch of lengths (B,).
274
+ xs (Tensor, optional): The reference tensor.
275
+ If set, masks will be the same shape as this tensor.
276
+ length_dim (int, optional): Dimension indicator of the above tensor.
277
+ See the example.
278
+ Returns:
279
+ ByteTensor: mask tensor containing indices of padded part.
280
+ dtype=torch.uint8 in PyTorch 1.2-
281
+ dtype=torch.bool in PyTorch 1.2+ (including 1.2)
282
+ Examples:
283
+ With only lengths.
284
+ >>> lengths = [5, 3, 2]
285
+ >>> make_non_pad_mask(lengths)
286
+ masks = [[1, 1, 1, 1 ,1],
287
+ [1, 1, 1, 0, 0],
288
+ [1, 1, 0, 0, 0]]
289
+ With the reference tensor.
290
+ >>> xs = torch.zeros((3, 2, 4))
291
+ >>> make_non_pad_mask(lengths, xs)
292
+ tensor([[[1, 1, 1, 1],
293
+ [1, 1, 1, 1]],
294
+ [[1, 1, 1, 0],
295
+ [1, 1, 1, 0]],
296
+ [[1, 1, 0, 0],
297
+ [1, 1, 0, 0]]], dtype=torch.uint8)
298
+ >>> xs = torch.zeros((3, 2, 6))
299
+ >>> make_non_pad_mask(lengths, xs)
300
+ tensor([[[1, 1, 1, 1, 1, 0],
301
+ [1, 1, 1, 1, 1, 0]],
302
+ [[1, 1, 1, 0, 0, 0],
303
+ [1, 1, 1, 0, 0, 0]],
304
+ [[1, 1, 0, 0, 0, 0],
305
+ [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
306
+ With the reference tensor and dimension indicator.
307
+ >>> xs = torch.zeros((3, 6, 6))
308
+ >>> make_non_pad_mask(lengths, xs, 1)
309
+ tensor([[[1, 1, 1, 1, 1, 1],
310
+ [1, 1, 1, 1, 1, 1],
311
+ [1, 1, 1, 1, 1, 1],
312
+ [1, 1, 1, 1, 1, 1],
313
+ [1, 1, 1, 1, 1, 1],
314
+ [0, 0, 0, 0, 0, 0]],
315
+ [[1, 1, 1, 1, 1, 1],
316
+ [1, 1, 1, 1, 1, 1],
317
+ [1, 1, 1, 1, 1, 1],
318
+ [0, 0, 0, 0, 0, 0],
319
+ [0, 0, 0, 0, 0, 0],
320
+ [0, 0, 0, 0, 0, 0]],
321
+ [[1, 1, 1, 1, 1, 1],
322
+ [1, 1, 1, 1, 1, 1],
323
+ [0, 0, 0, 0, 0, 0],
324
+ [0, 0, 0, 0, 0, 0],
325
+ [0, 0, 0, 0, 0, 0],
326
+ [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8)
327
+ >>> make_non_pad_mask(lengths, xs, 2)
328
+ tensor([[[1, 1, 1, 1, 1, 0],
329
+ [1, 1, 1, 1, 1, 0],
330
+ [1, 1, 1, 1, 1, 0],
331
+ [1, 1, 1, 1, 1, 0],
332
+ [1, 1, 1, 1, 1, 0],
333
+ [1, 1, 1, 1, 1, 0]],
334
+ [[1, 1, 1, 0, 0, 0],
335
+ [1, 1, 1, 0, 0, 0],
336
+ [1, 1, 1, 0, 0, 0],
337
+ [1, 1, 1, 0, 0, 0],
338
+ [1, 1, 1, 0, 0, 0],
339
+ [1, 1, 1, 0, 0, 0]],
340
+ [[1, 1, 0, 0, 0, 0],
341
+ [1, 1, 0, 0, 0, 0],
342
+ [1, 1, 0, 0, 0, 0],
343
+ [1, 1, 0, 0, 0, 0],
344
+ [1, 1, 0, 0, 0, 0],
345
+ [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8)
346
+ """
347
+ return ~make_pad_mask(lengths, xs, length_dim)
348
+
349
+
350
+ def get_mask_from_lengths(lengths):
351
+ max_len = torch.max(lengths).item()
352
+ ids = torch.arange(0, max_len).to(lengths.device)
353
+ mask = (ids < lengths.unsqueeze(1)).bool()
354
+ return mask
355
+
356
+
357
+ def group_hidden_by_segs(h, seg_ids, max_len):
358
+ """
359
+
360
+ :param h: [B, T, H]
361
+ :param seg_ids: [B, T]
362
+ :return: h_ph: [B, T_ph, H]
363
+ """
364
+ B, T, H = h.shape
365
+ h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h)
366
+ all_ones = h.new_ones(h.shape[:2])
367
+ cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous()
368
+ h_gby_segs = h_gby_segs[:, 1:]
369
+ cnt_gby_segs = cnt_gby_segs[:, 1:]
370
+ h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1)
371
+ return h_gby_segs, cnt_gby_segs
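+ # Sketch (illustrative shapes): with seg_ids = torch.tensor([[1, 1, 2, 2]]) and
+ # max_len=2, h_gby_segs[:, 0] is the mean of frames 0-1, h_gby_segs[:, 1] the
+ # mean of frames 2-3, and cnt_gby_segs comes back as tensor([[2., 2.]]).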
vocoders/__init__.py ADDED
@@ -0,0 +1,2 @@
+ from vocoders import hifigan
2
+ from vocoders import fastdiff
vocoders/base_vocoder.py ADDED
@@ -0,0 +1,39 @@
+ import importlib
2
+ VOCODERS = {}
3
+
4
+
5
+ def register_vocoder(cls):
6
+ VOCODERS[cls.__name__.lower()] = cls
7
+ VOCODERS[cls.__name__] = cls
8
+ return cls
9
+
10
+
11
+ def get_vocoder_cls(hparams):
12
+ if hparams['vocoder'] in VOCODERS:
13
+ return VOCODERS[hparams['vocoder']]
14
+ else:
15
+ vocoder_cls = hparams['vocoder']
16
+ pkg = ".".join(vocoder_cls.split(".")[:-1])
17
+ cls_name = vocoder_cls.split(".")[-1]
18
+ vocoder_cls = getattr(importlib.import_module(pkg), cls_name)
19
+ return vocoder_cls
20
+
21
+
22
+ class BaseVocoder:
23
+ def spec2wav(self, mel):
24
+ """
25
+
26
+ :param mel: [T, 80]
27
+ :return: wav: [T']
28
+ """
29
+
30
+ raise NotImplementedError
31
+
32
+ @staticmethod
33
+ def wav2spec(wav_fn):
34
+ """
35
+
36
+ :param wav_fn: str
37
+ :return: wav, mel: [T, 80]
38
+ """
39
+ raise NotImplementedError
vocoders/fastdiff.py ADDED
@@ -0,0 +1,162 @@
+ import glob
2
+ import re
3
+ import librosa
4
+ import torch
5
+ import yaml
6
+ from sklearn.preprocessing import StandardScaler
7
+ from torch import nn
8
+ from modules.FastDiff.module.FastDiff_model import FastDiff as FastDiff_model
9
+ from utils.hparams import hparams
10
+ from modules.parallel_wavegan.utils import read_hdf5
11
+ from vocoders.base_vocoder import BaseVocoder, register_vocoder
12
+ import numpy as np
13
+ from modules.FastDiff.module.util import theta_timestep_loss, compute_hyperparams_given_schedule, sampling_given_noise_schedule
14
+
15
+ def load_fastdiff_model(config_path, checkpoint_path):
16
+ # load config
17
+ with open(config_path) as f:
18
+ config = yaml.load(f, Loader=yaml.Loader)
19
+
20
+ # setup
21
+ if torch.cuda.is_available():
22
+ device = torch.device("cuda")
23
+ else:
24
+ device = torch.device("cpu")
25
+ model = FastDiff_model(audio_channels=config['audio_channels'],
26
+ inner_channels=config['inner_channels'],
27
+ cond_channels=config['cond_channels'],
28
+ upsample_ratios=config['upsample_ratios'],
29
+ lvc_layers_each_block=config['lvc_layers_each_block'],
30
+ lvc_kernel_size=config['lvc_kernel_size'],
31
+ kpnet_hidden_channels=config['kpnet_hidden_channels'],
32
+ kpnet_conv_size=config['kpnet_conv_size'],
33
+ dropout=config['dropout'],
34
+ diffusion_step_embed_dim_in=config['diffusion_step_embed_dim_in'],
35
+ diffusion_step_embed_dim_mid=config['diffusion_step_embed_dim_mid'],
36
+ diffusion_step_embed_dim_out=config['diffusion_step_embed_dim_out'],
37
+ use_weight_norm=config['use_weight_norm'])
38
+
39
+ model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["state_dict"]["model"], strict=True)
40
+
41
+ # Init hyperparameters by linear schedule
42
+ noise_schedule = torch.linspace(float(config["beta_0"]), float(config["beta_T"]), int(config["T"])).cuda()
43
+ diffusion_hyperparams = compute_hyperparams_given_schedule(noise_schedule)
44
+
45
+ # map diffusion hyperparameters to gpu
46
+ for key in diffusion_hyperparams:
47
+ if key in ["beta", "alpha", "sigma"]:
48
+ diffusion_hyperparams[key] = diffusion_hyperparams[key].cuda()
49
+ diffusion_hyperparams = diffusion_hyperparams
50
+
51
+
52
+ if config['noise_schedule'] != '':
53
+ noise_schedule = config['noise_schedule']
54
+ if isinstance(noise_schedule, list):
55
+ noise_schedule = torch.FloatTensor(noise_schedule).cuda()
56
+ else:
57
+ # Select Schedule
58
+ try:
59
+ reverse_step = int(hparams.get('N'))
60
+ except:
61
+ print('Please specify $N (the number of revere iterations) in config file. Now denoise with 4 iterations.')
62
+ reverse_step = 4
63
+ if reverse_step == 1000:
64
+ noise_schedule = torch.linspace(0.000001, 0.01, 1000).cuda()
65
+ elif reverse_step == 200:
66
+ noise_schedule = torch.linspace(0.0001, 0.02, 200).cuda()
67
+
68
+ # Below are schedules derived by Noise Predictor
69
+ elif reverse_step == 8:
70
+ noise_schedule = [6.689325005027058e-07, 1.0033881153503899e-05, 0.00015496854030061513,
71
+ 0.002387222135439515, 0.035597629845142365, 0.3681158423423767, 0.4735414385795593, 0.5]
72
+ elif reverse_step == 6:
73
+ noise_schedule = [1.7838445955931093e-06, 2.7984189728158526e-05, 0.00043231004383414984,
74
+ 0.006634317338466644, 0.09357017278671265, 0.6000000238418579]
75
+ elif reverse_step == 4:
76
+ noise_schedule = [3.2176e-04, 2.5743e-03, 2.5376e-02, 7.0414e-01]
77
+ elif reverse_step == 3:
78
+ noise_schedule = [9.0000e-05, 9.0000e-03, 6.0000e-01]
79
+ else:
80
+ raise NotImplementedError
81
+
82
+ if isinstance(noise_schedule, list):
83
+ noise_schedule = torch.FloatTensor(noise_schedule).cuda()
84
+
85
+ model.remove_weight_norm()
86
+ model = model.eval().to(device)
87
+ print(f"| Loaded model parameters from {checkpoint_path}.")
88
+ print(f"| FastDiff device: {device}.")
89
+ return model, diffusion_hyperparams, noise_schedule, config, device
90
+
91
+
92
+ @register_vocoder
93
+ class FastDiff(BaseVocoder):
94
+ def __init__(self):
95
+ if hparams['vocoder_ckpt'] == '': # load LJSpeech FastDiff pretrained model
96
+ base_dir = 'checkpoint/FastDiff'
97
+ config_path = f'{base_dir}/config.yaml'
98
+ ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
99
+ lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
100
+ print('| load FastDiff: ', ckpt)
101
+ self.scaler = None
102
+ self.model, self.dh, self.noise_schedule, self.config, self.device = load_fastdiff_model(
103
+ config_path=config_path,
104
+ checkpoint_path=ckpt,
105
+ )
106
+ else:
107
+ base_dir = hparams['vocoder_ckpt']
108
+ print(base_dir)
109
+ config_path = f'{base_dir}/config.yaml'
110
+ ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
111
+ lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
112
+ print('| load FastDiff: ', ckpt)
113
+ self.scaler = None
114
+ self.model, self.dh, self.noise_schedule, self.config, self.device = load_fastdiff_model(
115
+ config_path=config_path,
116
+ checkpoint_path=ckpt,
117
+ )
118
+
119
+ def spec2wav(self, mel, **kwargs):
120
+ # start generation
121
+ device = self.device
122
+ with torch.no_grad():
123
+ c = torch.FloatTensor(mel).unsqueeze(0).transpose(2, 1).to(device)
124
+ audio_length = c.shape[-1] * hparams["hop_size"]
125
+ y = sampling_given_noise_schedule(
126
+ self.model, (1, 1, audio_length), self.dh, self.noise_schedule, condition=c, ddim=False, return_sequence=False)
127
+ wav_out = y.cpu().numpy()
128
+ return wav_out
129
+
130
+ @staticmethod
131
+ def wav2spec(wav_fn, return_linear=False):
132
+ from data_gen.tts.data_gen_utils import process_utterance
133
+ res = process_utterance(
134
+ wav_fn, fft_size=hparams['fft_size'],
135
+ hop_size=hparams['hop_size'],
136
+ win_length=hparams['win_size'],
137
+ num_mels=hparams['audio_num_mel_bins'],
138
+ fmin=hparams['fmin'],
139
+ fmax=hparams['fmax'],
140
+ sample_rate=hparams['audio_sample_rate'],
141
+ loud_norm=hparams['loud_norm'],
142
+ min_level_db=hparams['min_level_db'],
143
+ return_linear=return_linear, vocoder='fastdiff', eps=float(hparams.get('wav2spec_eps', 1e-10)))
144
+ if return_linear:
145
+ return res[0], res[1].T, res[2].T # [T, 80], [T, n_fft]
146
+ else:
147
+ return res[0], res[1].T
148
+
149
+ @staticmethod
150
+ def wav2mfcc(wav_fn):
151
+ fft_size = hparams['fft_size']
152
+ hop_size = hparams['hop_size']
153
+ win_length = hparams['win_size']
154
+ sample_rate = hparams['audio_sample_rate']
155
+ wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
156
+ mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
157
+ n_fft=fft_size, hop_length=hop_size,
158
+ win_length=win_length, pad_mode="constant", power=1.0)
159
+ mfcc_delta = librosa.feature.delta(mfcc, order=1)
160
+ mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
161
+ mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
162
+ return mfcc
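End to end, `FastDiff.spec2wav` turns a mel spectrogram of shape `[T, 80]` into roughly `T * hop_size` waveform samples by running the reverse diffusion sampler under the selected noise schedule; fewer reverse iterations (`N`) trade quality for speed, and the hard-coded schedules for N = 3/4/6/8 were derived by a noise predictor rather than chosen by hand. A hedged usage sketch, assuming the checkpoint directory layout above and an already-populated `hparams`:

```python
import numpy as np
from vocoders.fastdiff import FastDiff

# assumes checkpoints/FastDiff (or hparams['vocoder_ckpt']) holds config.yaml
# plus a model_ckpt_steps_*.ckpt, and that hparams has been populated
vocoder = FastDiff()
mel = np.random.randn(200, 80).astype(np.float32)  # stand-in for an acoustic model's output
wav = vocoder.spec2wav(mel)  # 1-D numpy waveform, about 200 * hop_size samples
```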
vocoders/hifigan.py ADDED
@@ -0,0 +1,76 @@
+ import glob
+ import json
+ import os
+ import re
+
+ import librosa
+ import torch
+
+ import utils
+ from modules.hifigan.hifigan import HifiGanGenerator
+ from utils.hparams import hparams, set_hparams
+ from vocoders.base_vocoder import register_vocoder
+ from vocoders.pwg import PWG
+ from vocoders.vocoder_utils import denoise
+
+
+ def load_model(config_path, checkpoint_path):
+     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+     ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
+     if '.yaml' in config_path:
+         # checkpoint trained with this code base
+         config = set_hparams(config_path, global_hparams=False)
+         state = ckpt_dict["state_dict"]["model_gen"]
+     elif '.json' in config_path:
+         # official HiFi-GAN release checkpoint
+         config = json.load(open(config_path, 'r'))
+         state = ckpt_dict["generator"]
+     else:
+         raise ValueError(f"Unsupported config format: {config_path}")
+
+     model = HifiGanGenerator(config)
+     model.load_state_dict(state, strict=True)
+     model.remove_weight_norm()
+     model = model.eval().to(device)
+     print(f"| Loaded model parameters from {checkpoint_path}.")
+     print(f"| HifiGAN device: {device}.")
+     return model, config, device
+
+
+ @register_vocoder
+ class HifiGAN(PWG):
+     def __init__(self):
+         base_dir = hparams['vocoder_ckpt']
+         config_path = f'{base_dir}/config.yaml'
+         if os.path.exists(config_path):
+             # checkpoint trained with this code base
+             ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'),
+                           key=lambda x: int(re.findall(rf'{base_dir}/model_ckpt_steps_(\d+)\.ckpt', x)[0]))[-1]
+             print('| load HifiGAN: ', ckpt)
+             self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt)
+         else:
+             # official release: config.json + generator_v1
+             config_path = f'{base_dir}/config.json'
+             ckpt = f'{base_dir}/generator_v1'
+             if os.path.exists(config_path):
+                 self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt)
+
+     def spec2wav(self, mel, **kwargs):
+         device = self.device
+         with torch.no_grad():
+             # mel [T, 80] -> condition c [1, 80, T]
+             c = torch.FloatTensor(mel).unsqueeze(0).transpose(2, 1).to(device)
+             with utils.Timer('hifigan', print_time=hparams['profile_infer']):
+                 f0 = kwargs.get('f0')
+                 if f0 is not None and hparams.get('use_nsf'):
+                     f0 = torch.FloatTensor(f0[None, :]).to(device)
+                     y = self.model(c, f0).view(-1)
+                 else:
+                     y = self.model(c).view(-1)
+         wav_out = y.cpu().numpy()
+         if hparams.get('vocoder_denoise_c', 0.0) > 0:
+             wav_out = denoise(wav_out, v=hparams['vocoder_denoise_c'])
+         return wav_out
+
+     # @staticmethod
+     # def wav2spec(wav_fn, **kwargs):
+     #     wav, _ = librosa.core.load(wav_fn, sr=hparams['audio_sample_rate'])
+     #     wav_torch = torch.FloatTensor(wav)[None, :]
+     #     mel = mel_spectrogram(wav_torch, hparams).numpy()[0]
+     #     return wav, mel.T
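Because `HifiGAN` subclasses `PWG`, it inherits `wav2spec`/`wav2mfcc` and only overrides checkpoint loading and synthesis. A usage sketch, assuming `hparams['vocoder_ckpt']` points at a folder with either a `config.yaml` plus step checkpoint or the official `config.json` plus `generator_v1` pair:

```python
import numpy as np
from vocoders.hifigan import HifiGAN

vocoder = HifiGAN()  # resolves the checkpoint from hparams['vocoder_ckpt']
mel = np.random.randn(200, 80).astype(np.float32)
# with hparams['use_nsf'] enabled, a frame-level f0 contour conditions synthesis;
# otherwise f0 is ignored and the plain generator is used
f0 = np.full(200, 220.0, dtype=np.float32)
wav = vocoder.spec2wav(mel, f0=f0)
```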
vocoders/pwg.py ADDED
@@ -0,0 +1,137 @@
+ import glob
+ import re
+
+ import librosa
+ import numpy as np
+ import torch
+ import yaml
+ from sklearn.preprocessing import StandardScaler
+ from torch import nn
+
+ from modules.parallel_wavegan.models import ParallelWaveGANGenerator
+ from modules.parallel_wavegan.utils import read_hdf5
+ from utils.hparams import hparams
+ from utils.pitch_utils import f0_to_coarse
+ from vocoders.base_vocoder import BaseVocoder, register_vocoder
+
+
+ def load_pwg_model(config_path, checkpoint_path, stats_path):
+     # load config
+     with open(config_path) as f:
+         config = yaml.load(f, Loader=yaml.Loader)
+
+     # setup device and generator
+     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+     model = ParallelWaveGANGenerator(**config["generator_params"])
+
+     ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
+     if 'state_dict' not in ckpt_dict:  # official vocoder checkpoint
+         model.load_state_dict(ckpt_dict["model"]["generator"])
+         # official checkpoints expect mels normalized with the training statistics
+         scaler = StandardScaler()
+         if config["format"] == "hdf5":
+             scaler.mean_ = read_hdf5(stats_path, "mean")
+             scaler.scale_ = read_hdf5(stats_path, "scale")
+         elif config["format"] == "npy":
+             scaler.mean_ = np.load(stats_path)[0]
+             scaler.scale_ = np.load(stats_path)[1]
+         else:
+             raise ValueError("support only hdf5 or npy format.")
+     else:  # custom PWG vocoder
+         fake_task = nn.Module()
+         fake_task.model_gen = model
+         fake_task.load_state_dict(ckpt_dict["state_dict"], strict=False)
+         scaler = None
+
+     model.remove_weight_norm()
+     model = model.eval().to(device)
+     print(f"| Loaded model parameters from {checkpoint_path}.")
+     print(f"| PWG device: {device}.")
+     return model, scaler, config, device
+
+
+ @register_vocoder
+ class PWG(BaseVocoder):
+     def __init__(self):
+         if hparams['vocoder_ckpt'] == '':
+             # load the LJSpeech PWG pretrained model
+             base_dir = 'wavegan_pretrained'
+             ckpts = glob.glob(f'{base_dir}/checkpoint-*steps.pkl')
+             ckpt = sorted(ckpts,
+                           key=lambda x: int(re.findall(rf'{base_dir}/checkpoint-(\d+)steps\.pkl', x)[0]))[-1]
+             config_path = f'{base_dir}/config.yaml'
+             print('| load PWG: ', ckpt)
+             self.model, self.scaler, self.config, self.device = load_pwg_model(
+                 config_path=config_path,
+                 checkpoint_path=ckpt,
+                 stats_path=f'{base_dir}/stats.h5',
+             )
+         else:
+             base_dir = hparams['vocoder_ckpt']
+             print(base_dir)
+             config_path = f'{base_dir}/config.yaml'
+             ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'),
+                           key=lambda x: int(re.findall(rf'{base_dir}/model_ckpt_steps_(\d+)\.ckpt', x)[0]))[-1]
+             print('| load PWG: ', ckpt)
+             self.scaler = None
+             self.model, _, self.config, self.device = load_pwg_model(
+                 config_path=config_path,
+                 checkpoint_path=ckpt,
+                 stats_path=f'{base_dir}/stats.h5',
+             )
+
+     def spec2wav(self, mel, **kwargs):
+         # start generation
+         config = self.config
+         device = self.device
+         pad_size = (config["generator_params"]["aux_context_window"],
+                     config["generator_params"]["aux_context_window"])
+         c = mel
+         if self.scaler is not None:
+             c = self.scaler.transform(c)
+
+         with torch.no_grad():
+             # random noise input, one sample per output waveform point
+             z = torch.randn(1, 1, c.shape[0] * config["hop_size"]).to(device)
+             # pad the condition by the auxiliary context window on the time axis
+             c = np.pad(c, (pad_size, (0, 0)), "edge")
+             c = torch.FloatTensor(c).unsqueeze(0).transpose(2, 1).to(device)
+             p = kwargs.get('f0')
+             if p is not None:
+                 p = f0_to_coarse(p)
+                 p = np.pad(p, (pad_size,), "edge")
+                 p = torch.LongTensor(p[None, :]).to(device)
+             y = self.model(z, c, p).view(-1)
+             wav_out = y.cpu().numpy()
+         return wav_out
+
+     @staticmethod
+     def wav2spec(wav_fn, return_linear=False):
+         from data_gen.tts.data_gen_utils import process_utterance
+         res = process_utterance(
+             wav_fn, fft_size=hparams['fft_size'],
+             hop_size=hparams['hop_size'],
+             win_length=hparams['win_size'],
+             num_mels=hparams['audio_num_mel_bins'],
+             fmin=hparams['fmin'],
+             fmax=hparams['fmax'],
+             sample_rate=hparams['audio_sample_rate'],
+             loud_norm=hparams['loud_norm'],
+             min_level_db=hparams['min_level_db'],
+             return_linear=return_linear, vocoder='pwg', eps=float(hparams.get('wav2spec_eps', 1e-10)))
+         if return_linear:
+             return res[0], res[1].T, res[2].T  # [T, 80], [T, n_fft]
+         else:
+             return res[0], res[1].T
+
+     @staticmethod
+     def wav2mfcc(wav_fn):
+         fft_size = hparams['fft_size']
+         hop_size = hparams['hop_size']
+         win_length = hparams['win_size']
+         sample_rate = hparams['audio_sample_rate']
+         wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
+         mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
+                                     n_fft=fft_size, hop_length=hop_size,
+                                     win_length=win_length, pad_mode="constant", power=1.0)
+         # stack MFCCs with first- and second-order deltas -> [T, 39]
+         mfcc_delta = librosa.feature.delta(mfcc, order=1)
+         mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
+         mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
+         return mfcc
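One detail worth noting: official ParallelWaveGAN checkpoints were trained on mels normalized with dataset statistics, so `load_pwg_model` rebuilds a `StandardScaler` from `stats.h5`/`stats.npy` and `spec2wav` applies it before synthesis, while custom checkpoints skip it (`scaler = None`). A standalone sketch of that normalization step, with illustrative statistics standing in for the released ones:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.mean_ = np.zeros(80)   # illustrative; read from stats.h5/stats.npy in practice
scaler.scale_ = np.ones(80)

mel = np.random.randn(200, 80)    # [T, 80] mel from an acoustic model
mel_norm = scaler.transform(mel)  # (mel - mean_) / scale_, per mel bin
```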
vocoders/vocoder_utils.py ADDED
@@ -0,0 +1,15 @@
+ import librosa
+ import numpy as np
+
+ from utils.hparams import hparams
+
+
+ def denoise(wav, v=0.1):
+     # spectral subtraction: remove a constant noise floor v from the STFT magnitude
+     spec = librosa.stft(y=wav, n_fft=hparams['fft_size'], hop_length=hparams['hop_size'],
+                         win_length=hparams['win_size'], pad_mode='constant')
+     spec_m = np.abs(spec)
+     spec_m = np.clip(spec_m - v, a_min=0, a_max=None)
+     spec_a = np.angle(spec)
+
+     # resynthesize from the attenuated magnitude and the original phase
+     return librosa.istft(spec_m * np.exp(1j * spec_a), hop_length=hparams['hop_size'],
+                          win_length=hparams['win_size'])
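`denoise` is a plain spectral-subtraction post-filter: it subtracts a constant floor `v` from the STFT magnitude, keeps the original phase, and inverts. In `HifiGAN.spec2wav` it is gated by `hparams['vocoder_denoise_c']`, which doubles as the subtraction strength `v`. A small usage sketch, assuming `hparams` already carries the checkpoint's `fft_size`/`hop_size`/`win_size`:

```python
import numpy as np
from vocoders.vocoder_utils import denoise

wav = np.random.randn(22050).astype(np.float32)  # stand-in for a vocoder's output
clean = denoise(wav, v=0.1)  # larger v removes more background noise, but also more signal
```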