王兮楼 committed on
Commit
ef1c0cb
1 Parent(s): d1ceeb9

add project files

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete change set.
Files changed (50)
  1. .DS_Store +0 -0
  2. DATA.MD +42 -0
  3. DATA_EN.MD +46 -0
  4. LICENSE +201 -0
  5. OUTPUT_MODEL/.DS_Store +0 -0
  6. OUTPUT_MODEL/G_34000.pth +3 -0
  7. OUTPUT_MODEL/G_latest.pth +3 -0
  8. OUTPUT_MODEL/config.json +151 -0
  9. OUTPUT_MODEL/eval/events.out.tfevents.1679210307.4ff849f8f2f9.33407.1 +3 -0
  10. OUTPUT_MODEL/eval/events.out.tfevents.1679228480.4ff849f8f2f9.108569.1 +3 -0
  11. OUTPUT_MODEL/eval/events.out.tfevents.1679228797.4ff849f8f2f9.109987.1 +3 -0
  12. OUTPUT_MODEL/eval/events.out.tfevents.1679242558.9c8b6e39e5c7.3485.1 +3 -0
  13. OUTPUT_MODEL/eval/events.out.tfevents.1679275160.b022a1f57ff7.2702.1 +3 -0
  14. OUTPUT_MODEL/eval/events.out.tfevents.1679287878.c0c1548ed6cb.40976.1 +3 -0
  15. OUTPUT_MODEL/githash +1 -0
  16. OUTPUT_MODEL/train.log +0 -0
  17. README.md +55 -12
  18. README_ZH.md +60 -0
  19. __pycache__/attentions.cpython-39.pyc +0 -0
  20. __pycache__/commons.cpython-39.pyc +0 -0
  21. __pycache__/data_utils.cpython-39.pyc +0 -0
  22. __pycache__/losses.cpython-39.pyc +0 -0
  23. __pycache__/mel_processing.cpython-39.pyc +0 -0
  24. __pycache__/models.cpython-39.pyc +0 -0
  25. __pycache__/modules.cpython-39.pyc +0 -0
  26. __pycache__/transforms.cpython-39.pyc +0 -0
  27. __pycache__/utils.cpython-39.pyc +0 -0
  28. attentions.py +303 -0
  29. commons.py +164 -0
  30. configs/finetune_speaker.json +55 -0
  31. configs/modified_finetune_speaker.json +151 -0
  32. configs/uma_trilingual.json +54 -0
  33. data_utils.py +267 -0
  34. denoise_audio.py +18 -0
  35. download_model.py +4 -0
  36. download_video.py +37 -0
  37. final_annotation_train.txt +0 -0
  38. final_annotation_val.txt +0 -0
  39. finetune_speaker.json +151 -0
  40. finetune_speaker_v2.py +323 -0
  41. long_audio_transcribe.py +71 -0
  42. losses.py +61 -0
  43. mel_processing.py +112 -0
  44. models.py +533 -0
  45. models_infer.py +402 -0
  46. modules.py +390 -0
  47. preprocess_v2.py +151 -0
  48. rearrange_speaker.py +37 -0
  49. requirements.txt +24 -0
  50. sampled_audio4ft.txt +0 -0
.DS_Store ADDED
Binary file (10.2 kB). View file
 
DATA.MD ADDED
@@ -0,0 +1,42 @@
+ 本仓库的pipeline支持多种声音样本上传方式,您只需根据您所持有的样本选择任意一种或其中几种即可。
+
+ 1. `.zip`文件打包的,按角色名排列的短音频,该压缩文件结构应如下所示:
+ ```
+ Your-zip-file.zip
+ ├───Character_name_1
+ ├ ├───xxx.wav
+ ├ ├───...
+ ├ ├───yyy.mp3
+ ├ └───zzz.wav
+ ├───Character_name_2
+ ├ ├───xxx.wav
+ ├ ├───...
+ ├ ├───yyy.mp3
+ ├ └───zzz.wav
+ ├───...
+
+ └───Character_name_n
+ ├───xxx.wav
+ ├───...
+ ├───yyy.mp3
+ └───zzz.wav
+ ```
+ 注意音频的格式和名称都不重要,只要它们是音频文件。
+ 质量要求:2秒以上,10秒以内,尽量不要有背景噪音。
+ 数量要求:一个角色至少10条,最好每个角色20条以上。
+ 2. 以角色名命名的长音频文件,音频内只能有单说话人,背景音会被自动去除。命名格式为:`{CharacterName}_{random_number}.wav`
+ (例如:`Diana_234135.wav`, `MinatoAqua_234252.wav`),必须是`.wav`文件,长度要在20分钟以内(否则会内存不足)。
+
+ 3. 以角色名命名的长视频文件,视频内只能有单说话人,背景音会被自动去除。命名格式为:`{CharacterName}_{random_number}.mp4`
+ (例如:`Taffy_332452.mp4`, `Dingzhen_957315.mp4`),必须是`.mp4`文件,长度要在20分钟以内(否则会内存不足)。
+ 注意:命名中,`CharacterName`必须是英文字符,`random_number`是为了区分同一个角色的多个文件,必须要添加,该数字可以为0~999999之间的任意整数。
+
+ 4. 包含多行`{CharacterName}|{video_url}`的`.txt`文件,格式应如下所示:
+ ```
+ Char1|https://xyz.com/video1/
+ Char2|https://xyz.com/video2/
+ Char2|https://xyz.com/video3/
+ Char3|https://xyz.com/video4/
+ ```
+ 视频内只能有单说话人,背景音会被自动去除。目前仅支持来自bilibili的视频,其它网站视频的url还没测试过。
+ 若对格式有疑问,可以在[这里](https://drive.google.com/file/d/132l97zjanpoPY4daLgqXoM7HKXPRbS84/view?usp=sharing)找到所有格式对应的数据样本。
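下面是一段示意脚本(并非本仓库代码,压缩包名沿用上文示例 `Your-zip-file.zip`,音频后缀集合也只是假设),可在上传前自查第 1 种压缩包格式是否满足"每个角色至少 10 条"的数量要求:

```python
import zipfile
from collections import Counter
from pathlib import PurePosixPath

AUDIO_EXTS = {".wav", ".mp3", ".flac", ".m4a"}   # 假设:常见音频后缀都可以

def count_clips_per_character(zip_path: str) -> Counter:
    """统计压缩包内每个角色文件夹下的音频条数(目录结构见上文)。"""
    counts = Counter()
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            p = PurePosixPath(name)
            if name.endswith("/") or len(p.parts) < 2:
                continue                          # 跳过目录项和顶层散落的文件
            if p.suffix.lower() in AUDIO_EXTS:
                counts[p.parts[0]] += 1           # 第一层目录名即角色名
    return counts

if __name__ == "__main__":
    for character, n in count_clips_per_character("Your-zip-file.zip").items():
        print(f"{character}: {n} 条", "(样本不足10条)" if n < 10 else "")
```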
DATA_EN.MD ADDED
@@ -0,0 +1,46 @@
+ The pipeline of this repo supports multiple voice uploading options; you can choose one or more of them depending on the data you have.
+
+ 1. Short audios packed in a single `.zip` file, whose file structure should be as shown below:
+ ```
+ Your-zip-file.zip
+ ├───Character_name_1
+ ├ ├───xxx.wav
+ ├ ├───...
+ ├ ├───yyy.mp3
+ ├ └───zzz.wav
+ ├───Character_name_2
+ ├ ├───xxx.wav
+ ├ ├───...
+ ├ ├───yyy.mp3
+ ├ └───zzz.wav
+ ├───...
+
+ └───Character_name_n
+ ├───xxx.wav
+ ├───...
+ ├───yyy.mp3
+ └───zzz.wav
+ ```
+ Note that the format and names of the audio files do not matter, as long as they are audio files.
+ Quality requirement: >=2s, <=10s, with as little background sound as possible.
+ Quantity requirement: at least 10 clips per character; 20+ per character is recommended.
+ 2. Long audio files named after character names, each containing a single character's voice only. Background sound is
+ acceptable since it will be automatically removed. File name format: `{CharacterName}_{random_number}.wav`
+ (e.g. `Diana_234135.wav`, `MinatoAqua_234252.wav`); they must be `.wav` files and should be shorter than 20 minutes (otherwise processing may run out of memory).
+
+
+ 3. Long video files named after character names, each containing a single character's voice only. Background sound is
+ acceptable since it will be automatically removed. File name format: `{CharacterName}_{random_number}.mp4`
+ (e.g. `Taffy_332452.mp4`, `Dingzhen_957315.mp4`); they must be `.mp4` files and should be shorter than 20 minutes.
+ Note: `CharacterName` must contain English characters only; `random_number` distinguishes multiple files of the same character
+ and is compulsory. It can be any integer between 0 and 999999.
+
+ 4. A `.txt` file containing multiple lines of `{CharacterName}|{video_url}`, formatted as follows:
+ ```
+ Char1|https://xyz.com/video1/
+ Char2|https://xyz.com/video2/
+ Char2|https://xyz.com/video3/
+ Char3|https://xyz.com/video4/
+ ```
+ Each video should contain a single speaker only. Currently only video links from bilibili are supported; other websites have not been tested yet.
+ Having questions about the data format? Find data samples of all formats [here](https://drive.google.com/file/d/132l97zjanpoPY4daLgqXoM7HKXPRbS84/view?usp=sharing).
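As a quick pre-upload check for options 2 and 3, the naming rule can be validated with a few lines of Python. This is a minimal sketch, not part of this repo; `parse_upload_name` is a hypothetical helper name:

```python
import re

# Hypothetical helper (not repo code): checks that a long-audio / long-video upload
# follows the `{CharacterName}_{random_number}.wav` / `.mp4` convention described above.
NAME_RE = re.compile(r"(?P<character>[A-Za-z]+)_(?P<number>\d+)\.(?P<ext>wav|mp4)")

def parse_upload_name(filename: str):
    """Return (character, number, ext) or raise ValueError if the name is malformed."""
    m = NAME_RE.fullmatch(filename)
    if m is None:
        raise ValueError(
            f"{filename!r} does not match '{{CharacterName}}_{{random_number}}.wav/.mp4'"
        )
    number = int(m.group("number"))
    if not 0 <= number <= 999999:
        raise ValueError(f"random_number {number} is outside 0~999999")
    return m.group("character"), number, m.group("ext")

print(parse_upload_name("Diana_234135.wav"))      # ('Diana', 234135, 'wav')
print(parse_upload_name("Dingzhen_957315.mp4"))   # ('Dingzhen', 957315, 'mp4')
```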
LICENSE ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
OUTPUT_MODEL/.DS_Store ADDED
Binary file (6.15 kB). View file
 
OUTPUT_MODEL/G_34000.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0654c881ae70a147e77eed3d747a3c83986a7a9dfa717408122d806e55aae42f
+ size 158902093
OUTPUT_MODEL/G_latest.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:249c4fce2e79d05b386c8757172110abce30c113a3498a2b83314ef9e56d61ca
+ size 158903529
OUTPUT_MODEL/config.json ADDED
@@ -0,0 +1,151 @@
1
+ {
2
+ "train": {
3
+ "log_interval": 100,
4
+ "eval_interval": 1000,
5
+ "seed": 1234,
6
+ "epochs": 10000,
7
+ "learning_rate": 0.0002,
8
+ "betas": [
9
+ 0.8,
10
+ 0.99
11
+ ],
12
+ "eps": 1e-09,
13
+ "batch_size": 16,
14
+ "fp16_run": true,
15
+ "lr_decay": 0.999875,
16
+ "segment_size": 8192,
17
+ "init_lr_ratio": 1,
18
+ "warmup_epochs": 0,
19
+ "c_mel": 45,
20
+ "c_kl": 1.0
21
+ },
22
+ "data": {
23
+ "training_files": "final_annotation_train.txt",
24
+ "validation_files": "final_annotation_val.txt",
25
+ "text_cleaners": [
26
+ "zh_ja_mixture_cleaners"
27
+ ],
28
+ "max_wav_value": 32768.0,
29
+ "sampling_rate": 22050,
30
+ "filter_length": 1024,
31
+ "hop_length": 256,
32
+ "win_length": 1024,
33
+ "n_mel_channels": 80,
34
+ "mel_fmin": 0.0,
35
+ "mel_fmax": null,
36
+ "add_blank": true,
37
+ "n_speakers": 7,
38
+ "cleaned_text": true
39
+ },
40
+ "model": {
41
+ "inter_channels": 192,
42
+ "hidden_channels": 192,
43
+ "filter_channels": 768,
44
+ "n_heads": 2,
45
+ "n_layers": 6,
46
+ "kernel_size": 3,
47
+ "p_dropout": 0.1,
48
+ "resblock": "1",
49
+ "resblock_kernel_sizes": [
50
+ 3,
51
+ 7,
52
+ 11
53
+ ],
54
+ "resblock_dilation_sizes": [
55
+ [
56
+ 1,
57
+ 3,
58
+ 5
59
+ ],
60
+ [
61
+ 1,
62
+ 3,
63
+ 5
64
+ ],
65
+ [
66
+ 1,
67
+ 3,
68
+ 5
69
+ ]
70
+ ],
71
+ "upsample_rates": [
72
+ 8,
73
+ 8,
74
+ 2,
75
+ 2
76
+ ],
77
+ "upsample_initial_channel": 512,
78
+ "upsample_kernel_sizes": [
79
+ 16,
80
+ 16,
81
+ 4,
82
+ 4
83
+ ],
84
+ "n_layers_q": 3,
85
+ "use_spectral_norm": false,
86
+ "gin_channels": 256
87
+ },
88
+ "speakers": {
89
+ "5": 0,
90
+ "0": 1,
91
+ "1": 2,
92
+ "2": 3,
93
+ "3": 4,
94
+ "4": 5,
95
+ "zhongli": 6
96
+ },
97
+ "symbols": [
98
+ "_",
99
+ ",",
100
+ ".",
101
+ "!",
102
+ "?",
103
+ "-",
104
+ "~",
105
+ "\u2026",
106
+ "A",
107
+ "E",
108
+ "I",
109
+ "N",
110
+ "O",
111
+ "Q",
112
+ "U",
113
+ "a",
114
+ "b",
115
+ "d",
116
+ "e",
117
+ "f",
118
+ "g",
119
+ "h",
120
+ "i",
121
+ "j",
122
+ "k",
123
+ "l",
124
+ "m",
125
+ "n",
126
+ "o",
127
+ "p",
128
+ "r",
129
+ "s",
130
+ "t",
131
+ "u",
132
+ "v",
133
+ "w",
134
+ "y",
135
+ "z",
136
+ "\u0283",
137
+ "\u02a7",
138
+ "\u02a6",
139
+ "\u026f",
140
+ "\u0279",
141
+ "\u0259",
142
+ "\u0265",
143
+ "\u207c",
144
+ "\u02b0",
145
+ "`",
146
+ "\u2192",
147
+ "\u2193",
148
+ "\u2191",
149
+ " "
150
+ ]
151
+ }
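Since the exported `OUTPUT_MODEL/config.json` above is plain JSON, downstream tools can read the speaker map and text symbols directly. A minimal sketch (not repo code), assuming it is run from the repo root:

```python
import json

# Inspect what the fine-tuned model exposes, based on the config committed above.
with open("OUTPUT_MODEL/config.json", encoding="utf-8") as f:
    cfg = json.load(f)

print("sampling rate:", cfg["data"]["sampling_rate"])   # 22050 in this commit
print("n_speakers   :", cfg["data"]["n_speakers"])      # 7 in this commit
print("speaker map  :", cfg["speakers"])                # name -> speaker id, e.g. "zhongli": 6
print("symbol count :", len(cfg["symbols"]))            # cleaned-text symbol inventory
```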
OUTPUT_MODEL/eval/events.out.tfevents.1679210307.4ff849f8f2f9.33407.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f753e25108a19b0fab8eed8e953e2ea951c63f1f2007bec200a7c133fc3fe443
+ size 4106744
OUTPUT_MODEL/eval/events.out.tfevents.1679228480.4ff849f8f2f9.108569.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45b4bc8c497058d3b908c86c8af62a73ba6188bc25bf6acb1d75ffa399a0ff0a
+ size 451184
OUTPUT_MODEL/eval/events.out.tfevents.1679228797.4ff849f8f2f9.109987.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac6cc2227727936397dda4f375e64f4d32495aee9eaf89f2ecaa28dbdf58f580
+ size 2386242
OUTPUT_MODEL/eval/events.out.tfevents.1679242558.9c8b6e39e5c7.3485.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b29bb3e059e6bfe1ab6382c6c6d4d838f8db6ebb973c61b8d2bacc22f2c45c44
+ size 608466
OUTPUT_MODEL/eval/events.out.tfevents.1679275160.b022a1f57ff7.2702.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4f985cb3650f674746bcabd7725fa75aa1d6ba787ee34bfbb1bdf24b305c650
+ size 40
OUTPUT_MODEL/eval/events.out.tfevents.1679287878.c0c1548ed6cb.40976.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:041aeec24a3f86ed4bc7204a7c1efe64ad542a5860e18d18fc42ebe757a9e421
+ size 5776731
OUTPUT_MODEL/githash ADDED
@@ -0,0 +1 @@
+ 7b4273f514877a89be072a56a7ff36d3afa2fed1
OUTPUT_MODEL/train.log ADDED
The diff for this file is too large to render. See raw diff
 
README.md CHANGED
@@ -1,12 +1,55 @@
- ---
- title: Zhenhuan VITS
- emoji: 📊
- colorFrom: blue
- colorTo: purple
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ [中文文档请点击这里](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README_ZH.md)
+ # VITS Fast Fine-tuning
+ This repo will guide you through adding your own character voices, or even your own voice, to an existing VITS TTS model
+ so that it can perform the following tasks in less than 1 hour:
+
+ 1. Many-to-many voice conversion between any characters you added & preset characters in the model.
+ 2. English, Japanese & Chinese Text-to-Speech synthesis with the characters you added & preset characters.
+
+
+ Welcome to play around with the base models!
+ Chinese & English & Japanese: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer) Author: Me
+
+ Chinese & Japanese: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/sayashi/vits-uma-genshin-honkai) Author: [SayaSS](https://github.com/SayaSS)
+
+
+ ### Currently Supported Tasks:
+ - [x] Clone character voice from 10+ short audios
+ - [x] Clone character voice from long audio(s) >= 3 minutes (one audio should contain a single speaker only)
+ - [x] Clone character voice from video(s) >= 3 minutes (one video should contain a single speaker only)
+ - [x] Clone character voice from BILIBILI video links (one video should contain a single speaker only)
+
+ ### Currently Supported Characters for TTS & VC:
+ - [x] Any character you wish, as long as you have their voices!
+ (Note that voice conversion can only be conducted between any two speakers in the model)
+
+
+
+ ## Fine-tuning
+ It's recommended to perform fine-tuning on [Google Colab](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)
+ because the original VITS has some dependencies that are difficult to configure.
+
+ ### How long does it take?
+ 1. Install dependencies (3 min)
+ 2. Choose a pretrained model to start from. The detailed differences between them are described in the [Colab Notebook](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)
+ 3. Upload the voice samples of the characters you wish to add; see [DATA_EN.MD](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/DATA_EN.MD) for detailed uploading options.
+ 4. Start fine-tuning. The time taken varies from 20 minutes to 2 hours, depending on the number of voices you uploaded.
+
+
+ ## Inference or Usage (Currently supports Windows only)
+ 0. Remember to download your fine-tuned model!
+ 1. Download the latest release
+ 2. Put your model & config file into the folder `inference`; they should be named `G_latest.pth` and `finetune_speaker.json`, respectively.
+ 3. The file structure should be as follows:
+ ```
+ inference
+ ├───inference.exe
+ ├───...
+ ├───finetune_speaker.json
+ └───G_latest.pth
+ ```
+ 4. Run `inference.exe`; the browser should pop up automatically.
+
+ ## Use in MoeGoe
+ 0. Prepare the downloaded model & config file, named `G_latest.pth` and `moegoe_config.json`, respectively.
+ 1. Follow the instructions on the [MoeGoe](https://github.com/CjangCjengh/MoeGoe) page to install it, configure the paths, and use it.
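Before launching `inference.exe`, the folder layout from step 3 of the Inference section can be sanity-checked with a short script. This is a hypothetical helper, not part of the release:

```python
from pathlib import Path

# Hypothetical check (not repo code): verify the `inference` folder contains the
# renamed model and config files described above.
def check_inference_folder(folder: str = "inference") -> None:
    required = ["G_latest.pth", "finetune_speaker.json"]
    missing = [name for name in required if not (Path(folder) / name).is_file()]
    if missing:
        raise FileNotFoundError(f"missing in {folder}/: {', '.join(missing)}")
    print("inference folder looks complete")

check_inference_folder()
```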
README_ZH.md ADDED
@@ -0,0 +1,60 @@
1
+ English Documentation Please Click [here](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README.md)
2
+ # VITS 快速微调
3
+ 这个代码库会指导你如何将自定义角色(甚至你自己),加入预训练的VITS模型中,在1小时内的微调使模型具备如下功能:
4
+ 1. 在 模型所包含的任意两个角色 之间进行声线转换
5
+ 2. 以 你加入的角色声线 进行中日英三语 文本到语音合成。
6
+
7
+ 本项目使用的底模涵盖常见二次元男/女配音声线(来自原神数据集)以及现实世界常见男/女声线(来自VCTK数据集),支持中日英三语,保证能够在微调时快速适应新的声线。
8
+
9
+ 欢迎体验微调所使用的底模!
10
+
11
+ 中日英:[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer) 作者:我
12
+
13
+ 中日:[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/sayashi/vits-uma-genshin-honkai) 作者:[SayaSS](https://github.com/SayaSS)
14
+
15
+ ### 目前支持的任务:
16
+ - [x] 从 10条以上的短音频 克隆角色声音
17
+ - [x] 从 3分钟以上的长音频(单个音频只能包含单说话人) 克隆角色声音
18
+ - [x] 从 3分钟以上的视频(单个视频只能包含单说话人) 克隆角色声音
19
+ - [x] 通过输入 bilibili视频链接(单个视频只能包含单说话人) 克隆角色声音
20
+
21
+ ### 目前支持声线转换和中日英三语TTS的角色
22
+ - [x] 任意角色(只要你有角色的声音样本)
23
+ (注意:声线转换只能在任意两个存在于模型中的说话人之间进行)
24
+
25
+
26
+
27
+
28
+ ## 微调
29
+ 建议使用 [Google Colab](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)
30
+ 进行微调任务,因为VITS在多语言情况下的某些环境依赖相当难以配置。
31
+ ### 在Google Colab里,我需要花多长时间?
32
+ 1. 安装依赖 (3 min)
33
+ 2. 选择预训练模型,详细区别参见[Colab 笔记本页面](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)。
34
+ 3. 上传你希望加入的其它角色声音,详细上传方式见[DATA.MD](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/DATA.MD)
35
+ 4. 进行微调,根据选择的微调方式和样本数量不同,花费时长可能在20分钟到2小时不等。
36
+
37
+ 微调结束后可以直接下载微调好的模型,日后在本地运行(不需要GPU)
38
+
39
+ ## 本地运行和推理
40
+ 0. 记得下载微调好的模型和config文件!
41
+ 1. 下载最新的Release包(在Github页面的右侧)
42
+ 2. 把下载的模型和config文件放在 `inference`文件夹下, 其文件名分别为 `G_latest.pth` 和 `finetune_speaker.json`。
43
+ 3. 一切准备就绪后,文件结构应该如下所示:
44
+ ```
45
+ inference
46
+ ├───inference.exe
47
+ ├───...
48
+ ├───finetune_speaker.json
49
+ └───G_latest.pth
50
+ ```
51
+ 4. 运行 `inference.exe`, 浏览器会自动弹出窗口, 注意其所在路径不能有中文字符或者空格.
52
+
53
+ ## 在MoeGoe使用
54
+ 0. MoeGoe以及类似其它VITS推理UI使用的config格式略有不同,需要下载的文件为模型`G_latest.pth`和配置文件`moegoe_config.json`
55
+ 1. 按照[MoeGoe](https://github.com/CjangCjengh/MoeGoe)页面的提示配置路径即可使用。
56
+ 2. MoeGoe在输入句子时需要使用相应的语言标记包裹句子才能正常合成。(日语用[JA], 中文用[ZH], 英文用[EN]),例如:
57
+ [JA]こんにちわ。[JA]
58
+ [ZH]你好![ZH]
59
+ [EN]Hello![EN]
60
+
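上面的语言标记只是简单的字符串包裹;下面是一个示意用的辅助函数(非本仓库代码,函数名为假设),用于为 MoeGoe 类推理界面准备输入句子:

```python
# 示意代码:按 MoeGoe 的约定,用语言标记包裹一句话,例如 [ZH]你好![ZH]。
def tag_sentence(text: str, lang: str) -> str:
    lang = lang.upper()
    if lang not in {"ZH", "JA", "EN"}:
        raise ValueError(f"不支持的语言标记: {lang}")
    return f"[{lang}]{text}[{lang}]"

print(tag_sentence("こんにちわ。", "ja"))   # [JA]こんにちわ。[JA]
print(tag_sentence("你好!", "zh"))          # [ZH]你好![ZH]
print(tag_sentence("Hello!", "en"))          # [EN]Hello![EN]
```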
__pycache__/attentions.cpython-39.pyc ADDED
Binary file (9.57 kB). View file
 
__pycache__/commons.cpython-39.pyc ADDED
Binary file (5.87 kB). View file
 
__pycache__/data_utils.cpython-39.pyc ADDED
Binary file (8.58 kB). View file
 
__pycache__/losses.cpython-39.pyc ADDED
Binary file (1.53 kB). View file
 
__pycache__/mel_processing.cpython-39.pyc ADDED
Binary file (3.33 kB). View file
 
__pycache__/models.cpython-39.pyc ADDED
Binary file (15.2 kB). View file
 
__pycache__/modules.cpython-39.pyc ADDED
Binary file (11.4 kB). View file
 
__pycache__/transforms.cpython-39.pyc ADDED
Binary file (3.89 kB). View file
 
__pycache__/utils.cpython-39.pyc ADDED
Binary file (11.1 kB). View file
 
attentions.py ADDED
@@ -0,0 +1,303 @@
1
+ import copy
2
+ import math
3
+ import numpy as np
4
+ import torch
5
+ from torch import nn
6
+ from torch.nn import functional as F
7
+
8
+ import commons
9
+ import modules
10
+ from modules import LayerNorm
11
+
12
+
13
+ class Encoder(nn.Module):
14
+ def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
15
+ super().__init__()
16
+ self.hidden_channels = hidden_channels
17
+ self.filter_channels = filter_channels
18
+ self.n_heads = n_heads
19
+ self.n_layers = n_layers
20
+ self.kernel_size = kernel_size
21
+ self.p_dropout = p_dropout
22
+ self.window_size = window_size
23
+
24
+ self.drop = nn.Dropout(p_dropout)
25
+ self.attn_layers = nn.ModuleList()
26
+ self.norm_layers_1 = nn.ModuleList()
27
+ self.ffn_layers = nn.ModuleList()
28
+ self.norm_layers_2 = nn.ModuleList()
29
+ for i in range(self.n_layers):
30
+ self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
31
+ self.norm_layers_1.append(LayerNorm(hidden_channels))
32
+ self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
33
+ self.norm_layers_2.append(LayerNorm(hidden_channels))
34
+
35
+ def forward(self, x, x_mask):
36
+ attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
37
+ x = x * x_mask
38
+ for i in range(self.n_layers):
39
+ y = self.attn_layers[i](x, x, attn_mask)
40
+ y = self.drop(y)
41
+ x = self.norm_layers_1[i](x + y)
42
+
43
+ y = self.ffn_layers[i](x, x_mask)
44
+ y = self.drop(y)
45
+ x = self.norm_layers_2[i](x + y)
46
+ x = x * x_mask
47
+ return x
48
+
49
+
50
+ class Decoder(nn.Module):
51
+ def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
52
+ super().__init__()
53
+ self.hidden_channels = hidden_channels
54
+ self.filter_channels = filter_channels
55
+ self.n_heads = n_heads
56
+ self.n_layers = n_layers
57
+ self.kernel_size = kernel_size
58
+ self.p_dropout = p_dropout
59
+ self.proximal_bias = proximal_bias
60
+ self.proximal_init = proximal_init
61
+
62
+ self.drop = nn.Dropout(p_dropout)
63
+ self.self_attn_layers = nn.ModuleList()
64
+ self.norm_layers_0 = nn.ModuleList()
65
+ self.encdec_attn_layers = nn.ModuleList()
66
+ self.norm_layers_1 = nn.ModuleList()
67
+ self.ffn_layers = nn.ModuleList()
68
+ self.norm_layers_2 = nn.ModuleList()
69
+ for i in range(self.n_layers):
70
+ self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
71
+ self.norm_layers_0.append(LayerNorm(hidden_channels))
72
+ self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
73
+ self.norm_layers_1.append(LayerNorm(hidden_channels))
74
+ self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
75
+ self.norm_layers_2.append(LayerNorm(hidden_channels))
76
+
77
+ def forward(self, x, x_mask, h, h_mask):
78
+ """
79
+ x: decoder input
80
+ h: encoder output
81
+ """
82
+ self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
83
+ encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
84
+ x = x * x_mask
85
+ for i in range(self.n_layers):
86
+ y = self.self_attn_layers[i](x, x, self_attn_mask)
87
+ y = self.drop(y)
88
+ x = self.norm_layers_0[i](x + y)
89
+
90
+ y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
91
+ y = self.drop(y)
92
+ x = self.norm_layers_1[i](x + y)
93
+
94
+ y = self.ffn_layers[i](x, x_mask)
95
+ y = self.drop(y)
96
+ x = self.norm_layers_2[i](x + y)
97
+ x = x * x_mask
98
+ return x
99
+
100
+
101
+ class MultiHeadAttention(nn.Module):
102
+ def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
103
+ super().__init__()
104
+ assert channels % n_heads == 0
105
+
106
+ self.channels = channels
107
+ self.out_channels = out_channels
108
+ self.n_heads = n_heads
109
+ self.p_dropout = p_dropout
110
+ self.window_size = window_size
111
+ self.heads_share = heads_share
112
+ self.block_length = block_length
113
+ self.proximal_bias = proximal_bias
114
+ self.proximal_init = proximal_init
115
+ self.attn = None
116
+
117
+ self.k_channels = channels // n_heads
118
+ self.conv_q = nn.Conv1d(channels, channels, 1)
119
+ self.conv_k = nn.Conv1d(channels, channels, 1)
120
+ self.conv_v = nn.Conv1d(channels, channels, 1)
121
+ self.conv_o = nn.Conv1d(channels, out_channels, 1)
122
+ self.drop = nn.Dropout(p_dropout)
123
+
124
+ if window_size is not None:
125
+ n_heads_rel = 1 if heads_share else n_heads
126
+ rel_stddev = self.k_channels**-0.5
127
+ self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
128
+ self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
129
+
130
+ nn.init.xavier_uniform_(self.conv_q.weight)
131
+ nn.init.xavier_uniform_(self.conv_k.weight)
132
+ nn.init.xavier_uniform_(self.conv_v.weight)
133
+ if proximal_init:
134
+ with torch.no_grad():
135
+ self.conv_k.weight.copy_(self.conv_q.weight)
136
+ self.conv_k.bias.copy_(self.conv_q.bias)
137
+
138
+ def forward(self, x, c, attn_mask=None):
139
+ q = self.conv_q(x)
140
+ k = self.conv_k(c)
141
+ v = self.conv_v(c)
142
+
143
+ x, self.attn = self.attention(q, k, v, mask=attn_mask)
144
+
145
+ x = self.conv_o(x)
146
+ return x
147
+
148
+ def attention(self, query, key, value, mask=None):
149
+ # reshape [b, d, t] -> [b, n_h, t, d_k]
150
+ b, d, t_s, t_t = (*key.size(), query.size(2))
151
+ query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
152
+ key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
153
+ value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
154
+
155
+ scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
156
+ if self.window_size is not None:
157
+ assert t_s == t_t, "Relative attention is only available for self-attention."
158
+ key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
159
+ rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
160
+ scores_local = self._relative_position_to_absolute_position(rel_logits)
161
+ scores = scores + scores_local
162
+ if self.proximal_bias:
163
+ assert t_s == t_t, "Proximal bias is only available for self-attention."
164
+ scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
165
+ if mask is not None:
166
+ scores = scores.masked_fill(mask == 0, -1e4)
167
+ if self.block_length is not None:
168
+ assert t_s == t_t, "Local attention is only available for self-attention."
169
+ block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
170
+ scores = scores.masked_fill(block_mask == 0, -1e4)
171
+ p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
172
+ p_attn = self.drop(p_attn)
173
+ output = torch.matmul(p_attn, value)
174
+ if self.window_size is not None:
175
+ relative_weights = self._absolute_position_to_relative_position(p_attn)
176
+ value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
177
+ output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
178
+ output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
179
+ return output, p_attn
180
+
181
+ def _matmul_with_relative_values(self, x, y):
182
+ """
183
+ x: [b, h, l, m]
184
+ y: [h or 1, m, d]
185
+ ret: [b, h, l, d]
186
+ """
187
+ ret = torch.matmul(x, y.unsqueeze(0))
188
+ return ret
189
+
190
+ def _matmul_with_relative_keys(self, x, y):
191
+ """
192
+ x: [b, h, l, d]
193
+ y: [h or 1, m, d]
194
+ ret: [b, h, l, m]
195
+ """
196
+ ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
197
+ return ret
198
+
199
+ def _get_relative_embeddings(self, relative_embeddings, length):
200
+ max_relative_position = 2 * self.window_size + 1
201
+ # Pad first before slice to avoid using cond ops.
202
+ pad_length = max(length - (self.window_size + 1), 0)
203
+ slice_start_position = max((self.window_size + 1) - length, 0)
204
+ slice_end_position = slice_start_position + 2 * length - 1
205
+ if pad_length > 0:
206
+ padded_relative_embeddings = F.pad(
207
+ relative_embeddings,
208
+ commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
209
+ else:
210
+ padded_relative_embeddings = relative_embeddings
211
+ used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
212
+ return used_relative_embeddings
213
+
214
+ def _relative_position_to_absolute_position(self, x):
215
+ """
216
+ x: [b, h, l, 2*l-1]
217
+ ret: [b, h, l, l]
218
+ """
219
+ batch, heads, length, _ = x.size()
220
+ # Concat columns of pad to shift from relative to absolute indexing.
221
+ x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
222
+
223
+ # Concat extra elements so to add up to shape (len+1, 2*len-1).
224
+ x_flat = x.view([batch, heads, length * 2 * length])
225
+ x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
226
+
227
+ # Reshape and slice out the padded elements.
228
+ x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
229
+ return x_final
230
+
231
+ def _absolute_position_to_relative_position(self, x):
232
+ """
233
+ x: [b, h, l, l]
234
+ ret: [b, h, l, 2*l-1]
235
+ """
236
+ batch, heads, length, _ = x.size()
237
+ # padd along column
238
+ x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
239
+ x_flat = x.view([batch, heads, length**2 + length*(length -1)])
240
+ # add 0's in the beginning that will skew the elements after reshape
241
+ x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
242
+ x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
243
+ return x_final
244
+
245
+ def _attention_bias_proximal(self, length):
246
+ """Bias for self-attention to encourage attention to close positions.
247
+ Args:
248
+ length: an integer scalar.
249
+ Returns:
250
+ a Tensor with shape [1, 1, length, length]
251
+ """
252
+ r = torch.arange(length, dtype=torch.float32)
253
+ diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
254
+ return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
255
+
256
+
257
+ class FFN(nn.Module):
258
+ def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
259
+ super().__init__()
260
+ self.in_channels = in_channels
261
+ self.out_channels = out_channels
262
+ self.filter_channels = filter_channels
263
+ self.kernel_size = kernel_size
264
+ self.p_dropout = p_dropout
265
+ self.activation = activation
266
+ self.causal = causal
267
+
268
+ if causal:
269
+ self.padding = self._causal_padding
270
+ else:
271
+ self.padding = self._same_padding
272
+
273
+ self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
274
+ self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
275
+ self.drop = nn.Dropout(p_dropout)
276
+
277
+ def forward(self, x, x_mask):
278
+ x = self.conv_1(self.padding(x * x_mask))
279
+ if self.activation == "gelu":
280
+ x = x * torch.sigmoid(1.702 * x)
281
+ else:
282
+ x = torch.relu(x)
283
+ x = self.drop(x)
284
+ x = self.conv_2(self.padding(x * x_mask))
285
+ return x * x_mask
286
+
287
+ def _causal_padding(self, x):
288
+ if self.kernel_size == 1:
289
+ return x
290
+ pad_l = self.kernel_size - 1
291
+ pad_r = 0
292
+ padding = [[0, 0], [0, 0], [pad_l, pad_r]]
293
+ x = F.pad(x, commons.convert_pad_shape(padding))
294
+ return x
295
+
296
+ def _same_padding(self, x):
297
+ if self.kernel_size == 1:
298
+ return x
299
+ pad_l = (self.kernel_size - 1) // 2
300
+ pad_r = self.kernel_size // 2
301
+ padding = [[0, 0], [0, 0], [pad_l, pad_r]]
302
+ x = F.pad(x, commons.convert_pad_shape(padding))
303
+ return x
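A quick way to sanity-check the `Encoder` stack defined in `attentions.py` above is to run a dummy batch through it with the hyperparameters from `OUTPUT_MODEL/config.json`. This is a usage sketch, not repo code, and assumes the repo's `commons` and `modules` files are importable from the working directory:

```python
import torch

import commons
from attentions import Encoder

# Hyperparameters mirror the "model" section of OUTPUT_MODEL/config.json.
enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=6, kernel_size=3, p_dropout=0.1)

x = torch.randn(2, 192, 50)                    # [batch, channels, frames]
x_lengths = torch.tensor([50, 35])
x_mask = commons.sequence_mask(x_lengths, 50).unsqueeze(1).float()  # [batch, 1, frames]

out = enc(x, x_mask)
print(out.shape)                               # torch.Size([2, 192, 50])
```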
commons.py ADDED
@@ -0,0 +1,164 @@
1
+ import math
2
+ import numpy as np
3
+ import torch
4
+ from torch import nn
5
+ from torch.nn import functional as F
6
+
7
+
8
+ def init_weights(m, mean=0.0, std=0.01):
9
+ classname = m.__class__.__name__
10
+ if classname.find("Conv") != -1:
11
+ m.weight.data.normal_(mean, std)
12
+
13
+
14
+ def get_padding(kernel_size, dilation=1):
15
+ return int((kernel_size*dilation - dilation)/2)
16
+
17
+
18
+ def convert_pad_shape(pad_shape):
19
+ l = pad_shape[::-1]
20
+ pad_shape = [item for sublist in l for item in sublist]
21
+ return pad_shape
22
+
23
+
24
+ def intersperse(lst, item):
25
+ result = [item] * (len(lst) * 2 + 1)
26
+ result[1::2] = lst
27
+ return result
28
+
29
+
30
+ def kl_divergence(m_p, logs_p, m_q, logs_q):
31
+ """KL(P||Q)"""
32
+ kl = (logs_q - logs_p) - 0.5
33
+ kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
34
+ return kl
35
+
36
+
37
+ def rand_gumbel(shape):
38
+ """Sample from the Gumbel distribution, protect from overflows."""
39
+ uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
40
+ return -torch.log(-torch.log(uniform_samples))
41
+
42
+
43
+ def rand_gumbel_like(x):
44
+ g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
45
+ return g
46
+
47
+
48
+ def slice_segments(x, ids_str, segment_size=4):
49
+ ret = torch.zeros_like(x[:, :, :segment_size])
50
+ for i in range(x.size(0)):
51
+ idx_str = ids_str[i]
52
+ idx_end = idx_str + segment_size
53
+ try:
54
+ ret[i] = x[i, :, idx_str:idx_end]
55
+ except RuntimeError:
56
+ print("?")
57
+ return ret
58
+
59
+
60
+ def rand_slice_segments(x, x_lengths=None, segment_size=4):
61
+ b, d, t = x.size()
62
+ if x_lengths is None:
63
+ x_lengths = t
64
+ ids_str_max = x_lengths - segment_size + 1
65
+ ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
66
+ ret = slice_segments(x, ids_str, segment_size)
67
+ return ret, ids_str
68
+
69
+
70
+ def get_timing_signal_1d(
71
+ length, channels, min_timescale=1.0, max_timescale=1.0e4):
72
+ position = torch.arange(length, dtype=torch.float)
73
+ num_timescales = channels // 2
74
+ log_timescale_increment = (
75
+ math.log(float(max_timescale) / float(min_timescale)) /
76
+ (num_timescales - 1))
77
+ inv_timescales = min_timescale * torch.exp(
78
+ torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
79
+ scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
80
+ signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
81
+ signal = F.pad(signal, [0, 0, 0, channels % 2])
82
+ signal = signal.view(1, channels, length)
83
+ return signal
84
+
85
+
86
+ def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
87
+ b, channels, length = x.size()
88
+ signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
89
+ return x + signal.to(dtype=x.dtype, device=x.device)
90
+
91
+
92
+ def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
93
+ b, channels, length = x.size()
94
+ signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
95
+ return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
96
+
97
+
98
+ def subsequent_mask(length):
99
+ mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
100
+ return mask
101
+
102
+
103
+ @torch.jit.script
104
+ def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
105
+ n_channels_int = n_channels[0]
106
+ in_act = input_a + input_b
107
+ t_act = torch.tanh(in_act[:, :n_channels_int, :])
108
+ s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
109
+ acts = t_act * s_act
110
+ return acts
111
+
112
+
113
+ def convert_pad_shape(pad_shape):
114
+ l = pad_shape[::-1]
115
+ pad_shape = [item for sublist in l for item in sublist]
116
+ return pad_shape
117
+
118
+
119
+ def shift_1d(x):
120
+ x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
121
+ return x
122
+
123
+
124
+ def sequence_mask(length, max_length=None):
125
+ if max_length is None:
126
+ max_length = length.max()
127
+ x = torch.arange(max_length, dtype=length.dtype, device=length.device)
128
+ return x.unsqueeze(0) < length.unsqueeze(1)
129
+
130
+
131
+ def generate_path(duration, mask):
132
+ """
133
+ duration: [b, 1, t_x]
134
+ mask: [b, 1, t_y, t_x]
135
+ """
136
+ device = duration.device
137
+
138
+ b, _, t_y, t_x = mask.shape
139
+ cum_duration = torch.cumsum(duration, -1)
140
+
141
+ cum_duration_flat = cum_duration.view(b * t_x)
142
+ path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
143
+ path = path.view(b, t_x, t_y)
144
+ path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
145
+ path = path.unsqueeze(1).transpose(2,3) * mask
146
+ return path
147
+
148
+
149
+ def clip_grad_value_(parameters, clip_value, norm_type=2):
150
+ if isinstance(parameters, torch.Tensor):
151
+ parameters = [parameters]
152
+ parameters = list(filter(lambda p: p.grad is not None, parameters))
153
+ norm_type = float(norm_type)
154
+ if clip_value is not None:
155
+ clip_value = float(clip_value)
156
+
157
+ total_norm = 0
158
+ for p in parameters:
159
+ param_norm = p.grad.data.norm(norm_type)
160
+ total_norm += param_norm.item() ** norm_type
161
+ if clip_value is not None:
162
+ p.grad.data.clamp_(min=-clip_value, max=clip_value)
163
+ total_norm = total_norm ** (1. / norm_type)
164
+ return total_norm
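A small usage sketch (not repo code) showing how the slicing and masking helpers in `commons.py` behave on a toy batch, e.g. the way `rand_slice_segments` picks a random window per example during training:

```python
import torch

import commons

x = torch.randn(3, 80, 40)                 # [batch, mel channels, frames]
x_lengths = torch.tensor([40, 30, 25])

# Random fixed-size windows, one start index per batch element.
segments, ids_str = commons.rand_slice_segments(x, x_lengths, segment_size=16)
print(segments.shape)                      # torch.Size([3, 80, 16])
print(ids_str)                             # random start frame for each example

# Boolean padding mask derived from the sequence lengths.
mask = commons.sequence_mask(x_lengths, max_length=40)
print(mask.shape, mask.dtype)              # torch.Size([3, 40]) torch.bool
```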
configs/finetune_speaker.json ADDED
@@ -0,0 +1,55 @@
1
+ {
2
+ "train": {
3
+ "log_interval": 200,
4
+ "eval_interval": 1000,
5
+ "seed": 1234,
6
+ "epochs": 10000,
7
+ "learning_rate": 2e-4,
8
+ "betas": [0.8, 0.99],
9
+ "eps": 1e-9,
10
+ "batch_size": 64,
11
+ "fp16_run": true,
12
+ "lr_decay": 0.999875,
13
+ "segment_size": 8192,
14
+ "init_lr_ratio": 1,
15
+ "warmup_epochs": 0,
16
+ "c_mel": 45,
17
+ "c_kl": 1.0
18
+ },
19
+ "data": {
20
+ "training_files":"filelists/uma_genshin_genshinjp_bh3_train.txt.cleaned",
21
+ "validation_files":"filelists/uma_genshin_genshinjp_bh3_val.txt.cleaned",
22
+ "text_cleaners":["zh_ja_mixture_cleaners"],
23
+ "max_wav_value": 32768.0,
24
+ "sampling_rate": 22050,
25
+ "filter_length": 1024,
26
+ "hop_length": 256,
27
+ "win_length": 1024,
28
+ "n_mel_channels": 80,
29
+ "mel_fmin": 0.0,
30
+ "mel_fmax": null,
31
+ "add_blank": true,
32
+ "n_speakers": 804,
33
+ "cleaned_text": true
34
+ },
35
+ "model": {
36
+ "inter_channels": 192,
37
+ "hidden_channels": 192,
38
+ "filter_channels": 768,
39
+ "n_heads": 2,
40
+ "n_layers": 6,
41
+ "kernel_size": 3,
42
+ "p_dropout": 0.1,
43
+ "resblock": "1",
44
+ "resblock_kernel_sizes": [3,7,11],
45
+ "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]],
46
+ "upsample_rates": [8,8,2,2],
47
+ "upsample_initial_channel": 512,
48
+ "upsample_kernel_sizes": [16,16,4,4],
49
+ "n_layers_q": 3,
50
+ "use_spectral_norm": false,
51
+ "gin_channels": 256
52
+ },
53
+ "speakers": ["\u7279\u522b\u5468", "\u65e0\u58f0\u94c3\u9e7f", "\u4e1c\u6d77\u5e1d\u7687\uff08\u5e1d\u5b9d\uff0c\u5e1d\u738b\uff09", "\u4e38\u5584\u65af\u57fa", "\u5bcc\u58eb\u5947\u8ff9", "\u5c0f\u6817\u5e3d", "\u9ec4\u91d1\u8239", "\u4f0f\u7279\u52a0", "\u5927\u548c\u8d64\u9aa5", "\u5927\u6811\u5feb\u8f66", "\u8349\u4e0a\u98de", "\u83f1\u4e9a\u9a6c\u900a", "\u76ee\u767d\u9ea6\u6606", "\u795e\u9e70", "\u597d\u6b4c\u5267", "\u6210\u7530\u767d\u4ec1", "\u9c81\u9053\u592b\u8c61\u5f81\uff08\u7687\u5e1d\uff09", "\u6c14\u69fd", "\u7231\u4e3d\u6570\u7801", "\u661f\u4e91\u5929\u7a7a", "\u7389\u85fb\u5341\u5b57", "\u7f8e\u5999\u59ff\u52bf", "\u7435\u7436\u6668\u5149", "\u6469\u8036\u91cd\u70ae", "\u66fc\u57ce\u8336\u5ea7", "\u7f8e\u6d66\u6ce2\u65c1", "\u76ee\u767d\u8d56\u6069", "\u83f1\u66d9", "\u96ea\u4e2d\u7f8e\u4eba", "\u7c73\u6d74", "\u827e\u5c3c\u65af\u98ce\u795e", "\u7231\u4e3d\u901f\u5b50\uff08\u7231\u4e3d\u5feb\u5b50\uff09", "\u7231\u6155\u7ec7\u59ec", "\u7a3b\u8377\u4e00", "\u80dc\u5229\u5956\u5238", "\u7a7a\u4e2d\u795e\u5bab", "\u8363\u8fdb\u95ea\u8000", "\u771f\u673a\u4f36", "\u5ddd\u4e0a\u516c\u4e3b", "\u9ec4\u91d1\u57ce\uff08\u9ec4\u91d1\u57ce\u5e02\uff09", "\u6a31\u82b1\u8fdb\u738b", "\u91c7\u73e0", "\u65b0\u5149\u98ce", "\u4e1c\u5546\u53d8\u9769", "\u8d85\u7ea7\u5c0f\u6d77\u6e7e", "\u9192\u76ee\u98de\u9e70\uff08\u5bc4\u5bc4\u5b50\uff09", "\u8352\u6f20\u82f1\u96c4", "\u4e1c\u701b\u4f50\u6566", "\u4e2d\u5c71\u5e86\u5178", "\u6210\u7530\u5927\u8fdb", "\u897f\u91ce\u82b1", "\u6625\u4e3d\uff08\u4e4c\u62c9\u62c9\uff09", "\u9752\u7af9\u56de\u5fc6", "\u5fae\u5149\u98de\u9a79", "\u7f8e\u4e3d\u5468\u65e5", "\u5f85\u517c\u798f\u6765", "mr cb\uff08cb\u5148\u751f\uff09", "\u540d\u5c06\u6012\u6d9b\uff08\u540d\u5c06\u6237\u4ec1\uff09", "\u76ee\u767d\u591a\u4f2f", "\u4f18\u79c0\u7d20\u8d28", "\u5e1d\u738b\u5149\u8f89", "\u5f85\u517c\u8bd7\u6b4c\u5267", "\u751f\u91ce\u72c4\u675c\u65af", "\u76ee\u767d\u5584\u4fe1", "\u5927\u62d3\u592a\u9633\u795e", "\u53cc\u6da1\u8f6e\uff08\u4e24\u7acb\u76f4\uff0c\u4e24\u55b7\u5c04\uff0c\u4e8c\u9505\u5934\uff0c\u9006\u55b7\u5c04\uff09", "\u91cc\u89c1\u5149\u94bb\uff08\u8428\u6258\u8bfa\u91d1\u521a\u77f3\uff09", "\u5317\u90e8\u7384\u9a79", "\u6a31\u82b1\u5343\u4ee3\u738b", "\u5929\u72fc\u661f\u8c61\u5f81", "\u76ee\u767d\u963f\u5c14\u4e39", "\u516b\u91cd\u65e0\u654c", "\u9e64\u4e38\u521a\u5fd7", "\u76ee\u767d\u5149\u660e", "\u6210\u7530\u62dc\u4ec1\uff08\u6210\u7530\u8def\uff09", "\u4e5f\u6587\u6444\u8f89", "\u5c0f\u6797\u5386\u5947", "\u5317\u6e2f\u706b\u5c71", "\u5947\u9510\u9a8f", "\u82e6\u6da9\u7cd6\u971c", "\u5c0f\u5c0f\u8695\u8327", "\u9a8f\u5ddd\u624b\u7eb2\uff08\u7eff\u5e3d\u6076\u9b54\uff09", "\u79cb\u5ddd\u5f25\u751f\uff08\u5c0f\u5c0f\u7406\u4e8b\u957f\uff09", "\u4e59\u540d\u53f2\u60a6\u5b50\uff08\u4e59\u540d\u8bb0\u8005\uff09", "\u6850\u751f\u9662\u8475", "\u5b89\u5fc3\u6cfd\u523a\u523a\u7f8e", "\u6a2b\u672c\u7406\u5b50", "\u795e\u91cc\u7eeb\u534e\uff08\u9f9f\u9f9f\uff09", "\u7434", "\u7a7a\uff08\u7a7a\u54e5\uff09", "\u4e3d\u838e", "\u8367\uff08\u8367\u59b9\uff09", "\u82ad\u82ad\u62c9", "\u51ef\u4e9a", "\u8fea\u5362\u514b", "\u96f7\u6cfd", "\u5b89\u67cf", "\u6e29\u8fea", "\u9999\u83f1", "\u5317\u6597", "\u884c\u79cb", "\u9b48", "\u51dd\u5149", "\u53ef\u8389", "\u949f\u79bb", "\u83f2\u8c22\u5c14\uff08\u7687\u5973\uff09", "\u73ed\u5c3c\u7279", "\u8fbe\u8fbe\u5229\u4e9a\uff08\u516c\u5b50\uff09", "\u8bfa\u827e\u5c14\uff08\u5973\u4ec6\uff09", "\u4e03\u4e03", "\u91cd\u4e91", "\u7518\u96e8\uff08\u6930\u7f8a\uff09", "\u963f\u8d1d\u591a", 
"\u8fea\u5965\u5a1c\uff08\u732b\u732b\uff09", "\u83ab\u5a1c", "\u523b\u6674", "\u7802\u7cd6", "\u8f9b\u7131", "\u7f57\u838e\u8389\u4e9a", "\u80e1\u6843", "\u67ab\u539f\u4e07\u53f6\uff08\u4e07\u53f6\uff09", "\u70df\u7eef", "\u5bb5\u5bab", "\u6258\u9a6c", "\u4f18\u83c8", "\u96f7\u7535\u5c06\u519b\uff08\u96f7\u795e\uff09", "\u65e9\u67da", "\u73ca\u745a\u5bab\u5fc3\u6d77\uff08\u5fc3\u6d77\uff0c\u6263\u6263\u7c73\uff09", "\u4e94\u90ce", "\u4e5d\u6761\u88df\u7f57", "\u8352\u6cf7\u4e00\u6597\uff08\u4e00\u6597\uff09", "\u57c3\u6d1b\u4f0a", "\u7533\u9e64", "\u516b\u91cd\u795e\u5b50\uff08\u795e\u5b50\uff09", "\u795e\u91cc\u7eeb\u4eba\uff08\u7eeb\u4eba\uff09", "\u591c\u5170", "\u4e45\u5c90\u5fcd", "\u9e7f\u91ce\u82d1\u5e73\u85cf", "\u63d0\u7eb3\u91cc", "\u67ef\u83b1", "\u591a\u8389", "\u4e91\u5807", "\u7eb3\u897f\u59b2\uff08\u8349\u795e\uff09", "\u6df1\u6e0a\u4f7f\u5f92", "\u59ae\u9732", "\u8d5b\u8bfa", "\u503a\u52a1\u5904\u7406\u4eba", "\u574e\u8482\u4e1d", "\u771f\u5f13\u5feb\u8f66", "\u79cb\u4eba", "\u671b\u65cf", "\u827e\u5c14\u83f2", "\u827e\u8389\u4e1d", "\u827e\u4f26", "\u963f\u6d1b\u74e6", "\u5929\u91ce", "\u5929\u76ee\u5341\u4e94", "\u611a\u4eba\u4f17-\u5b89\u5fb7\u70c8", "\u5b89\u987a", "\u5b89\u897f", "\u8475", "\u9752\u6728", "\u8352\u5ddd\u5e78\u6b21", "\u8352\u8c37", "\u6709\u6cfd", "\u6d45\u5ddd", "\u9ebb\u7f8e", "\u51dd\u5149\u52a9\u624b", "\u963f\u6258", "\u7afa\u5b50", "\u767e\u8bc6", "\u767e\u95fb", "\u767e\u6653", "\u767d\u672f", "\u8d1d\u96c5\u7279\u4e3d\u5947", "\u4e3d\u5854", "\u5931\u843d\u8ff7\u8fed", "\u7f2d\u4e71\u661f\u68d8", "\u4f0a\u7538", "\u4f0f\u7279\u52a0\u5973\u5b69", "\u72c2\u70ed\u84dd\u8c03", "\u8389\u8389\u5a05", "\u841d\u838e\u8389\u5a05", "\u516b\u91cd\u6a31", "\u516b\u91cd\u971e", "\u5361\u83b2", "\u7b2c\u516d\u591c\u60f3\u66f2", "\u5361\u841d\u5c14", "\u59ec\u5b50", "\u6781\u5730\u6218\u5203", "\u5e03\u6d1b\u59ae\u5a05", "\u6b21\u751f\u94f6\u7ffc", "\u7406\u4e4b\u5f8b\u8005%26\u5e0c\u513f", "\u7406\u4e4b\u5f8b\u8005", "\u8ff7\u57ce\u9a87\u5154", "\u5e0c\u513f", "\u9b47\u591c\u661f\u6e0a", "\u9ed1\u5e0c\u513f", "\u5e15\u6735\u83f2\u8389\u4e1d", "\u4e0d\u706d\u661f\u951a", "\u5929\u5143\u9a91\u82f1", "\u5e7d\u5170\u9edb\u5c14", "\u6d3e\u8499bh3", "\u7231\u9171", "\u7eef\u7389\u4e38", "\u5fb7\u4e3d\u838e", "\u6708\u4e0b\u521d\u62e5", "\u6714\u591c\u89c2\u661f", "\u66ae\u5149\u9a91\u58eb", "\u683c\u857e\u4fee", "\u7559\u4e91\u501f\u98ce\u771f\u541b", "\u6885\u6bd4\u4e4c\u65af", "\u4eff\u72b9\u5927", "\u514b\u83b1\u56e0", "\u5723\u5251\u5e7d\u5170\u9edb\u5c14", "\u5996\u7cbe\u7231\u8389", "\u7279\u65af\u62c9zero", "\u82cd\u7384", "\u82e5\u6c34", "\u897f\u7433", "\u6234\u56e0\u65af\u96f7\u5e03", "\u8d1d\u62c9", "\u8d64\u9e22", "\u9547\u9b42\u6b4c", "\u6e21\u9e26", "\u4eba\u4e4b\u5f8b\u8005", "\u7231\u8389\u5e0c\u96c5", "\u5929\u7a79\u6e38\u4fa0", "\u742a\u4e9a\u5a1c", "\u7a7a\u4e4b\u5f8b\u8005", "\u85aa\u708e\u4e4b\u5f8b\u8005", "\u4e91\u58a8\u4e39\u5fc3", "\u7b26\u534e", "\u8bc6\u4e4b\u5f8b\u8005", "\u7279\u74e6\u6797", "\u7ef4\u5c14\u8587", "\u82bd\u8863", "\u96f7\u4e4b\u5f8b\u8005", "\u65ad\u7f6a\u5f71\u821e", "\u963f\u6ce2\u5c3c\u4e9a", "\u698e\u672c", "\u5384\u5c3c\u65af\u7279", "\u6076\u9f99", "\u8303\u4e8c\u7237", "\u6cd5\u62c9", "\u611a\u4eba\u4f17\u58eb\u5175", "\u611a\u4eba\u4f17\u58eb\u5175a", "\u611a\u4eba\u4f17\u58eb\u5175b", "\u611a\u4eba\u4f17\u58eb\u5175c", "\u611a\u4eba\u4f17a", "\u611a\u4eba\u4f17b", "\u98de\u98de", "\u83f2\u5229\u514b\u65af", "\u5973\u6027\u8ddf\u968f\u8005", "\u9022\u5ca9", "\u6446\u6e21\u4eba", 
"\u72c2\u8e81\u7684\u7537\u4eba", "\u5965\u5179", "\u8299\u841d\u62c9", "\u8ddf\u968f\u8005", "\u871c\u6c41\u751f\u7269", "\u9ec4\u9ebb\u5b50", "\u6e0a\u4e0a", "\u85e4\u6728", "\u6df1\u89c1", "\u798f\u672c", "\u8299\u84c9", "\u53e4\u6cfd", "\u53e4\u7530", "\u53e4\u5c71", "\u53e4\u8c37\u6607", "\u5085\u4e09\u513f", "\u9ad8\u8001\u516d", "\u77ff\u5de5\u5192", "\u5143\u592a", "\u5fb7\u5b89\u516c", "\u8302\u624d\u516c", "\u6770\u62c9\u5fb7", "\u845b\u7f57\u4e3d", "\u91d1\u5ffd\u5f8b", "\u516c\u4fca", "\u9505\u5df4", "\u6b4c\u5fb7", "\u963f\u8c6a", "\u72d7\u4e09\u513f", "\u845b\u745e\u4e1d", "\u82e5\u5fc3", "\u963f\u5c71\u5a46", "\u602a\u9e1f", "\u5e7f\u7af9", "\u89c2\u6d77", "\u5173\u5b8f", "\u871c\u6c41\u536b\u5175", "\u5b88\u536b1", "\u50b2\u6162\u7684\u5b88\u536b", "\u5bb3\u6015\u7684\u5b88\u536b", "\u8d35\u5b89", "\u76d6\u4f0a", "\u963f\u521b", "\u54c8\u592b\u4e39", "\u65e5\u8bed\u963f\u8d1d\u591a\uff08\u91ce\u5c9b\u5065\u513f\uff09", "\u65e5\u8bed\u57c3\u6d1b\u4f0a\uff08\u9ad8\u57a3\u5f69\u9633\uff09", "\u65e5\u8bed\u5b89\u67cf\uff08\u77f3\u89c1\u821e\u83dc\u9999\uff09", "\u65e5\u8bed\u795e\u91cc\u7eeb\u534e\uff08\u65e9\u89c1\u6c99\u7ec7\uff09", "\u65e5\u8bed\u795e\u91cc\u7eeb\u4eba\uff08\u77f3\u7530\u5f70\uff09", "\u65e5\u8bed\u767d\u672f\uff08\u6e38\u4f50\u6d69\u4e8c\uff09", "\u65e5\u8bed\u82ad\u82ad\u62c9\uff08\u9b3c\u5934\u660e\u91cc\uff09", "\u65e5\u8bed\u5317\u6597\uff08\u5c0f\u6e05\u6c34\u4e9a\u7f8e\uff09", "\u65e5\u8bed\u73ed\u5c3c\u7279\uff08\u9022\u5742\u826f\u592a\uff09", "\u65e5\u8bed\u574e\u8482\u4e1d\uff08\u67da\u6728\u51c9\u9999\uff09", "\u65e5\u8bed\u91cd\u4e91\uff08\u9f50\u85e4\u58ee\u9a6c\uff09", "\u65e5\u8bed\u67ef\u83b1\uff08\u524d\u5ddd\u51c9\u5b50\uff09", "\u65e5\u8bed\u8d5b\u8bfa\uff08\u5165\u91ce\u81ea\u7531\uff09", "\u65e5\u8bed\u6234\u56e0\u65af\u96f7\u5e03\uff08\u6d25\u7530\u5065\u6b21\u90ce\uff09", "\u65e5\u8bed\u8fea\u5362\u514b\uff08\u5c0f\u91ce\u8d24\u7ae0\uff09", "\u65e5\u8bed\u8fea\u5965\u5a1c\uff08\u4e95\u6cfd\u8bd7\u7ec7\uff09", "\u65e5\u8bed\u591a\u8389\uff08\u91d1\u7530\u670b\u5b50\uff09", "\u65e5\u8bed\u4f18\u83c8\uff08\u4f50\u85e4\u5229\u5948\uff09", "\u65e5\u8bed\u83f2\u8c22\u5c14\uff08\u5185\u7530\u771f\u793c\uff09", "\u65e5\u8bed\u7518\u96e8\uff08\u4e0a\u7530\u4e3d\u5948\uff09", "\u65e5\u8bed\uff08\u7560\u4e2d\u7950\uff09", "\u65e5\u8bed\u9e7f\u91ce\u9662\u5e73\u85cf\uff08\u4e95\u53e3\u7950\u4e00\uff09", "\u65e5\u8bed\u7a7a\uff08\u5800\u6c5f\u77ac\uff09", "\u65e5\u8bed\u8367\uff08\u60a0\u6728\u78a7\uff09", "\u65e5\u8bed\u80e1\u6843\uff08\u9ad8\u6865\u674e\u4f9d\uff09", "\u65e5\u8bed\u4e00\u6597\uff08\u897f\u5ddd\u8d35\u6559\uff09", "\u65e5\u8bed\u51ef\u4e9a\uff08\u9e1f\u6d77\u6d69\u8f85\uff09", "\u65e5\u8bed\u4e07\u53f6\uff08\u5c9b\u5d0e\u4fe1\u957f\uff09", "\u65e5\u8bed\u523b\u6674\uff08\u559c\u591a\u6751\u82f1\u68a8\uff09", "\u65e5\u8bed\u53ef\u8389\uff08\u4e45\u91ce\u7f8e\u54b2\uff09", "\u65e5\u8bed\u5fc3\u6d77\uff08\u4e09\u68ee\u94c3\u5b50\uff09", "\u65e5\u8bed\u4e5d\u6761\u88df\u7f57\uff08\u6fd1\u6237\u9ebb\u6c99\u7f8e\uff09", "\u65e5\u8bed\u4e3d\u838e\uff08\u7530\u4e2d\u7406\u60e0\uff09", "\u65e5\u8bed\u83ab\u5a1c\uff08\u5c0f\u539f\u597d\u7f8e\uff09", "\u65e5\u8bed\u7eb3\u897f\u59b2\uff08\u7530\u6751\u7531\u52a0\u8389\uff09", "\u65e5\u8bed\u59ae\u9732\uff08\u91d1\u5143\u5bff\u5b50\uff09", "\u65e5\u8bed\u51dd\u5149\uff08\u5927\u539f\u6c99\u8036\u9999\uff09", "\u65e5\u8bed\u8bfa\u827e\u5c14\uff08\u9ad8\u5c3e\u594f\u97f3\uff09", "\u65e5\u8bed\u5965\u5179\uff08\u589e\u8c37\u5eb7\u7eaa\uff09", 
"\u65e5\u8bed\u6d3e\u8499\uff08\u53e4\u8d3a\u8475\uff09", "\u65e5\u8bed\u7434\uff08\u658b\u85e4\u5343\u548c\uff09", "\u65e5\u8bed\u4e03\u4e03\uff08\u7530\u6751\u7531\u52a0\u8389\uff09", "\u65e5\u8bed\u96f7\u7535\u5c06\u519b\uff08\u6cfd\u57ce\u7f8e\u96ea\uff09", "\u65e5\u8bed\u96f7\u6cfd\uff08\u5185\u5c71\u6602\u8f89\uff09", "\u65e5\u8bed\u7f57\u838e\u8389\u4e9a\uff08\u52a0\u9688\u4e9a\u8863\uff09", "\u65e5\u8bed\u65e9\u67da\uff08\u6d32\u5d0e\u7eeb\uff09", "\u65e5\u8bed\u6563\u5175\uff08\u67ff\u539f\u5f7b\u4e5f\uff09", "\u65e5\u8bed\u7533\u9e64\uff08\u5ddd\u6f84\u7eeb\u5b50\uff09", "\u65e5\u8bed\u4e45\u5c90\u5fcd\uff08\u6c34\u6865\u9999\u7ec7\uff09", "\u65e5\u8bed\u5973\u58eb\uff08\u5e84\u5b50\u88d5\u8863\uff09", "\u65e5\u8bed\u7802\u7cd6\uff08\u85e4\u7530\u831c\uff09", "\u65e5\u8bed\u8fbe\u8fbe\u5229\u4e9a\uff08\u6728\u6751\u826f\u5e73\uff09", "\u65e5\u8bed\u6258\u9a6c\uff08\u68ee\u7530\u6210\u4e00\uff09", "\u65e5\u8bed\u63d0\u7eb3\u91cc\uff08\u5c0f\u6797\u6c99\u82d7\uff09", "\u65e5\u8bed\u6e29\u8fea\uff08\u6751\u6fd1\u6b65\uff09", "\u65e5\u8bed\u9999\u83f1\uff08\u5c0f\u6cfd\u4e9a\u674e\uff09", "\u65e5\u8bed\u9b48\uff08\u677e\u5188\u796f\u4e1e\uff09", "\u65e5\u8bed\u884c\u79cb\uff08\u7686\u5ddd\u7eaf\u5b50\uff09", "\u65e5\u8bed\u8f9b\u7131\uff08\u9ad8\u6865\u667a\u79cb\uff09", "\u65e5\u8bed\u516b\u91cd\u795e\u5b50\uff08\u4f50\u4ed3\u7eeb\u97f3\uff09", "\u65e5\u8bed\u70df\u7eef\uff08\u82b1\u5b88\u7531\u7f8e\u91cc\uff09", "\u65e5\u8bed\u591c\u5170\uff08\u8fdc\u85e4\u7eeb\uff09", "\u65e5\u8bed\u5bb5\u5bab\uff08\u690d\u7530\u4f73\u5948\uff09", "\u65e5\u8bed\u4e91\u5807\uff08\u5c0f\u5ca9\u4e95\u5c0f\u9e1f\uff09", "\u65e5\u8bed\u949f\u79bb\uff08\u524d\u91ce\u667a\u662d\uff09", "\u6770\u514b", "\u963f\u5409", "\u6c5f\u821f", "\u9274\u79cb", "\u5609\u4e49", "\u7eaa\u82b3", "\u666f\u6f84", "\u7ecf\u7eb6", "\u666f\u660e", "\u664b\u4f18", "\u963f\u9e20", "\u9152\u5ba2", "\u4e54\u5c14", "\u4e54\u745f\u592b", "\u7ea6\u987f", "\u4e54\u4f0a\u65af", "\u5c45\u5b89", "\u541b\u541b", "\u987a\u5409", "\u7eaf\u4e5f", "\u91cd\u4f50", "\u5927\u5c9b\u7eaf\u5e73", "\u84b2\u6cfd", "\u52d8\u89e3\u7531\u5c0f\u8def\u5065\u4e09\u90ce", "\u67ab", "\u67ab\u539f\u4e49\u5e86", "\u836b\u5c71", "\u7532\u6590\u7530\u9f8d\u99ac", "\u6d77\u6597", "\u60df\u795e\u6674\u4e4b\u4ecb", "\u9e7f\u91ce\u5948\u5948", "\u5361\u7435\u8389\u4e9a", "\u51ef\u745f\u7433", "\u52a0\u85e4\u4fe1\u609f", "\u52a0\u85e4\u6d0b\u5e73", "\u80dc\u5bb6", "\u8305\u847a\u4e00\u5e86", "\u548c\u662d", "\u4e00\u6b63", "\u4e00\u9053", "\u6842\u4e00", "\u5e86\u6b21\u90ce", "\u963f\u8d24", "\u5065\u53f8", "\u5065\u6b21\u90ce", "\u5065\u4e09\u90ce", "\u5929\u7406", "\u6740\u624ba", "\u6740\u624bb", "\u6728\u5357\u674f\u5948", "\u6728\u6751", "\u56fd\u738b", "\u6728\u4e0b", "\u5317\u6751", "\u6e05\u60e0", "\u6e05\u4eba", "\u514b\u5217\u95e8\u7279", "\u9a91\u58eb", "\u5c0f\u6797", "\u5c0f\u6625", "\u5eb7\u62c9\u5fb7", "\u5927\u8089\u4e38", "\u7434\u7f8e", "\u5b8f\u4e00", "\u5eb7\u4ecb", "\u5e78\u5fb7", "\u9ad8\u5584", "\u68a2", "\u514b\u7f57\u7d22", "\u4e45\u4fdd", "\u4e5d\u6761\u9570\u6cbb", "\u4e45\u6728\u7530", "\u6606\u94a7", "\u83ca\u5730\u541b", "\u4e45\u5229\u987b", "\u9ed1\u7530", "\u9ed1\u6cfd\u4eac\u4e4b\u4ecb", "\u54cd\u592a", "\u5c9a\u59d0", "\u5170\u6eaa", "\u6f9c\u9633", "\u52b3\u4f26\u65af", "\u4e50\u660e", "\u83b1\u8bfa", "\u83b2", "\u826f\u5b50", "\u674e\u5f53", "\u674e\u4e01", "\u5c0f\u4e50", "\u7075", "\u5c0f\u73b2", "\u7433\u7405a", "\u7433\u7405b", "\u5c0f\u5f6c", "\u5c0f\u5fb7", "\u5c0f\u697d", "\u5c0f\u9f99", "\u5c0f\u5434", 
"\u5c0f\u5434\u7684\u8bb0\u5fc6", "\u7406\u6b63", "\u963f\u9f99", "\u5362\u5361", "\u6d1b\u6210", "\u7f57\u5de7", "\u5317\u98ce\u72fc", "\u5362\u6b63", "\u840d\u59e5\u59e5", "\u524d\u7530", "\u771f\u663c", "\u9ebb\u7eaa", "\u771f", "\u611a\u4eba\u4f17-\u9a6c\u514b\u897f\u59c6", "\u5973\u6027a", "\u5973\u6027b", "\u5973\u6027a\u7684\u8ddf\u968f\u8005", "\u963f\u5b88", "\u739b\u683c\u4e3d\u7279", "\u771f\u7406", "\u739b\u4e54\u4e3d", "\u739b\u6587", "\u6b63\u80dc", "\u660c\u4fe1", "\u5c06\u53f8", "\u6b63\u4eba", "\u8def\u7237", "\u8001\u7ae0", "\u677e\u7530", "\u677e\u672c", "\u677e\u6d66", "\u677e\u5742", "\u8001\u5b5f", "\u5b5f\u4e39", "\u5546\u4eba\u968f\u4ece", "\u4f20\u4ee4\u5175", "\u7c73\u6b47\u5c14", "\u5fa1\u8206\u6e90\u4e00\u90ce", "\u5fa1\u8206\u6e90\u6b21\u90ce", "\u5343\u5ca9\u519b\u6559\u5934", "\u5343\u5ca9\u519b\u58eb\u5175", "\u660e\u535a", "\u660e\u4fca", "\u7f8e\u94c3", "\u7f8e\u548c", "\u963f\u5e78", "\u524a\u6708\u7b51\u9633\u771f\u541b", "\u94b1\u773c\u513f", "\u68ee\u5f66", "\u5143\u52a9", "\u7406\u6c34\u53e0\u5c71\u771f\u541b", "\u7406\u6c34\u758a\u5c71\u771f\u541b", "\u6731\u8001\u677f", "\u6728\u6728", "\u6751\u4e0a", "\u6751\u7530", "\u6c38\u91ce", "\u957f\u91ce\u539f\u9f99\u4e4b\u4ecb", "\u957f\u6fd1", "\u4e2d\u91ce\u5fd7\u4e43", "\u83dc\u83dc\u5b50", "\u6960\u6960", "\u6210\u6fd1", "\u963f\u5185", "\u5b81\u7984", "\u725b\u5fd7", "\u4fe1\u535a", "\u4f38\u592b", "\u91ce\u65b9", "\u8bfa\u62c9", "\u7eaa\u9999", "\u8bfa\u66fc", "\u4fee\u5973", "\u7eaf\u6c34\u7cbe\u7075", "\u5c0f\u5ddd", "\u5c0f\u4ed3\u6faa", "\u5188\u6797", "\u5188\u5d0e\u7ed8\u91cc\u9999", "\u5188\u5d0e\u9646\u6597", "\u5965\u62c9\u592b", "\u8001\u79d1", "\u9b3c\u5a46\u5a46", "\u5c0f\u91ce\u5bfa", "\u5927\u6cb3\u539f\u4e94\u53f3\u536b\u95e8", "\u5927\u4e45\u4fdd\u5927\u4ecb", "\u5927\u68ee", "\u5927\u52a9", "\u5965\u7279", "\u6d3e\u8499", "\u6d3e\u84992", "\u75c5\u4ebaa", "\u75c5\u4ebab", "\u5df4\u987f", "\u6d3e\u6069", "\u670b\u4e49", "\u56f4\u89c2\u7fa4\u4f17", "\u56f4\u89c2\u7fa4\u4f17a", "\u56f4\u89c2\u7fa4\u4f17b", "\u56f4\u89c2\u7fa4\u4f17c", "\u56f4\u89c2\u7fa4\u4f17d", "\u56f4\u89c2\u7fa4\u4f17e", "\u94dc\u96c0", "\u963f\u80a5", "\u5174\u53d4", "\u8001\u5468\u53d4", "\u516c\u4e3b", "\u5f7c\u5f97", "\u4e7e\u5b50", "\u828a\u828a", "\u4e7e\u73ae", "\u7eee\u547d", "\u675e\u5e73", "\u79cb\u6708", "\u6606\u6069", "\u96f7\u7535\u5f71", "\u5170\u9053\u5c14", "\u96f7\u8499\u5fb7", "\u5192\u5931\u7684\u5e15\u62c9\u5fb7", "\u4f36\u4e00", "\u73b2\u82b1", "\u963f\u4ec1", "\u5bb6\u81e3\u4eec", "\u68a8\u7ed8", "\u8363\u6c5f", "\u620e\u4e16", "\u6d6a\u4eba", "\u7f57\u4f0a\u65af", "\u5982\u610f", "\u51c9\u5b50", "\u5f69\u9999", "\u9152\u4e95", "\u5742\u672c", "\u6714\u6b21\u90ce", "\u6b66\u58eba", "\u6b66\u58ebb", "\u6b66\u58ebc", "\u6b66\u58ebd", "\u73ca\u745a", "\u4e09\u7530", "\u838e\u62c9", "\u7b39\u91ce", "\u806a\u7f8e", "\u806a", "\u5c0f\u767e\u5408", "\u6563\u5175", "\u5bb3\u6015\u7684\u5c0f\u5218", "\u8212\u4f2f\u7279", "\u8212\u8328", "\u6d77\u9f99", "\u4e16\u5b50", "\u8c22\u5c14\u76d6", "\u5bb6\u4e01", "\u5546\u534e", "\u6c99\u5bc5", "\u963f\u5347", "\u67f4\u7530", "\u963f\u8302", "\u5f0f\u5927\u5c06", "\u6e05\u6c34", "\u5fd7\u6751\u52d8\u5175\u536b", "\u65b0\u4e4b\u4e1e", "\u5fd7\u7ec7", "\u77f3\u5934", "\u8bd7\u7fbd", "\u8bd7\u7b60", "\u77f3\u58ee", "\u7fd4\u592a", "\u6b63\u4e8c", "\u5468\u5e73", "\u8212\u6768", "\u9f50\u683c\u8299\u4e3d\u96c5", "\u5973\u58eb", "\u601d\u52e4", "\u516d\u6307\u4e54\u745f", "\u611a\u4eba\u4f17\u5c0f\u5175d", "\u611a\u4eba\u4f17\u5c0f\u5175a", 
"\u611a\u4eba\u4f17\u5c0f\u5175b", "\u611a\u4eba\u4f17\u5c0f\u5175c", "\u5434\u8001\u4e94", "\u5434\u8001\u4e8c", "\u6ed1\u5934\u9b3c", "\u8a00\u7b11", "\u5434\u8001\u4e03", "\u58eb\u5175h", "\u58eb\u5175i", "\u58eb\u5175a", "\u58eb\u5175b", "\u58eb\u5175c", "\u58eb\u5175d", "\u58eb\u5175e", "\u58eb\u5175f", "\u58eb\u5175g", "\u594f\u592a", "\u65af\u5766\u5229", "\u6387\u661f\u652b\u8fb0\u5929\u541b", "\u5c0f\u5934", "\u5927\u6b66", "\u9676\u4e49\u9686", "\u6749\u672c", "\u82cf\u897f", "\u5acc\u7591\u4ebaa", "\u5acc\u7591\u4ebab", "\u5acc\u7591\u4ebac", "\u5acc\u7591\u4ebad", "\u65af\u4e07", "\u5251\u5ba2a", "\u5251\u5ba2b", "\u963f\u4e8c", "\u5fe0\u80dc", "\u5fe0\u592b", "\u963f\u656c", "\u5b5d\u5229", "\u9e70\u53f8\u8fdb", "\u9ad8\u5c71", "\u4e5d\u6761\u5b5d\u884c", "\u6bc5", "\u7af9\u5185", "\u62d3\u771f", "\u5353\u4e5f", "\u592a\u90ce\u4e38", "\u6cf0\u52d2", "\u624b\u5c9b", "\u54f2\u5e73", "\u54f2\u592b", "\u6258\u514b", "\u5927boss", "\u963f\u5f3a", "\u6258\u5c14\u5fb7\u62c9", "\u65c1\u89c2\u8005", "\u5929\u6210", "\u963f\u5927", "\u8482\u739b\u4e4c\u65af", "\u63d0\u7c73", "\u6237\u7530", "\u963f\u4e09", "\u4e00\u8d77\u7684\u4eba", "\u5fb7\u7530", "\u5fb7\u957f", "\u667a\u6811", "\u5229\u5f66", "\u80d6\u4e4e\u4e4e\u7684\u65c5\u884c\u8005", "\u85cf\u5b9d\u4ebaa", "\u85cf\u5b9d\u4ebab", "\u85cf\u5b9d\u4ebac", "\u85cf\u5b9d\u4ebad", "\u963f\u7947", "\u6052\u96c4", "\u9732\u5b50", "\u8bdd\u5267\u56e2\u56e2\u957f", "\u5185\u6751", "\u4e0a\u91ce", "\u4e0a\u6749", "\u8001\u6234", "\u8001\u9ad8", "\u8001\u8d3e", "\u8001\u58a8", "\u8001\u5b59", "\u5929\u67a2\u661f", "\u8001\u4e91", "\u6709\u4e50\u658b", "\u4e11\u96c4", "\u4e4c\u7ef4", "\u74e6\u4eac", "\u83f2\u5c14\u6208\u9edb\u7279", "\u7ef4\u591a\u5229\u4e9a", "\u8587\u5c14", "\u74e6\u683c\u7eb3", "\u963f\u5916", "\u4f8d\u5973", "\u74e6\u62c9", "\u671b\u96c5", "\u5b9b\u70df", "\u742c\u7389", "\u6218\u58eba", "\u6218\u58ebb", "\u6e21\u8fba", "\u6e21\u90e8", "\u963f\u4f1f", "\u6587\u749f", "\u6587\u6e0a", "\u97e6\u5c14\u7eb3", "\u738b\u6273\u624b", "\u6b66\u6c9b", "\u6653\u98de", "\u8f9b\u7a0b", "\u661f\u706b", "\u661f\u7a00", "\u8f9b\u79c0", "\u79c0\u534e", "\u963f\u65ed", "\u5f90\u5218\u5e08", "\u77e2\u90e8", "\u516b\u6728", "\u5c71\u4e0a", "\u963f\u9633", "\u989c\u7b11", "\u5eb7\u660e", "\u6cf0\u4e45", "\u5b89\u6b66", "\u77e2\u7530\u5e78\u559c", "\u77e2\u7530\u8f9b\u559c", "\u4e49\u575a", "\u83ba\u513f", "\u76c8\u4e30", "\u5b9c\u5e74", "\u94f6\u674f", "\u9038\u8f69", "\u6a2a\u5c71", "\u6c38\u8d35", "\u6c38\u4e1a", "\u5609\u4e45", "\u5409\u5ddd", "\u4e49\u9ad8", "\u7528\u9ad8", "\u9633\u592a", "\u5143\u84c9", "\u73a5\u8f89", "\u6bd3\u534e", "\u6709\u9999", "\u5e78\u4e5f", "\u7531\u771f", "\u7ed3\u83dc", "\u97f5\u5b81", "\u767e\u5408", "\u767e\u5408\u534e", "\u5c24\u82cf\u6ce2\u592b", "\u88d5\u5b50", "\u60a0\u7b56", "\u60a0\u4e5f", "\u4e8e\u5ae3", "\u67da\u5b50", "\u8001\u90d1", "\u6b63\u8302", "\u5fd7\u6210", "\u82b7\u5de7", "\u77e5\u6613", "\u652f\u652f", "\u5468\u826f", "\u73e0\u51fd", "\u795d\u660e", "\u795d\u6d9b"],
54
+ "symbols": ["_", ",", ".", "!", "?", "-", "~", "\u2026", "A", "E", "I", "N", "O", "Q", "U", "a", "b", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "r", "s", "t", "u", "v", "w", "y", "z", "\u0283", "\u02a7", "\u02a6", "\u026f", "\u0279", "\u0259", "\u0265", "\u207c", "\u02b0", "`", "\u2192", "\u2193", "\u2191", " "]
55
+ }
configs/modified_finetune_speaker.json ADDED
@@ -0,0 +1,151 @@
1
+ {
2
+ "train": {
3
+ "log_interval": 100,
4
+ "eval_interval": 1000,
5
+ "seed": 1234,
6
+ "epochs": 10000,
7
+ "learning_rate": 0.0002,
8
+ "betas": [
9
+ 0.8,
10
+ 0.99
11
+ ],
12
+ "eps": 1e-09,
13
+ "batch_size": 16,
14
+ "fp16_run": true,
15
+ "lr_decay": 0.999875,
16
+ "segment_size": 8192,
17
+ "init_lr_ratio": 1,
18
+ "warmup_epochs": 0,
19
+ "c_mel": 45,
20
+ "c_kl": 1.0
21
+ },
22
+ "data": {
23
+ "training_files": "final_annotation_train.txt",
24
+ "validation_files": "final_annotation_val.txt",
25
+ "text_cleaners": [
26
+ "zh_ja_mixture_cleaners"
27
+ ],
28
+ "max_wav_value": 32768.0,
29
+ "sampling_rate": 22050,
30
+ "filter_length": 1024,
31
+ "hop_length": 256,
32
+ "win_length": 1024,
33
+ "n_mel_channels": 80,
34
+ "mel_fmin": 0.0,
35
+ "mel_fmax": null,
36
+ "add_blank": true,
37
+ "n_speakers": 7,
38
+ "cleaned_text": true
39
+ },
40
+ "model": {
41
+ "inter_channels": 192,
42
+ "hidden_channels": 192,
43
+ "filter_channels": 768,
44
+ "n_heads": 2,
45
+ "n_layers": 6,
46
+ "kernel_size": 3,
47
+ "p_dropout": 0.1,
48
+ "resblock": "1",
49
+ "resblock_kernel_sizes": [
50
+ 3,
51
+ 7,
52
+ 11
53
+ ],
54
+ "resblock_dilation_sizes": [
55
+ [
56
+ 1,
57
+ 3,
58
+ 5
59
+ ],
60
+ [
61
+ 1,
62
+ 3,
63
+ 5
64
+ ],
65
+ [
66
+ 1,
67
+ 3,
68
+ 5
69
+ ]
70
+ ],
71
+ "upsample_rates": [
72
+ 8,
73
+ 8,
74
+ 2,
75
+ 2
76
+ ],
77
+ "upsample_initial_channel": 512,
78
+ "upsample_kernel_sizes": [
79
+ 16,
80
+ 16,
81
+ 4,
82
+ 4
83
+ ],
84
+ "n_layers_q": 3,
85
+ "use_spectral_norm": false,
86
+ "gin_channels": 256
87
+ },
88
+ "speakers": {
89
+ "5": 0,
90
+ "0": 1,
91
+ "1": 2,
92
+ "2": 3,
93
+ "3": 4,
94
+ "4": 5,
95
+ "zhongli": 6
96
+ },
97
+ "symbols": [
98
+ "_",
99
+ ",",
100
+ ".",
101
+ "!",
102
+ "?",
103
+ "-",
104
+ "~",
105
+ "\u2026",
106
+ "A",
107
+ "E",
108
+ "I",
109
+ "N",
110
+ "O",
111
+ "Q",
112
+ "U",
113
+ "a",
114
+ "b",
115
+ "d",
116
+ "e",
117
+ "f",
118
+ "g",
119
+ "h",
120
+ "i",
121
+ "j",
122
+ "k",
123
+ "l",
124
+ "m",
125
+ "n",
126
+ "o",
127
+ "p",
128
+ "r",
129
+ "s",
130
+ "t",
131
+ "u",
132
+ "v",
133
+ "w",
134
+ "y",
135
+ "z",
136
+ "\u0283",
137
+ "\u02a7",
138
+ "\u02a6",
139
+ "\u026f",
140
+ "\u0279",
141
+ "\u0259",
142
+ "\u0265",
143
+ "\u207c",
144
+ "\u02b0",
145
+ "`",
146
+ "\u2192",
147
+ "\u2193",
148
+ "\u2191",
149
+ " "
150
+ ]
151
+ }
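As a hedged side note on the config above: "speakers" here is a name-to-ID mapping (seven entries, matching "n_speakers": 7), and the integer values are presumably the speaker IDs ("sid") referenced by the annotation files, since data_utils.py casts that column with int(). A minimal sketch of looking one up (the file path and usage are assumptions, not part of this commit's code):

```python
import json

# Minimal sketch (assumption): resolve a fine-tuned speaker name to the integer
# speaker ID used by the training annotations, via the "speakers" mapping above.
with open("configs/modified_finetune_speaker.json", encoding="utf-8") as f:
    hps = json.load(f)

sid = hps["speakers"]["zhongli"]             # -> 6 in this config
assert 0 <= sid < hps["data"]["n_speakers"]
```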
configs/uma_trilingual.json ADDED
@@ -0,0 +1,54 @@
1
+ {
2
+ "train": {
3
+ "log_interval": 200,
4
+ "eval_interval": 1000,
5
+ "seed": 1234,
6
+ "epochs": 10000,
7
+ "learning_rate": 2e-4,
8
+ "betas": [0.8, 0.99],
9
+ "eps": 1e-9,
10
+ "batch_size": 16,
11
+ "fp16_run": true,
12
+ "lr_decay": 0.999875,
13
+ "segment_size": 8192,
14
+ "init_lr_ratio": 1,
15
+ "warmup_epochs": 0,
16
+ "c_mel": 45,
17
+ "c_kl": 1.0
18
+ },
19
+ "data": {
20
+ "training_files":"../CH_JA_EN_mix_voice/clipped_3_vits_trilingual_annotations.train.txt.cleaned",
21
+ "validation_files":"../CH_JA_EN_mix_voice/clipped_3_vits_trilingual_annotations.val.txt.cleaned",
22
+ "text_cleaners":["cjke_cleaners2"],
23
+ "max_wav_value": 32768.0,
24
+ "sampling_rate": 22050,
25
+ "filter_length": 1024,
26
+ "hop_length": 256,
27
+ "win_length": 1024,
28
+ "n_mel_channels": 80,
29
+ "mel_fmin": 0.0,
30
+ "mel_fmax": null,
31
+ "add_blank": true,
32
+ "n_speakers": 999,
33
+ "cleaned_text": true
34
+ },
35
+ "model": {
36
+ "inter_channels": 192,
37
+ "hidden_channels": 192,
38
+ "filter_channels": 768,
39
+ "n_heads": 2,
40
+ "n_layers": 6,
41
+ "kernel_size": 3,
42
+ "p_dropout": 0.1,
43
+ "resblock": "1",
44
+ "resblock_kernel_sizes": [3,7,11],
45
+ "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]],
46
+ "upsample_rates": [8,8,2,2],
47
+ "upsample_initial_channel": 512,
48
+ "upsample_kernel_sizes": [16,16,4,4],
49
+ "n_layers_q": 3,
50
+ "use_spectral_norm": false,
51
+ "gin_channels": 256
52
+ },
53
+ "symbols": ["_", ",", ".", "!", "?", "-", "~", "\u2026", "N", "Q", "a", "b", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "s", "t", "u", "v", "w", "x", "y", "z", "\u0251", "\u00e6", "\u0283", "\u0291", "\u00e7", "\u026f", "\u026a", "\u0254", "\u025b", "\u0279", "\u00f0", "\u0259", "\u026b", "\u0265", "\u0278", "\u028a", "\u027e", "\u0292", "\u03b8", "\u03b2", "\u014b", "\u0266", "\u207c", "\u02b0", "`", "^", "#", "*", "=", "\u02c8", "\u02cc", "\u2192", "\u2193", "\u2191", " "]
54
+ }
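Compared with the fine-tuning config above, uma_trilingual.json appears to describe the base multilingual model that fine-tuning starts from: it uses the "cjke_cleaners2" cleaner and a larger symbol set, reserves "n_speakers": 999 instead of 7, and carries no "speakers" mapping, while the model hyperparameters (hidden sizes, resblocks, upsampling) are identical.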
data_utils.py ADDED
@@ -0,0 +1,267 @@
1
+ import time
2
+ import os
3
+ import random
4
+ import numpy as np
5
+ import torch
6
+ import torch.utils.data
7
+ import torchaudio
8
+
9
+ import commons
10
+ from mel_processing import spectrogram_torch
11
+ from utils import load_wav_to_torch, load_filepaths_and_text
12
+ from text import text_to_sequence, cleaned_text_to_sequence
13
+ """Multi speaker version"""
14
+
15
+
16
+ class TextAudioSpeakerLoader(torch.utils.data.Dataset):
17
+ """
18
+ 1) loads audio, speaker_id, text pairs
19
+ 2) normalizes text and converts it to sequences of integers
20
+ 3) computes spectrograms from audio files.
21
+ """
22
+
23
+ def __init__(self, audiopaths_sid_text, hparams, symbols):
24
+ self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
25
+ self.text_cleaners = hparams.text_cleaners
26
+ self.max_wav_value = hparams.max_wav_value
27
+ self.sampling_rate = hparams.sampling_rate
28
+ self.filter_length = hparams.filter_length
29
+ self.hop_length = hparams.hop_length
30
+ self.win_length = hparams.win_length
31
+ self.sampling_rate = hparams.sampling_rate
32
+
33
+ self.cleaned_text = getattr(hparams, "cleaned_text", False)
34
+
35
+ self.add_blank = hparams.add_blank
36
+ self.min_text_len = getattr(hparams, "min_text_len", 1)
37
+ self.max_text_len = getattr(hparams, "max_text_len", 190)
38
+ self.symbols = symbols
39
+
40
+ random.seed(1234)
41
+ random.shuffle(self.audiopaths_sid_text)
42
+ self._filter()
43
+
44
+ def _filter(self):
45
+ """
46
+ Filter text & store spec lengths
47
+ """
48
+ # Store spectrogram lengths for Bucketing
49
+ # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
50
+ # spec_length = wav_length // hop_length
51
+
52
+ audiopaths_sid_text_new = []
53
+ lengths = []
54
+ for audiopath, sid, text in self.audiopaths_sid_text:
55
+ # audiopath = "./user_voice/" + audiopath
56
+
57
+ if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
58
+ audiopaths_sid_text_new.append([audiopath, sid, text])
59
+ lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
60
+ self.audiopaths_sid_text = audiopaths_sid_text_new
61
+ self.lengths = lengths
62
+
63
+ def get_audio_text_speaker_pair(self, audiopath_sid_text):
64
+ # separate filename, speaker_id and text
65
+ audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
66
+ text = self.get_text(text)
67
+ spec, wav = self.get_audio(audiopath)
68
+ sid = self.get_sid(sid)
69
+ return (text, spec, wav, sid)
70
+
71
+ def get_audio(self, filename):
72
+ # audio, sampling_rate = load_wav_to_torch(filename)
73
+ # if sampling_rate != self.sampling_rate:
74
+ # raise ValueError("{} {} SR doesn't match target {} SR".format(
75
+ # sampling_rate, self.sampling_rate))
76
+ # audio_norm = audio / self.max_wav_value if audio.max() > 10 else audio
77
+ # audio_norm = audio_norm.unsqueeze(0)
78
+ audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True)
79
+ # spec_filename = filename.replace(".wav", ".spec.pt")
80
+ # if os.path.exists(spec_filename):
81
+ # spec = torch.load(spec_filename)
82
+ # else:
83
+ # try:
84
+ spec = spectrogram_torch(audio_norm, self.filter_length,
85
+ self.sampling_rate, self.hop_length, self.win_length,
86
+ center=False)
87
+ spec = spec.squeeze(0)
88
+ # except NotImplementedError:
89
+ # print("?")
90
+ # spec = torch.squeeze(spec, 0)
91
+ # torch.save(spec, spec_filename)
92
+ return spec, audio_norm
93
+
94
+ def get_text(self, text):
95
+ if self.cleaned_text:
96
+ text_norm = cleaned_text_to_sequence(text, self.symbols)
97
+ else:
98
+ text_norm = text_to_sequence(text, self.text_cleaners)
99
+ if self.add_blank:
100
+ text_norm = commons.intersperse(text_norm, 0)
101
+ text_norm = torch.LongTensor(text_norm)
102
+ return text_norm
103
+
104
+ def get_sid(self, sid):
105
+ sid = torch.LongTensor([int(sid)])
106
+ return sid
107
+
108
+ def __getitem__(self, index):
109
+ return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
110
+
111
+ def __len__(self):
112
+ return len(self.audiopaths_sid_text)
113
+
114
+
115
+ class TextAudioSpeakerCollate():
116
+ """ Zero-pads model inputs and targets
117
+ """
118
+
119
+ def __init__(self, return_ids=False):
120
+ self.return_ids = return_ids
121
+
122
+ def __call__(self, batch):
123
+ """Collate's training batch from normalized text, audio and speaker identities
124
+ PARAMS
125
+ ------
126
+ batch: [text_normalized, spec_normalized, wav_normalized, sid]
127
+ """
128
+ # Right zero-pad all one-hot text sequences to max input length
129
+ _, ids_sorted_decreasing = torch.sort(
130
+ torch.LongTensor([x[1].size(1) for x in batch]),
131
+ dim=0, descending=True)
132
+
133
+ max_text_len = max([len(x[0]) for x in batch])
134
+ max_spec_len = max([x[1].size(1) for x in batch])
135
+ max_wav_len = max([x[2].size(1) for x in batch])
136
+
137
+ text_lengths = torch.LongTensor(len(batch))
138
+ spec_lengths = torch.LongTensor(len(batch))
139
+ wav_lengths = torch.LongTensor(len(batch))
140
+ sid = torch.LongTensor(len(batch))
141
+
142
+ text_padded = torch.LongTensor(len(batch), max_text_len)
143
+ spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
144
+ wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
145
+ text_padded.zero_()
146
+ spec_padded.zero_()
147
+ wav_padded.zero_()
148
+ for i in range(len(ids_sorted_decreasing)):
149
+ row = batch[ids_sorted_decreasing[i]]
150
+
151
+ text = row[0]
152
+ text_padded[i, :text.size(0)] = text
153
+ text_lengths[i] = text.size(0)
154
+
155
+ spec = row[1]
156
+ spec_padded[i, :, :spec.size(1)] = spec
157
+ spec_lengths[i] = spec.size(1)
158
+
159
+ wav = row[2]
160
+ wav_padded[i, :, :wav.size(1)] = wav
161
+ wav_lengths[i] = wav.size(1)
162
+
163
+ sid[i] = row[3]
164
+
165
+ if self.return_ids:
166
+ return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
167
+ return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid
168
+
169
+
170
+ class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
171
+ """
172
+ Maintain similar input lengths in a batch.
173
+ Length groups are specified by boundaries.
174
+ Ex) boundaries = [b1, b2, b3] -> any batch is included in either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
175
+
176
+ It removes samples which are not included in the boundaries.
177
+ Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 is discarded.
178
+ """
179
+
180
+ def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
181
+ super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
182
+ self.lengths = dataset.lengths
183
+ self.batch_size = batch_size
184
+ self.boundaries = boundaries
185
+
186
+ self.buckets, self.num_samples_per_bucket = self._create_buckets()
187
+ self.total_size = sum(self.num_samples_per_bucket)
188
+ self.num_samples = self.total_size // self.num_replicas
189
+
190
+ def _create_buckets(self):
191
+ buckets = [[] for _ in range(len(self.boundaries) - 1)]
192
+ for i in range(len(self.lengths)):
193
+ length = self.lengths[i]
194
+ idx_bucket = self._bisect(length)
195
+ if idx_bucket != -1:
196
+ buckets[idx_bucket].append(i)
197
+
198
+ for i in range(len(buckets) - 1, 0, -1):
199
+ if len(buckets[i]) == 0:
200
+ buckets.pop(i)
201
+ self.boundaries.pop(i + 1)
202
+
203
+ num_samples_per_bucket = []
204
+ for i in range(len(buckets)):
205
+ len_bucket = len(buckets[i])
206
+ total_batch_size = self.num_replicas * self.batch_size
207
+ rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
208
+ num_samples_per_bucket.append(len_bucket + rem)
209
+ return buckets, num_samples_per_bucket
210
+
211
+ def __iter__(self):
212
+ # deterministically shuffle based on epoch
213
+ g = torch.Generator()
214
+ g.manual_seed(self.epoch)
215
+
216
+ indices = []
217
+ if self.shuffle:
218
+ for bucket in self.buckets:
219
+ indices.append(torch.randperm(len(bucket), generator=g).tolist())
220
+ else:
221
+ for bucket in self.buckets:
222
+ indices.append(list(range(len(bucket))))
223
+
224
+ batches = []
225
+ for i in range(len(self.buckets)):
226
+ bucket = self.buckets[i]
227
+ len_bucket = len(bucket)
228
+ ids_bucket = indices[i]
229
+ num_samples_bucket = self.num_samples_per_bucket[i]
230
+
231
+ # add extra samples to make it evenly divisible
232
+ rem = num_samples_bucket - len_bucket
233
+ ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
234
+
235
+ # subsample
236
+ ids_bucket = ids_bucket[self.rank::self.num_replicas]
237
+
238
+ # batching
239
+ for j in range(len(ids_bucket) // self.batch_size):
240
+ batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]]
241
+ batches.append(batch)
242
+
243
+ if self.shuffle:
244
+ batch_ids = torch.randperm(len(batches), generator=g).tolist()
245
+ batches = [batches[i] for i in batch_ids]
246
+ self.batches = batches
247
+
248
+ assert len(self.batches) * self.batch_size == self.num_samples
249
+ return iter(self.batches)
250
+
251
+ def _bisect(self, x, lo=0, hi=None):
252
+ if hi is None:
253
+ hi = len(self.boundaries) - 1
254
+
255
+ if hi > lo:
256
+ mid = (hi + lo) // 2
257
+ if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
258
+ return mid
259
+ elif x <= self.boundaries[mid]:
260
+ return self._bisect(x, lo, mid)
261
+ else:
262
+ return self._bisect(x, mid + 1, hi)
263
+ else:
264
+ return -1
265
+
266
+ def __len__(self):
267
+ return self.num_samples // self.batch_size
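For reference, each row that TextAudioSpeakerLoader reads from final_annotation_train.txt / final_annotation_val.txt is unpacked as (audiopath, sid, text): a wav path, an integer speaker ID, and the text (used directly via cleaned_text_to_sequence when "cleaned_text" is true, otherwise run through "text_cleaners"). Judging from how long_audio_transcribe.py (further below) writes its annotations, the columns are pipe-separated; a purely hypothetical row:

```
./segmented_character_voice/CharacterA/CharacterA_123456_0.wav|6|[ZH]some transcribed text[ZH]
```

finetune_speaker_v2.py (also below) wires this dataset into a DistributedBucketSampler and a DataLoader through TextAudioSpeakerCollate.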
denoise_audio.py ADDED
@@ -0,0 +1,18 @@
1
+ import os
2
+ import torchaudio
3
+ raw_audio_dir = "/content/drive/MyDrive/selected_character_wav/"
4
+ denoise_audio_dir = "./denoised_audio/"
5
+ filelist = list(os.walk(raw_audio_dir))[0][2]
6
+
7
+ for file in filelist:
8
+ if file.endswith(".wav"):
9
+ os.system(f"demucs --two-stems=vocals {raw_audio_dir}{file}")
10
+ for file in filelist:
11
+ file = file.replace(".wav", "")
12
+ wav, sr = torchaudio.load(f"./separated/htdemucs/{file}/vocals.wav", frame_offset=0, num_frames=-1, normalize=True,
13
+ channels_first=True)
14
+ # merge two channels into one
15
+ wav = wav.mean(dim=0).unsqueeze(0)
16
+ if sr != 22050:
17
+ wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=22050)(wav)
18
+ torchaudio.save(denoise_audio_dir + file + ".wav", wav, 22050, channels_first=True)
download_model.py ADDED
@@ -0,0 +1,4 @@
1
+ from google.colab import files
2
+ files.download("./G_latest.pth")
3
+ files.download("./finetune_speaker.json")
4
+ files.download("./moegoe_config.json")
download_video.py ADDED
@@ -0,0 +1,37 @@
1
+ import os
2
+ import random
3
+ import shutil
4
+ from concurrent.futures import ThreadPoolExecutor
5
+ from google.colab import files
6
+
7
+ basepath = os.getcwd()
8
+ uploaded = files.upload()  # upload files
9
+ for filename in uploaded.keys():
10
+ assert (filename.endswith(".txt")), "speaker-videolink info must be a .txt file!"
11
+ shutil.move(os.path.join(basepath, filename), os.path.join("./speaker_links.txt"))
12
+
13
+
14
+ def generate_infos():
15
+ infos = []
16
+ with open("./speaker_links.txt", 'r', encoding='utf-8') as f:
17
+ lines = f.readlines()
18
+ for line in lines:
19
+ line = line.replace("\n", "").replace(" ", "")
20
+ if line == "":
21
+ continue
22
+ speaker, link = line.split("|")
23
+ filename = speaker + "_" + str(random.randint(0, 1000000))
24
+ infos.append({"link": link, "filename": filename})
25
+ return infos
26
+
27
+
28
+ def download_video(info):
29
+ link = info["link"]
30
+ filename = info["filename"]
31
+ os.system(f"youtube-dl -f 0 {link} -o ./video_data/{filename}.mp4")
32
+
33
+
34
+ if __name__ == "__main__":
35
+ infos = generate_infos()
36
+ with ThreadPoolExecutor(max_workers=os.cpu_count()) as executor:
37
+ executor.map(download_video, infos)
final_annotation_train.txt ADDED
The diff for this file is too large to render. See raw diff
 
final_annotation_val.txt ADDED
The diff for this file is too large to render. See raw diff
 
finetune_speaker.json ADDED
@@ -0,0 +1,151 @@
1
+ {
2
+ "train": {
3
+ "log_interval": 100,
4
+ "eval_interval": 1000,
5
+ "seed": 1234,
6
+ "epochs": 10000,
7
+ "learning_rate": 0.0002,
8
+ "betas": [
9
+ 0.8,
10
+ 0.99
11
+ ],
12
+ "eps": 1e-09,
13
+ "batch_size": 16,
14
+ "fp16_run": true,
15
+ "lr_decay": 0.999875,
16
+ "segment_size": 8192,
17
+ "init_lr_ratio": 1,
18
+ "warmup_epochs": 0,
19
+ "c_mel": 45,
20
+ "c_kl": 1.0
21
+ },
22
+ "data": {
23
+ "training_files": "final_annotation_train.txt",
24
+ "validation_files": "final_annotation_val.txt",
25
+ "text_cleaners": [
26
+ "zh_ja_mixture_cleaners"
27
+ ],
28
+ "max_wav_value": 32768.0,
29
+ "sampling_rate": 22050,
30
+ "filter_length": 1024,
31
+ "hop_length": 256,
32
+ "win_length": 1024,
33
+ "n_mel_channels": 80,
34
+ "mel_fmin": 0.0,
35
+ "mel_fmax": null,
36
+ "add_blank": true,
37
+ "n_speakers": 7,
38
+ "cleaned_text": true
39
+ },
40
+ "model": {
41
+ "inter_channels": 192,
42
+ "hidden_channels": 192,
43
+ "filter_channels": 768,
44
+ "n_heads": 2,
45
+ "n_layers": 6,
46
+ "kernel_size": 3,
47
+ "p_dropout": 0.1,
48
+ "resblock": "1",
49
+ "resblock_kernel_sizes": [
50
+ 3,
51
+ 7,
52
+ 11
53
+ ],
54
+ "resblock_dilation_sizes": [
55
+ [
56
+ 1,
57
+ 3,
58
+ 5
59
+ ],
60
+ [
61
+ 1,
62
+ 3,
63
+ 5
64
+ ],
65
+ [
66
+ 1,
67
+ 3,
68
+ 5
69
+ ]
70
+ ],
71
+ "upsample_rates": [
72
+ 8,
73
+ 8,
74
+ 2,
75
+ 2
76
+ ],
77
+ "upsample_initial_channel": 512,
78
+ "upsample_kernel_sizes": [
79
+ 16,
80
+ 16,
81
+ 4,
82
+ 4
83
+ ],
84
+ "n_layers_q": 3,
85
+ "use_spectral_norm": false,
86
+ "gin_channels": 256
87
+ },
88
+ "speakers": {
89
+ "5": 0,
90
+ "0": 1,
91
+ "1": 2,
92
+ "2": 3,
93
+ "3": 4,
94
+ "4": 5,
95
+ "zhongli": 6
96
+ },
97
+ "symbols": [
98
+ "_",
99
+ ",",
100
+ ".",
101
+ "!",
102
+ "?",
103
+ "-",
104
+ "~",
105
+ "\u2026",
106
+ "A",
107
+ "E",
108
+ "I",
109
+ "N",
110
+ "O",
111
+ "Q",
112
+ "U",
113
+ "a",
114
+ "b",
115
+ "d",
116
+ "e",
117
+ "f",
118
+ "g",
119
+ "h",
120
+ "i",
121
+ "j",
122
+ "k",
123
+ "l",
124
+ "m",
125
+ "n",
126
+ "o",
127
+ "p",
128
+ "r",
129
+ "s",
130
+ "t",
131
+ "u",
132
+ "v",
133
+ "w",
134
+ "y",
135
+ "z",
136
+ "\u0283",
137
+ "\u02a7",
138
+ "\u02a6",
139
+ "\u026f",
140
+ "\u0279",
141
+ "\u0259",
142
+ "\u0265",
143
+ "\u207c",
144
+ "\u02b0",
145
+ "`",
146
+ "\u2192",
147
+ "\u2193",
148
+ "\u2191",
149
+ " "
150
+ ]
151
+ }
finetune_speaker_v2.py ADDED
@@ -0,0 +1,323 @@
1
+ import os
2
+ import json
3
+ import argparse
4
+ import itertools
5
+ import math
6
+ import torch
7
+ from torch import nn, optim
8
+ from torch.nn import functional as F
9
+ from torch.utils.data import DataLoader
10
+ from torch.utils.tensorboard import SummaryWriter
11
+ import torch.multiprocessing as mp
12
+ import torch.distributed as dist
13
+ from torch.nn.parallel import DistributedDataParallel as DDP
14
+ from torch.cuda.amp import autocast, GradScaler
15
+ from tqdm import tqdm
16
+
17
+ import librosa
18
+ import logging
19
+
20
+ logging.getLogger('numba').setLevel(logging.WARNING)
21
+
22
+ import commons
23
+ import utils
24
+ from data_utils import (
25
+ TextAudioSpeakerLoader,
26
+ TextAudioSpeakerCollate,
27
+ DistributedBucketSampler
28
+ )
29
+ from models import (
30
+ SynthesizerTrn,
31
+ MultiPeriodDiscriminator,
32
+ )
33
+ from losses import (
34
+ generator_loss,
35
+ discriminator_loss,
36
+ feature_loss,
37
+ kl_loss
38
+ )
39
+ from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
40
+
41
+
42
+ torch.backends.cudnn.benchmark = True
43
+ global_step = 0
44
+
45
+
46
+ def main():
47
+ """Assume Single Node Multi GPUs Training Only"""
48
+ assert torch.cuda.is_available(), "CPU training is not allowed."
49
+
50
+ n_gpus = torch.cuda.device_count()
51
+ os.environ['MASTER_ADDR'] = 'localhost'
52
+ os.environ['MASTER_PORT'] = '8000'
53
+
54
+ hps = utils.get_hparams()
55
+ mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
56
+
57
+
58
+ def run(rank, n_gpus, hps):
59
+ global global_step
60
+ symbols = hps['symbols']
61
+ if rank == 0:
62
+ logger = utils.get_logger(hps.model_dir)
63
+ logger.info(hps)
64
+ utils.check_git_hash(hps.model_dir)
65
+ writer = SummaryWriter(log_dir=hps.model_dir)
66
+ writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
67
+
68
+ # Use gloo backend on Windows for Pytorch
69
+ dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
70
+ torch.manual_seed(hps.train.seed)
71
+ torch.cuda.set_device(rank)
72
+
73
+ train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data, symbols)
74
+ train_sampler = DistributedBucketSampler(
75
+ train_dataset,
76
+ hps.train.batch_size,
77
+ [32,300,400,500,600,700,800,900,1000],
78
+ num_replicas=n_gpus,
79
+ rank=rank,
80
+ shuffle=True)
81
+ collate_fn = TextAudioSpeakerCollate()
82
+ train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True,
83
+ collate_fn=collate_fn, batch_sampler=train_sampler)
84
+ # train_loader = DataLoader(train_dataset, batch_size=hps.train.batch_size, num_workers=2, shuffle=False, pin_memory=True,
85
+ # collate_fn=collate_fn)
86
+ if rank == 0:
87
+ eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data, symbols)
88
+ eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
89
+ batch_size=hps.train.batch_size, pin_memory=True,
90
+ drop_last=False, collate_fn=collate_fn)
91
+
92
+ net_g = SynthesizerTrn(
93
+ len(symbols),
94
+ hps.data.filter_length // 2 + 1,
95
+ hps.train.segment_size // hps.data.hop_length,
96
+ n_speakers=hps.data.n_speakers,
97
+ **hps.model).cuda(rank)
98
+ net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
99
+
100
+ # load existing model
101
+ _, _, _, _ = utils.load_checkpoint("./pretrained_models/G_0.pth", net_g, None, drop_speaker_emb=hps.drop_speaker_embed)
102
+ _, _, _, _ = utils.load_checkpoint("./pretrained_models/D_0.pth", net_d, None)
103
+ # _, _, _, _ = utils.load_checkpoint("./pretrained_models/G_0.pth", net_g, None)
104
+ # _, _, _, _ = utils.load_checkpoint("./pretrained_models/D_0.pth", net_d, None)
105
+ epoch_str = 1
106
+ global_step = 0
107
+ # freeze all other layers except speaker embedding
108
+ for p in net_g.parameters():
109
+ p.requires_grad = True
110
+ for p in net_d.parameters():
111
+ p.requires_grad = True
112
+ # for p in net_d.parameters():
113
+ # p.requires_grad = False
114
+ # net_g.emb_g.weight.requires_grad = True
115
+ optim_g = torch.optim.AdamW(
116
+ net_g.parameters(),
117
+ hps.train.learning_rate,
118
+ betas=hps.train.betas,
119
+ eps=hps.train.eps)
120
+ optim_d = torch.optim.AdamW(
121
+ net_d.parameters(),
122
+ hps.train.learning_rate,
123
+ betas=hps.train.betas,
124
+ eps=hps.train.eps)
125
+ # optim_d = None
126
+ net_g = DDP(net_g, device_ids=[rank])
127
+ net_d = DDP(net_d, device_ids=[rank])
128
+
129
+ scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay)
130
+ scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay)
131
+
132
+ scaler = GradScaler(enabled=hps.train.fp16_run)
133
+
134
+ for epoch in range(epoch_str, hps.train.epochs + 1):
135
+ if rank==0:
136
+ train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
137
+ else:
138
+ train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None)
139
+ scheduler_g.step()
140
+ scheduler_d.step()
141
+
142
+
143
+ def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
144
+ net_g, net_d = nets
145
+ optim_g, optim_d = optims
146
+ scheduler_g, scheduler_d = schedulers
147
+ train_loader, eval_loader = loaders
148
+ if writers is not None:
149
+ writer, writer_eval = writers
150
+
151
+ # train_loader.batch_sampler.set_epoch(epoch)
152
+ global global_step
153
+
154
+ net_g.train()
155
+ net_d.train()
156
+ for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(tqdm(train_loader)):
157
+ x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
158
+ spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
159
+ y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
160
+ speakers = speakers.cuda(rank, non_blocking=True)
161
+
162
+ with autocast(enabled=hps.train.fp16_run):
163
+ y_hat, l_length, attn, ids_slice, x_mask, z_mask,\
164
+ (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
165
+
166
+ mel = spec_to_mel_torch(
167
+ spec,
168
+ hps.data.filter_length,
169
+ hps.data.n_mel_channels,
170
+ hps.data.sampling_rate,
171
+ hps.data.mel_fmin,
172
+ hps.data.mel_fmax)
173
+ y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
174
+ y_hat_mel = mel_spectrogram_torch(
175
+ y_hat.squeeze(1),
176
+ hps.data.filter_length,
177
+ hps.data.n_mel_channels,
178
+ hps.data.sampling_rate,
179
+ hps.data.hop_length,
180
+ hps.data.win_length,
181
+ hps.data.mel_fmin,
182
+ hps.data.mel_fmax
183
+ )
184
+
185
+ y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
186
+
187
+ # Discriminator
188
+ y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
189
+ with autocast(enabled=False):
190
+ loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
191
+ loss_disc_all = loss_disc
192
+ optim_d.zero_grad()
193
+ scaler.scale(loss_disc_all).backward()
194
+ scaler.unscale_(optim_d)
195
+ grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
196
+ scaler.step(optim_d)
197
+
198
+ with autocast(enabled=hps.train.fp16_run):
199
+ # Generator
200
+ y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
201
+ with autocast(enabled=False):
202
+ loss_dur = torch.sum(l_length.float())
203
+ loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
204
+ loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
205
+
206
+ loss_fm = feature_loss(fmap_r, fmap_g)
207
+ loss_gen, losses_gen = generator_loss(y_d_hat_g)
208
+ loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
209
+ optim_g.zero_grad()
210
+ scaler.scale(loss_gen_all).backward()
211
+ scaler.unscale_(optim_g)
212
+ grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
213
+ scaler.step(optim_g)
214
+ scaler.update()
215
+
216
+ if rank==0:
217
+ if global_step % hps.train.log_interval == 0:
218
+ lr = optim_g.param_groups[0]['lr']
219
+ losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
220
+ logger.info('Train Epoch: {} [{:.0f}%]'.format(
221
+ epoch,
222
+ 100. * batch_idx / len(train_loader)))
223
+ logger.info([x.item() for x in losses] + [global_step, lr])
224
+
225
+ scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_g": grad_norm_g}
226
+ scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
227
+
228
+ scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
229
+ scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
230
+ scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
231
+ image_dict = {
232
+ "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
233
+ "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
234
+ "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
235
+ "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy())
236
+ }
237
+ utils.summarize(
238
+ writer=writer,
239
+ global_step=global_step,
240
+ images=image_dict,
241
+ scalars=scalar_dict)
242
+
243
+ if global_step % hps.train.eval_interval == 0:
244
+ evaluate(hps, net_g, eval_loader, writer_eval)
245
+ utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
246
+ utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch,
247
+ os.path.join(hps.model_dir, "G_latest.pth".format(global_step)))
248
+ # utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
249
+ old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-4000))
250
+ # old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-400))
251
+ if os.path.exists(old_g):
252
+ os.remove(old_g)
253
+ # if os.path.exists(old_d):
254
+ # os.remove(old_d)
255
+ global_step += 1
256
+ if epoch > hps.max_epochs:
257
+ print("Maximum epoch reached, closing training...")
258
+ exit()
259
+
260
+ if rank == 0:
261
+ logger.info('====> Epoch: {}'.format(epoch))
262
+
263
+
264
+ def evaluate(hps, generator, eval_loader, writer_eval):
265
+ generator.eval()
266
+ with torch.no_grad():
267
+ for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader):
268
+ x, x_lengths = x.cuda(0), x_lengths.cuda(0)
269
+ spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
270
+ y, y_lengths = y.cuda(0), y_lengths.cuda(0)
271
+ speakers = speakers.cuda(0)
272
+
273
+ # evaluate only the first sample of the first batch
274
+ x = x[:1]
275
+ x_lengths = x_lengths[:1]
276
+ spec = spec[:1]
277
+ spec_lengths = spec_lengths[:1]
278
+ y = y[:1]
279
+ y_lengths = y_lengths[:1]
280
+ speakers = speakers[:1]
281
+ break
282
+ y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000)
283
+ y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length
284
+
285
+ mel = spec_to_mel_torch(
286
+ spec,
287
+ hps.data.filter_length,
288
+ hps.data.n_mel_channels,
289
+ hps.data.sampling_rate,
290
+ hps.data.mel_fmin,
291
+ hps.data.mel_fmax)
292
+ y_hat_mel = mel_spectrogram_torch(
293
+ y_hat.squeeze(1).float(),
294
+ hps.data.filter_length,
295
+ hps.data.n_mel_channels,
296
+ hps.data.sampling_rate,
297
+ hps.data.hop_length,
298
+ hps.data.win_length,
299
+ hps.data.mel_fmin,
300
+ hps.data.mel_fmax
301
+ )
302
+ image_dict = {
303
+ "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
304
+ }
305
+ audio_dict = {
306
+ "gen/audio": y_hat[0,:,:y_hat_lengths[0]]
307
+ }
308
+ if global_step == 0:
309
+ image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
310
+ audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]})
311
+
312
+ utils.summarize(
313
+ writer=writer_eval,
314
+ global_step=global_step,
315
+ images=image_dict,
316
+ audios=audio_dict,
317
+ audio_sampling_rate=hps.data.sampling_rate
318
+ )
319
+ generator.train()
320
+
321
+
322
+ if __name__ == "__main__":
323
+ main()
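A few hedged usage notes on the trainer above: it expects pretrained checkpoints at ./pretrained_models/G_0.pth and ./pretrained_models/D_0.pth, spawns one process per visible GPU on a single node (MASTER_PORT 8000; gloo backend on Windows, NCCL otherwise), saves G_{step}.pth plus G_latest.pth every eval_interval steps while deleting the generator checkpoint from 4000 steps earlier, and reads two extra hyperparameters, hps.max_epochs and hps.drop_speaker_embed, on top of the JSON config. The exact command-line flags come from utils.get_hparams(), which is not part of this listing; a launch along the lines of `python finetune_speaker_v2.py -m ./OUTPUT_MODEL --max_epochs 100 --drop_speaker_embed True` is an assumption here, not a documented interface.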
long_audio_transcribe.py ADDED
@@ -0,0 +1,71 @@
1
+ from moviepy.editor import AudioFileClip
2
+ import whisper
3
+ import os
4
+ import torchaudio
5
+ import librosa
6
+ import torch
7
+ import argparse
8
+ parent_dir = "./denoised_audio/"
9
+ filelist = list(os.walk(parent_dir))[0][2]
10
+ if __name__ == "__main__":
11
+ parser = argparse.ArgumentParser()
12
+ parser.add_argument("--languages", default="CJE")
13
+ parser.add_argument("--whisper_size", default="medium")
14
+ args = parser.parse_args()
15
+ if args.languages == "CJE":
16
+ lang2token = {
17
+ 'zh': "[ZH]",
18
+ 'ja': "[JA]",
19
+ "en": "[EN]",
20
+ }
21
+ elif args.languages == "CJ":
22
+ lang2token = {
23
+ 'zh': "[ZH]",
24
+ 'ja': "[JA]",
25
+ }
26
+ elif args.languages == "C":
27
+ lang2token = {
28
+ 'zh': "[ZH]",
29
+ }
30
+ assert(torch.cuda.is_available()), "Please enable GPU in order to run Whisper!"
31
+ model = whisper.load_model(args.whisper_size)
32
+ speaker_annos = []
33
+ for file in filelist:
34
+ print(f"transcribing {parent_dir + file}...\n")
35
+ options = dict(beam_size=5, best_of=5)
36
+ transcribe_options = dict(task="transcribe", **options)
37
+ result = model.transcribe(parent_dir + file, **transcribe_options)
38
+ segments = result["segments"]
39
+ # result = model.transcribe(parent_dir + file)
40
+ lang = result['language']
41
+ if result['language'] not in list(lang2token.keys()):
42
+ print(f"{lang} not supported, ignoring...\n")
43
+ continue
44
+ # segment audio based on segment results
45
+ character_name = file.replace(".wav", "").split("_")[0]
46
+ code = file.replace(".wav", "").split("_")[1] + '_' + file.replace(".wav", "").split("_")[2]
47
+ if not os.path.exists("./segmented_character_voice/" + character_name):
48
+ os.mkdir("./segmented_character_voice/" + character_name)
49
+ wav, sr = torchaudio.load(parent_dir + file, frame_offset=0, num_frames=-1, normalize=True,
50
+ channels_first=True)
51
+
52
+ for i, seg in enumerate(result['segments']):
53
+ start_time = seg['start']
54
+ end_time = seg['end']
55
+ text = seg['text']
56
+ text = lang2token[lang] + text.replace("\n", "") + lang2token[lang]
57
+ text = text + "\n"
58
+ wav_seg = wav[:, int(start_time*sr):int(end_time*sr)]
59
+ wav_seg_name = f"{character_name}_{code}_{i}.wav"
60
+ savepth = "./segmented_character_voice/" + character_name + "/" + wav_seg_name
61
+ speaker_annos.append(savepth + "|" + character_name + "|" + text)
62
+ print(f"Transcribed segment: {speaker_annos[-1]}")
63
+ # trimmed_wav_seg = librosa.effects.trim(wav_seg.squeeze().numpy())
64
+ # trimmed_wav_seg = torch.tensor(trimmed_wav_seg[0]).unsqueeze(0)
65
+ torchaudio.save(savepth, wav_seg, 22050, channels_first=True)
66
+ if len(speaker_annos) == 0:
67
+ print("Warning: no long audios & videos found, this IS expected if you have only uploaded short audios")
68
+ print("this IS NOT expected if you have uploaded any long audios, videos or video links. Please check your file structure or make sure your audio/video language is supported.")
69
+ with open("long_character_anno.txt", 'w', encoding='utf-8') as f:
70
+ for line in speaker_annos:
71
+ f.write(line)
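Each line appended to long_character_anno.txt therefore has the form below (hypothetical values; the language tag comes from lang2token and the numeric code is derived from the source file name):

```
./segmented_character_voice/CharacterA/CharacterA_123456_0_3.wav|CharacterA|[ZH]some transcribed text[ZH]
```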
losses.py ADDED
@@ -0,0 +1,61 @@
1
+ import torch
2
+ from torch.nn import functional as F
3
+
4
+ import commons
5
+
6
+
7
+ def feature_loss(fmap_r, fmap_g):
8
+ loss = 0
9
+ for dr, dg in zip(fmap_r, fmap_g):
10
+ for rl, gl in zip(dr, dg):
11
+ rl = rl.float().detach()
12
+ gl = gl.float()
13
+ loss += torch.mean(torch.abs(rl - gl))
14
+
15
+ return loss * 2
16
+
17
+
18
+ def discriminator_loss(disc_real_outputs, disc_generated_outputs):
19
+ loss = 0
20
+ r_losses = []
21
+ g_losses = []
22
+ for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
23
+ dr = dr.float()
24
+ dg = dg.float()
25
+ r_loss = torch.mean((1-dr)**2)
26
+ g_loss = torch.mean(dg**2)
27
+ loss += (r_loss + g_loss)
28
+ r_losses.append(r_loss.item())
29
+ g_losses.append(g_loss.item())
30
+
31
+ return loss, r_losses, g_losses
32
+
33
+
34
+ def generator_loss(disc_outputs):
35
+ loss = 0
36
+ gen_losses = []
37
+ for dg in disc_outputs:
38
+ dg = dg.float()
39
+ l = torch.mean((1-dg)**2)
40
+ gen_losses.append(l)
41
+ loss += l
42
+
43
+ return loss, gen_losses
44
+
45
+
46
+ def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
47
+ """
48
+ z_p, logs_q: [b, h, t_t]
49
+ m_p, logs_p: [b, h, t_t]
50
+ """
51
+ z_p = z_p.float()
52
+ logs_q = logs_q.float()
53
+ m_p = m_p.float()
54
+ logs_p = logs_p.float()
55
+ z_mask = z_mask.float()
56
+
57
+ kl = logs_p - logs_q - 0.5
58
+ kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
59
+ kl = torch.sum(kl * z_mask)
60
+ l = kl / torch.sum(z_mask)
61
+ return l
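For clarity, kl_loss mirrors the usual VITS KL term: per masked element it evaluates

    kl = (logs_p - logs_q) - 1/2 + (z_p - m_p)^2 * exp(-2 * logs_p) / 2

where logs_p and logs_q are log standard deviations, and it returns sum(kl * z_mask) / sum(z_mask), i.e. the divergence between the (flow-mapped) posterior sample z_p and the text-conditioned prior N(m_p, exp(2 * logs_p)), averaged over valid frames.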
mel_processing.py ADDED
@@ -0,0 +1,112 @@
1
+ import math
2
+ import os
3
+ import random
4
+ import torch
5
+ from torch import nn
6
+ import torch.nn.functional as F
7
+ import torch.utils.data
8
+ import numpy as np
9
+ import librosa
10
+ import librosa.util as librosa_util
11
+ from librosa.util import normalize, pad_center, tiny
12
+ from scipy.signal import get_window
13
+ from scipy.io.wavfile import read
14
+ from librosa.filters import mel as librosa_mel_fn
15
+
16
+ MAX_WAV_VALUE = 32768.0
17
+
18
+
19
+ def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
20
+ """
21
+ PARAMS
22
+ ------
23
+ C: compression factor
24
+ """
25
+ return torch.log(torch.clamp(x, min=clip_val) * C)
26
+
27
+
28
+ def dynamic_range_decompression_torch(x, C=1):
29
+ """
30
+ PARAMS
31
+ ------
32
+ C: compression factor used to compress
33
+ """
34
+ return torch.exp(x) / C
35
+
36
+
37
+ def spectral_normalize_torch(magnitudes):
38
+ output = dynamic_range_compression_torch(magnitudes)
39
+ return output
40
+
41
+
42
+ def spectral_de_normalize_torch(magnitudes):
43
+ output = dynamic_range_decompression_torch(magnitudes)
44
+ return output
45
+
46
+
47
+ mel_basis = {}
48
+ hann_window = {}
49
+
50
+
51
+ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
52
+ if torch.min(y) < -1.:
53
+ print('min value is ', torch.min(y))
54
+ if torch.max(y) > 1.:
55
+ print('max value is ', torch.max(y))
56
+
57
+ global hann_window
58
+ dtype_device = str(y.dtype) + '_' + str(y.device)
59
+ wnsize_dtype_device = str(win_size) + '_' + dtype_device
60
+ if wnsize_dtype_device not in hann_window:
61
+ hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
62
+
63
+ y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
64
+ y = y.squeeze(1)
65
+
66
+ spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
67
+ center=center, pad_mode='reflect', normalized=False, onesided=True)
68
+
69
+ spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
70
+ return spec
71
+
72
+
73
+ def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
74
+ global mel_basis
75
+ dtype_device = str(spec.dtype) + '_' + str(spec.device)
76
+ fmax_dtype_device = str(fmax) + '_' + dtype_device
77
+ if fmax_dtype_device not in mel_basis:
78
+ mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
79
+ mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
80
+ spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
81
+ spec = spectral_normalize_torch(spec)
82
+ return spec
83
+
84
+
85
+ def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
86
+ if torch.min(y) < -1.:
87
+ print('min value is ', torch.min(y))
88
+ if torch.max(y) > 1.:
89
+ print('max value is ', torch.max(y))
90
+
91
+ global mel_basis, hann_window
92
+ dtype_device = str(y.dtype) + '_' + str(y.device)
93
+ fmax_dtype_device = str(fmax) + '_' + dtype_device
94
+ wnsize_dtype_device = str(win_size) + '_' + dtype_device
95
+ if fmax_dtype_device not in mel_basis:
96
+ mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
97
+ mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
98
+ if wnsize_dtype_device not in hann_window:
99
+ hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
100
+
101
+ y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
102
+ y = y.squeeze(1)
103
+
104
+ spec = torch.stft(y.float(), n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
105
+ center=center, pad_mode='reflect', normalized=False, onesided=True)
106
+
107
+ spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
108
+
109
+ spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
110
+ spec = spectral_normalize_torch(spec)
111
+
112
+ return spec
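Below is a minimal sketch (assumptions noted in the comments) of how these helpers combine to produce a linear spectrogram and a mel spectrogram with the same parameters used by data_utils.py and the configs above (n_fft 1024, hop 256, win 1024, 80 mel bands, 22050 Hz):

```python
import torch
from mel_processing import spectrogram_torch, spec_to_mel_torch

# Stand-in waveform: one second of mono audio at 22050 Hz in [-1, 1]
# (in the actual pipeline this comes from torchaudio.load, as in data_utils.py).
wav = torch.rand(1, 22050) * 2 - 1

spec = spectrogram_torch(wav, n_fft=1024, sampling_rate=22050,
                         hop_size=256, win_size=1024, center=False)   # [1, 513, frames]
mel = spec_to_mel_torch(spec, n_fft=1024, num_mels=80,
                        sampling_rate=22050, fmin=0.0, fmax=None)     # [1, 80, frames]
```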
models.py ADDED
@@ -0,0 +1,533 @@
1
+ import copy
2
+ import math
3
+ import torch
4
+ from torch import nn
5
+ from torch.nn import functional as F
6
+
7
+ import commons
8
+ import modules
9
+ import attentions
10
+ import monotonic_align
11
+
12
+ from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
13
+ from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
14
+ from commons import init_weights, get_padding
15
+
16
+
17
+ class StochasticDurationPredictor(nn.Module):
18
+ def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
19
+ super().__init__()
20
+ filter_channels = in_channels # it needs to be removed in a future version.
21
+ self.in_channels = in_channels
22
+ self.filter_channels = filter_channels
23
+ self.kernel_size = kernel_size
24
+ self.p_dropout = p_dropout
25
+ self.n_flows = n_flows
26
+ self.gin_channels = gin_channels
27
+
28
+ self.log_flow = modules.Log()
29
+ self.flows = nn.ModuleList()
30
+ self.flows.append(modules.ElementwiseAffine(2))
31
+ for i in range(n_flows):
32
+ self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
33
+ self.flows.append(modules.Flip())
34
+
35
+ self.post_pre = nn.Conv1d(1, filter_channels, 1)
36
+ self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
37
+ self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
38
+ self.post_flows = nn.ModuleList()
39
+ self.post_flows.append(modules.ElementwiseAffine(2))
40
+ for i in range(4):
41
+ self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
42
+ self.post_flows.append(modules.Flip())
43
+
44
+ self.pre = nn.Conv1d(in_channels, filter_channels, 1)
45
+ self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
46
+ self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
47
+ if gin_channels != 0:
48
+ self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
49
+
50
+ def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
51
+ x = torch.detach(x)
52
+ x = self.pre(x)
53
+ if g is not None:
54
+ g = torch.detach(g)
55
+ x = x + self.cond(g)
56
+ x = self.convs(x, x_mask)
57
+ x = self.proj(x) * x_mask
58
+
59
+ if not reverse:
60
+ flows = self.flows
61
+ assert w is not None
62
+
63
+ logdet_tot_q = 0
64
+ h_w = self.post_pre(w)
65
+ h_w = self.post_convs(h_w, x_mask)
66
+ h_w = self.post_proj(h_w) * x_mask
67
+ e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
68
+ z_q = e_q
69
+ for flow in self.post_flows:
70
+ z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
71
+ logdet_tot_q += logdet_q
72
+ z_u, z1 = torch.split(z_q, [1, 1], 1)
73
+ u = torch.sigmoid(z_u) * x_mask
74
+ z0 = (w - u) * x_mask
75
+ logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
76
+ logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
77
+
78
+ logdet_tot = 0
79
+ z0, logdet = self.log_flow(z0, x_mask)
80
+ logdet_tot += logdet
81
+ z = torch.cat([z0, z1], 1)
82
+ for flow in flows:
83
+ z, logdet = flow(z, x_mask, g=x, reverse=reverse)
84
+ logdet_tot = logdet_tot + logdet
85
+ nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
86
+ return nll + logq # [b]
87
+ else:
88
+ flows = list(reversed(self.flows))
89
+ flows = flows[:-2] + [flows[-1]] # remove a useless vflow
90
+ z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
91
+ for flow in flows:
92
+ z = flow(z, x_mask, g=x, reverse=reverse)
93
+ z0, z1 = torch.split(z, [1, 1], 1)
94
+ logw = z0
95
+ return logw
96
+
97
+
98
+ class DurationPredictor(nn.Module):
99
+ def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
100
+ super().__init__()
101
+
102
+ self.in_channels = in_channels
103
+ self.filter_channels = filter_channels
104
+ self.kernel_size = kernel_size
105
+ self.p_dropout = p_dropout
106
+ self.gin_channels = gin_channels
107
+
108
+ self.drop = nn.Dropout(p_dropout)
109
+ self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
110
+ self.norm_1 = modules.LayerNorm(filter_channels)
111
+ self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
112
+ self.norm_2 = modules.LayerNorm(filter_channels)
113
+ self.proj = nn.Conv1d(filter_channels, 1, 1)
114
+
115
+ if gin_channels != 0:
116
+ self.cond = nn.Conv1d(gin_channels, in_channels, 1)
117
+
118
+ def forward(self, x, x_mask, g=None):
119
+ x = torch.detach(x)
120
+ if g is not None:
121
+ g = torch.detach(g)
122
+ x = x + self.cond(g)
123
+ x = self.conv_1(x * x_mask)
124
+ x = torch.relu(x)
125
+ x = self.norm_1(x)
126
+ x = self.drop(x)
127
+ x = self.conv_2(x * x_mask)
128
+ x = torch.relu(x)
129
+ x = self.norm_2(x)
130
+ x = self.drop(x)
131
+ x = self.proj(x * x_mask)
132
+ return x * x_mask
133
+
134
+
135
+ class TextEncoder(nn.Module):
136
+ def __init__(self,
137
+ n_vocab,
138
+ out_channels,
139
+ hidden_channels,
140
+ filter_channels,
141
+ n_heads,
142
+ n_layers,
143
+ kernel_size,
144
+ p_dropout):
145
+ super().__init__()
146
+ self.n_vocab = n_vocab
147
+ self.out_channels = out_channels
148
+ self.hidden_channels = hidden_channels
149
+ self.filter_channels = filter_channels
150
+ self.n_heads = n_heads
151
+ self.n_layers = n_layers
152
+ self.kernel_size = kernel_size
153
+ self.p_dropout = p_dropout
154
+
155
+ self.emb = nn.Embedding(n_vocab, hidden_channels)
156
+ nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
157
+
158
+ self.encoder = attentions.Encoder(
159
+ hidden_channels,
160
+ filter_channels,
161
+ n_heads,
162
+ n_layers,
163
+ kernel_size,
164
+ p_dropout)
165
+ self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
166
+
167
+ def forward(self, x, x_lengths):
168
+ x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
169
+ x = torch.transpose(x, 1, -1) # [b, h, t]
170
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
171
+
172
+ x = self.encoder(x * x_mask, x_mask)
173
+ stats = self.proj(x) * x_mask
174
+
175
+ m, logs = torch.split(stats, self.out_channels, dim=1)
176
+ return x, m, logs, x_mask
177
+
178
+
179
+ class ResidualCouplingBlock(nn.Module):
180
+ def __init__(self,
181
+ channels,
182
+ hidden_channels,
183
+ kernel_size,
184
+ dilation_rate,
185
+ n_layers,
186
+ n_flows=4,
187
+ gin_channels=0):
188
+ super().__init__()
189
+ self.channels = channels
190
+ self.hidden_channels = hidden_channels
191
+ self.kernel_size = kernel_size
192
+ self.dilation_rate = dilation_rate
193
+ self.n_layers = n_layers
194
+ self.n_flows = n_flows
195
+ self.gin_channels = gin_channels
196
+
197
+ self.flows = nn.ModuleList()
198
+ for i in range(n_flows):
199
+ self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
200
+ self.flows.append(modules.Flip())
201
+
202
+ def forward(self, x, x_mask, g=None, reverse=False):
203
+ if not reverse:
204
+ for flow in self.flows:
205
+ x, _ = flow(x, x_mask, g=g, reverse=reverse)
206
+ else:
207
+ for flow in reversed(self.flows):
208
+ x = flow(x, x_mask, g=g, reverse=reverse)
209
+ return x
210
+
211
+
212
+ class PosteriorEncoder(nn.Module):
213
+ def __init__(self,
214
+ in_channels,
215
+ out_channels,
216
+ hidden_channels,
217
+ kernel_size,
218
+ dilation_rate,
219
+ n_layers,
220
+ gin_channels=0):
221
+ super().__init__()
222
+ self.in_channels = in_channels
223
+ self.out_channels = out_channels
224
+ self.hidden_channels = hidden_channels
225
+ self.kernel_size = kernel_size
226
+ self.dilation_rate = dilation_rate
227
+ self.n_layers = n_layers
228
+ self.gin_channels = gin_channels
229
+
230
+ self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
231
+ self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
232
+ self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
233
+
234
+ def forward(self, x, x_lengths, g=None):
235
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
236
+ x = self.pre(x) * x_mask
237
+ x = self.enc(x, x_mask, g=g)
238
+ stats = self.proj(x) * x_mask
239
+ m, logs = torch.split(stats, self.out_channels, dim=1)
240
+ z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
241
+ return z, m, logs, x_mask
242
+
243
+
244
+ class Generator(torch.nn.Module):
245
+ def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
246
+ super(Generator, self).__init__()
247
+ self.num_kernels = len(resblock_kernel_sizes)
248
+ self.num_upsamples = len(upsample_rates)
249
+ self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
250
+ resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
251
+
252
+ self.ups = nn.ModuleList()
253
+ for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
254
+ self.ups.append(weight_norm(
255
+ ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
256
+ k, u, padding=(k-u)//2)))
257
+
258
+ self.resblocks = nn.ModuleList()
259
+ for i in range(len(self.ups)):
260
+ ch = upsample_initial_channel//(2**(i+1))
261
+ for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
262
+ self.resblocks.append(resblock(ch, k, d))
263
+
264
+ self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
265
+ self.ups.apply(init_weights)
266
+
267
+ if gin_channels != 0:
268
+ self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
269
+
270
+ def forward(self, x, g=None):
271
+ x = self.conv_pre(x)
272
+ if g is not None:
273
+ x = x + self.cond(g)
274
+
275
+ for i in range(self.num_upsamples):
276
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
277
+ x = self.ups[i](x)
278
+ xs = None
279
+ for j in range(self.num_kernels):
280
+ if xs is None:
281
+ xs = self.resblocks[i*self.num_kernels+j](x)
282
+ else:
283
+ xs += self.resblocks[i*self.num_kernels+j](x)
284
+ x = xs / self.num_kernels
285
+ x = F.leaky_relu(x)
286
+ x = self.conv_post(x)
287
+ x = torch.tanh(x)
288
+
289
+ return x
290
+
291
+ def remove_weight_norm(self):
292
+ print('Removing weight norm...')
293
+ for l in self.ups:
294
+ remove_weight_norm(l)
295
+ for l in self.resblocks:
296
+ l.remove_weight_norm()
297
+
298
+
299
+ class DiscriminatorP(torch.nn.Module):
300
+ def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
301
+ super(DiscriminatorP, self).__init__()
302
+ self.period = period
303
+ self.use_spectral_norm = use_spectral_norm
304
+ norm_f = weight_norm if use_spectral_norm == False else spectral_norm
305
+ self.convs = nn.ModuleList([
306
+ norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
307
+ norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
308
+ norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
309
+ norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
310
+ norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
311
+ ])
312
+ self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
313
+
314
+ def forward(self, x):
315
+ fmap = []
316
+
317
+ # 1d to 2d
318
+ b, c, t = x.shape
319
+ if t % self.period != 0: # pad first
320
+ n_pad = self.period - (t % self.period)
321
+ x = F.pad(x, (0, n_pad), "reflect")
322
+ t = t + n_pad
323
+ x = x.view(b, c, t // self.period, self.period)
324
+
325
+ for l in self.convs:
326
+ x = l(x)
327
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
328
+ fmap.append(x)
329
+ x = self.conv_post(x)
330
+ fmap.append(x)
331
+ x = torch.flatten(x, 1, -1)
332
+
333
+ return x, fmap
334
+
335
+
336
+ class DiscriminatorS(torch.nn.Module):
337
+ def __init__(self, use_spectral_norm=False):
338
+ super(DiscriminatorS, self).__init__()
339
+ norm_f = weight_norm if use_spectral_norm == False else spectral_norm
340
+ self.convs = nn.ModuleList([
341
+ norm_f(Conv1d(1, 16, 15, 1, padding=7)),
342
+ norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
343
+ norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
344
+ norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
345
+ norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
346
+ norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
347
+ ])
348
+ self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
349
+
350
+ def forward(self, x):
351
+ fmap = []
352
+
353
+ for l in self.convs:
354
+ x = l(x)
355
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
356
+ fmap.append(x)
357
+ x = self.conv_post(x)
358
+ fmap.append(x)
359
+ x = torch.flatten(x, 1, -1)
360
+
361
+ return x, fmap
362
+
363
+
364
+ class MultiPeriodDiscriminator(torch.nn.Module):
365
+ def __init__(self, use_spectral_norm=False):
366
+ super(MultiPeriodDiscriminator, self).__init__()
367
+ periods = [2,3,5,7,11]
368
+
369
+ discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
370
+ discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
371
+ self.discriminators = nn.ModuleList(discs)
372
+
373
+ def forward(self, y, y_hat):
374
+ y_d_rs = []
375
+ y_d_gs = []
376
+ fmap_rs = []
377
+ fmap_gs = []
378
+ for i, d in enumerate(self.discriminators):
379
+ y_d_r, fmap_r = d(y)
380
+ y_d_g, fmap_g = d(y_hat)
381
+ y_d_rs.append(y_d_r)
382
+ y_d_gs.append(y_d_g)
383
+ fmap_rs.append(fmap_r)
384
+ fmap_gs.append(fmap_g)
385
+
386
+ return y_d_rs, y_d_gs, fmap_rs, fmap_gs
387
+
388
+
389
+
390
+ class SynthesizerTrn(nn.Module):
391
+ """
392
+ Synthesizer for Training
393
+ """
394
+
395
+ def __init__(self,
396
+ n_vocab,
397
+ spec_channels,
398
+ segment_size,
399
+ inter_channels,
400
+ hidden_channels,
401
+ filter_channels,
402
+ n_heads,
403
+ n_layers,
404
+ kernel_size,
405
+ p_dropout,
406
+ resblock,
407
+ resblock_kernel_sizes,
408
+ resblock_dilation_sizes,
409
+ upsample_rates,
410
+ upsample_initial_channel,
411
+ upsample_kernel_sizes,
412
+ n_speakers=0,
413
+ gin_channels=0,
414
+ use_sdp=True,
415
+ **kwargs):
416
+
417
+ super().__init__()
418
+ self.n_vocab = n_vocab
419
+ self.spec_channels = spec_channels
420
+ self.inter_channels = inter_channels
421
+ self.hidden_channels = hidden_channels
422
+ self.filter_channels = filter_channels
423
+ self.n_heads = n_heads
424
+ self.n_layers = n_layers
425
+ self.kernel_size = kernel_size
426
+ self.p_dropout = p_dropout
427
+ self.resblock = resblock
428
+ self.resblock_kernel_sizes = resblock_kernel_sizes
429
+ self.resblock_dilation_sizes = resblock_dilation_sizes
430
+ self.upsample_rates = upsample_rates
431
+ self.upsample_initial_channel = upsample_initial_channel
432
+ self.upsample_kernel_sizes = upsample_kernel_sizes
433
+ self.segment_size = segment_size
434
+ self.n_speakers = n_speakers
435
+ self.gin_channels = gin_channels
436
+
437
+ self.use_sdp = use_sdp
438
+
439
+ self.enc_p = TextEncoder(n_vocab,
440
+ inter_channels,
441
+ hidden_channels,
442
+ filter_channels,
443
+ n_heads,
444
+ n_layers,
445
+ kernel_size,
446
+ p_dropout)
447
+ self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
448
+ self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
449
+ self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
450
+
451
+ if use_sdp:
452
+ self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
453
+ else:
454
+ self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
455
+
456
+ if n_speakers >= 1:
457
+ self.emb_g = nn.Embedding(n_speakers, gin_channels)
458
+
459
+ def forward(self, x, x_lengths, y, y_lengths, sid=None):
460
+
461
+ x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
462
+ if self.n_speakers > 0:
463
+ g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
464
+ else:
465
+ g = None
466
+
467
+ z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
468
+ z_p = self.flow(z, y_mask, g=g)
469
+
470
+ with torch.no_grad():
471
+ # negative cross-entropy
472
+ s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
473
+ neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
474
+ neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
475
+ neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
476
+ neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
477
+ neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
478
+
479
+ attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
480
+ attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
481
+
482
+ w = attn.sum(2)
483
+ if self.use_sdp:
484
+ l_length = self.dp(x, x_mask, w, g=g)
485
+ l_length = l_length / torch.sum(x_mask)
486
+ else:
487
+ logw_ = torch.log(w + 1e-6) * x_mask
488
+ logw = self.dp(x, x_mask, g=g)
489
+ l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
490
+
491
+ # expand prior
492
+ m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
493
+ logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
494
+
495
+ z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
496
+ o = self.dec(z_slice, g=g)
497
+ return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
498
+
499
+ def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
500
+ x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
501
+ if self.n_speakers > 0:
502
+ g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
503
+ else:
504
+ g = None
505
+
506
+ if self.use_sdp:
507
+ logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
508
+ else:
509
+ logw = self.dp(x, x_mask, g=g)
510
+ w = torch.exp(logw) * x_mask * length_scale
511
+ w_ceil = torch.ceil(w)
512
+ y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
513
+ y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
514
+ attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
515
+ attn = commons.generate_path(w_ceil, attn_mask)
516
+
517
+ m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
518
+ logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
519
+
520
+ z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
521
+ z = self.flow(z_p, y_mask, g=g, reverse=True)
522
+ o = self.dec((z * y_mask)[:,:,:max_len], g=g)
523
+ return o, attn, y_mask, (z, z_p, m_p, logs_p)
524
+
525
+ def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
526
+ assert self.n_speakers > 0, "n_speakers have to be larger than 0."
527
+ g_src = self.emb_g(sid_src).unsqueeze(-1)
528
+ g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
529
+ z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
530
+ z_p = self.flow(z, y_mask, g=g_src)
531
+ z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
532
+ o_hat = self.dec(z_hat * y_mask, g=g_tgt)
533
+ return o_hat, y_mask, (z, z_p, z_hat)
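
One step in `SynthesizerTrn.forward` above that is easy to misread: inside the `no_grad` block, `neg_cent1..4` are just an expanded form of the Gaussian log-likelihood of the flowed latents under the text-side prior, summed over channels, which `monotonic_align.maximum_path` then maximizes over monotonic alignments. Written out (a reading of the code, using the code's own variable names):

$$
\mathrm{neg\_cent}[t_y, t_x] \;=\; \sum_{d} \log \mathcal{N}\!\big(z_p[d,t_y];\; m_p[d,t_x],\; e^{2\,\mathrm{logs}_p[d,t_x]}\big)
\;=\; \sum_{d}\Big[-\tfrac{1}{2}\log 2\pi \;-\; \mathrm{logs}_p \;-\; \tfrac{1}{2}\,(z_p-m_p)^2\, e^{-2\,\mathrm{logs}_p}\Big]
$$

Expanding the square gives exactly the four `neg_cent` terms computed in the code.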
models_infer.py ADDED
@@ -0,0 +1,402 @@
1
+ import math
2
+ import torch
3
+ from torch import nn
4
+ from torch.nn import functional as F
5
+
6
+ import commons
7
+ import modules
8
+ import attentions
9
+
10
+ from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
11
+ from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
12
+ from commons import init_weights, get_padding
13
+
14
+
15
+ class StochasticDurationPredictor(nn.Module):
16
+ def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
17
+ super().__init__()
18
+ filter_channels = in_channels # it needs to be removed in a future version.
19
+ self.in_channels = in_channels
20
+ self.filter_channels = filter_channels
21
+ self.kernel_size = kernel_size
22
+ self.p_dropout = p_dropout
23
+ self.n_flows = n_flows
24
+ self.gin_channels = gin_channels
25
+
26
+ self.log_flow = modules.Log()
27
+ self.flows = nn.ModuleList()
28
+ self.flows.append(modules.ElementwiseAffine(2))
29
+ for i in range(n_flows):
30
+ self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
31
+ self.flows.append(modules.Flip())
32
+
33
+ self.post_pre = nn.Conv1d(1, filter_channels, 1)
34
+ self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
35
+ self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
36
+ self.post_flows = nn.ModuleList()
37
+ self.post_flows.append(modules.ElementwiseAffine(2))
38
+ for i in range(4):
39
+ self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
40
+ self.post_flows.append(modules.Flip())
41
+
42
+ self.pre = nn.Conv1d(in_channels, filter_channels, 1)
43
+ self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
44
+ self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
45
+ if gin_channels != 0:
46
+ self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
47
+
48
+ def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
49
+ x = torch.detach(x)
50
+ x = self.pre(x)
51
+ if g is not None:
52
+ g = torch.detach(g)
53
+ x = x + self.cond(g)
54
+ x = self.convs(x, x_mask)
55
+ x = self.proj(x) * x_mask
56
+
57
+ if not reverse:
58
+ flows = self.flows
59
+ assert w is not None
60
+
61
+ logdet_tot_q = 0
62
+ h_w = self.post_pre(w)
63
+ h_w = self.post_convs(h_w, x_mask)
64
+ h_w = self.post_proj(h_w) * x_mask
65
+ e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
66
+ z_q = e_q
67
+ for flow in self.post_flows:
68
+ z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
69
+ logdet_tot_q += logdet_q
70
+ z_u, z1 = torch.split(z_q, [1, 1], 1)
71
+ u = torch.sigmoid(z_u) * x_mask
72
+ z0 = (w - u) * x_mask
73
+ logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
74
+ logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
75
+
76
+ logdet_tot = 0
77
+ z0, logdet = self.log_flow(z0, x_mask)
78
+ logdet_tot += logdet
79
+ z = torch.cat([z0, z1], 1)
80
+ for flow in flows:
81
+ z, logdet = flow(z, x_mask, g=x, reverse=reverse)
82
+ logdet_tot = logdet_tot + logdet
83
+ nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
84
+ return nll + logq # [b]
85
+ else:
86
+ flows = list(reversed(self.flows))
87
+ flows = flows[:-2] + [flows[-1]] # remove a useless vflow
88
+ z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
89
+ for flow in flows:
90
+ z = flow(z, x_mask, g=x, reverse=reverse)
91
+ z0, z1 = torch.split(z, [1, 1], 1)
92
+ logw = z0
93
+ return logw
94
+
95
+
96
+ class DurationPredictor(nn.Module):
97
+ def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
98
+ super().__init__()
99
+
100
+ self.in_channels = in_channels
101
+ self.filter_channels = filter_channels
102
+ self.kernel_size = kernel_size
103
+ self.p_dropout = p_dropout
104
+ self.gin_channels = gin_channels
105
+
106
+ self.drop = nn.Dropout(p_dropout)
107
+ self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
108
+ self.norm_1 = modules.LayerNorm(filter_channels)
109
+ self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
110
+ self.norm_2 = modules.LayerNorm(filter_channels)
111
+ self.proj = nn.Conv1d(filter_channels, 1, 1)
112
+
113
+ if gin_channels != 0:
114
+ self.cond = nn.Conv1d(gin_channels, in_channels, 1)
115
+
116
+ def forward(self, x, x_mask, g=None):
117
+ x = torch.detach(x)
118
+ if g is not None:
119
+ g = torch.detach(g)
120
+ x = x + self.cond(g)
121
+ x = self.conv_1(x * x_mask)
122
+ x = torch.relu(x)
123
+ x = self.norm_1(x)
124
+ x = self.drop(x)
125
+ x = self.conv_2(x * x_mask)
126
+ x = torch.relu(x)
127
+ x = self.norm_2(x)
128
+ x = self.drop(x)
129
+ x = self.proj(x * x_mask)
130
+ return x * x_mask
131
+
132
+
133
+ class TextEncoder(nn.Module):
134
+ def __init__(self,
135
+ n_vocab,
136
+ out_channels,
137
+ hidden_channels,
138
+ filter_channels,
139
+ n_heads,
140
+ n_layers,
141
+ kernel_size,
142
+ p_dropout):
143
+ super().__init__()
144
+ self.n_vocab = n_vocab
145
+ self.out_channels = out_channels
146
+ self.hidden_channels = hidden_channels
147
+ self.filter_channels = filter_channels
148
+ self.n_heads = n_heads
149
+ self.n_layers = n_layers
150
+ self.kernel_size = kernel_size
151
+ self.p_dropout = p_dropout
152
+
153
+ self.emb = nn.Embedding(n_vocab, hidden_channels)
154
+ nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
155
+
156
+ self.encoder = attentions.Encoder(
157
+ hidden_channels,
158
+ filter_channels,
159
+ n_heads,
160
+ n_layers,
161
+ kernel_size,
162
+ p_dropout)
163
+ self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
164
+
165
+ def forward(self, x, x_lengths):
166
+ x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
167
+ x = torch.transpose(x, 1, -1) # [b, h, t]
168
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
169
+
170
+ x = self.encoder(x * x_mask, x_mask)
171
+ stats = self.proj(x) * x_mask
172
+
173
+ m, logs = torch.split(stats, self.out_channels, dim=1)
174
+ return x, m, logs, x_mask
175
+
176
+
177
+ class ResidualCouplingBlock(nn.Module):
178
+ def __init__(self,
179
+ channels,
180
+ hidden_channels,
181
+ kernel_size,
182
+ dilation_rate,
183
+ n_layers,
184
+ n_flows=4,
185
+ gin_channels=0):
186
+ super().__init__()
187
+ self.channels = channels
188
+ self.hidden_channels = hidden_channels
189
+ self.kernel_size = kernel_size
190
+ self.dilation_rate = dilation_rate
191
+ self.n_layers = n_layers
192
+ self.n_flows = n_flows
193
+ self.gin_channels = gin_channels
194
+
195
+ self.flows = nn.ModuleList()
196
+ for i in range(n_flows):
197
+ self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
198
+ self.flows.append(modules.Flip())
199
+
200
+ def forward(self, x, x_mask, g=None, reverse=False):
201
+ if not reverse:
202
+ for flow in self.flows:
203
+ x, _ = flow(x, x_mask, g=g, reverse=reverse)
204
+ else:
205
+ for flow in reversed(self.flows):
206
+ x = flow(x, x_mask, g=g, reverse=reverse)
207
+ return x
208
+
209
+
210
+ class PosteriorEncoder(nn.Module):
211
+ def __init__(self,
212
+ in_channels,
213
+ out_channels,
214
+ hidden_channels,
215
+ kernel_size,
216
+ dilation_rate,
217
+ n_layers,
218
+ gin_channels=0):
219
+ super().__init__()
220
+ self.in_channels = in_channels
221
+ self.out_channels = out_channels
222
+ self.hidden_channels = hidden_channels
223
+ self.kernel_size = kernel_size
224
+ self.dilation_rate = dilation_rate
225
+ self.n_layers = n_layers
226
+ self.gin_channels = gin_channels
227
+
228
+ self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
229
+ self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
230
+ self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
231
+
232
+ def forward(self, x, x_lengths, g=None):
233
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
234
+ x = self.pre(x) * x_mask
235
+ x = self.enc(x, x_mask, g=g)
236
+ stats = self.proj(x) * x_mask
237
+ m, logs = torch.split(stats, self.out_channels, dim=1)
238
+ z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
239
+ return z, m, logs, x_mask
240
+
241
+
242
+ class Generator(torch.nn.Module):
243
+ def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
244
+ super(Generator, self).__init__()
245
+ self.num_kernels = len(resblock_kernel_sizes)
246
+ self.num_upsamples = len(upsample_rates)
247
+ self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
248
+ resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
249
+
250
+ self.ups = nn.ModuleList()
251
+ for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
252
+ self.ups.append(weight_norm(
253
+ ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
254
+ k, u, padding=(k-u)//2)))
255
+
256
+ self.resblocks = nn.ModuleList()
257
+ for i in range(len(self.ups)):
258
+ ch = upsample_initial_channel//(2**(i+1))
259
+ for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
260
+ self.resblocks.append(resblock(ch, k, d))
261
+
262
+ self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
263
+ self.ups.apply(init_weights)
264
+
265
+ if gin_channels != 0:
266
+ self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
267
+
268
+ def forward(self, x, g=None):
269
+ x = self.conv_pre(x)
270
+ if g is not None:
271
+ x = x + self.cond(g)
272
+
273
+ for i in range(self.num_upsamples):
274
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
275
+ x = self.ups[i](x)
276
+ xs = None
277
+ for j in range(self.num_kernels):
278
+ if xs is None:
279
+ xs = self.resblocks[i*self.num_kernels+j](x)
280
+ else:
281
+ xs += self.resblocks[i*self.num_kernels+j](x)
282
+ x = xs / self.num_kernels
283
+ x = F.leaky_relu(x)
284
+ x = self.conv_post(x)
285
+ x = torch.tanh(x)
286
+
287
+ return x
288
+
289
+ def remove_weight_norm(self):
290
+ print('Removing weight norm...')
291
+ for l in self.ups:
292
+ remove_weight_norm(l)
293
+ for l in self.resblocks:
294
+ l.remove_weight_norm()
295
+
296
+
297
+
298
+ class SynthesizerTrn(nn.Module):
299
+ """
300
+ Synthesizer for Training (this copy keeps only the inference and voice-conversion paths)
301
+ """
302
+
303
+ def __init__(self,
304
+ n_vocab,
305
+ spec_channels,
306
+ segment_size,
307
+ inter_channels,
308
+ hidden_channels,
309
+ filter_channels,
310
+ n_heads,
311
+ n_layers,
312
+ kernel_size,
313
+ p_dropout,
314
+ resblock,
315
+ resblock_kernel_sizes,
316
+ resblock_dilation_sizes,
317
+ upsample_rates,
318
+ upsample_initial_channel,
319
+ upsample_kernel_sizes,
320
+ n_speakers=0,
321
+ gin_channels=0,
322
+ use_sdp=True,
323
+ **kwargs):
324
+
325
+ super().__init__()
326
+ self.n_vocab = n_vocab
327
+ self.spec_channels = spec_channels
328
+ self.inter_channels = inter_channels
329
+ self.hidden_channels = hidden_channels
330
+ self.filter_channels = filter_channels
331
+ self.n_heads = n_heads
332
+ self.n_layers = n_layers
333
+ self.kernel_size = kernel_size
334
+ self.p_dropout = p_dropout
335
+ self.resblock = resblock
336
+ self.resblock_kernel_sizes = resblock_kernel_sizes
337
+ self.resblock_dilation_sizes = resblock_dilation_sizes
338
+ self.upsample_rates = upsample_rates
339
+ self.upsample_initial_channel = upsample_initial_channel
340
+ self.upsample_kernel_sizes = upsample_kernel_sizes
341
+ self.segment_size = segment_size
342
+ self.n_speakers = n_speakers
343
+ self.gin_channels = gin_channels
344
+
345
+ self.use_sdp = use_sdp
346
+
347
+ self.enc_p = TextEncoder(n_vocab,
348
+ inter_channels,
349
+ hidden_channels,
350
+ filter_channels,
351
+ n_heads,
352
+ n_layers,
353
+ kernel_size,
354
+ p_dropout)
355
+ self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
356
+ self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
357
+ self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
358
+
359
+ if use_sdp:
360
+ self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
361
+ else:
362
+ self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
363
+
364
+ if n_speakers > 1:
365
+ self.emb_g = nn.Embedding(n_speakers, gin_channels)
366
+
367
+ def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
368
+ x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
369
+ if self.n_speakers > 0:
370
+ g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
371
+ else:
372
+ g = None
373
+
374
+ if self.use_sdp:
375
+ logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
376
+ else:
377
+ logw = self.dp(x, x_mask, g=g)
378
+ w = torch.exp(logw) * x_mask * length_scale
379
+ w_ceil = torch.ceil(w)
380
+ y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
381
+ y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
382
+ attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
383
+ attn = commons.generate_path(w_ceil, attn_mask)
384
+
385
+ m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
386
+ logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
387
+
388
+ z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
389
+ z = self.flow(z_p, y_mask, g=g, reverse=True)
390
+ o = self.dec((z * y_mask)[:,:,:max_len], g=g)
391
+ return o, attn, y_mask, (z, z_p, m_p, logs_p)
392
+
393
+ def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
394
+ assert self.n_speakers > 0, "n_speakers have to be larger than 0."
395
+ g_src = self.emb_g(sid_src).unsqueeze(-1)
396
+ g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
397
+ z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
398
+ z_p = self.flow(z, y_mask, g=g_src)
399
+ z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
400
+ o_hat = self.dec(z_hat * y_mask, g=g_tgt)
401
+ return o_hat, y_mask, (z, z_p, z_hat)
402
+
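
A rough inference sketch against `models_infer.SynthesizerTrn` (which, unlike `models.py`, does not import `monotonic_align` and drops the training forward pass). All hyperparameters and the dummy symbol ids below are illustrative; the real values come from the fine-tuned checkpoint and its JSON config.

```python
import torch
from models_infer import SynthesizerTrn

# Illustrative hyperparameters only -- load them from the JSON config in practice.
net_g = SynthesizerTrn(
    n_vocab=100, spec_channels=513, segment_size=32,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock="1",
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2], upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
    n_speakers=4, gin_channels=256,
).eval()

# Dummy phoneme ids; real ids come from the text cleaners / symbol table.
x = torch.LongTensor([[1, 5, 9, 2]])
x_lengths = torch.LongTensor([x.size(1)])
sid = torch.LongTensor([0])  # speaker id

with torch.no_grad():
    audio, attn, y_mask, _ = net_g.infer(
        x, x_lengths, sid=sid,
        noise_scale=0.667, noise_scale_w=0.8, length_scale=1.0,
    )
print(audio.shape)  # [1, 1, n_samples]
```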
modules.py ADDED
@@ -0,0 +1,390 @@
1
+ import copy
2
+ import math
3
+ import numpy as np
4
+ import scipy
5
+ import torch
6
+ from torch import nn
7
+ from torch.nn import functional as F
8
+
9
+ from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
10
+ from torch.nn.utils import weight_norm, remove_weight_norm
11
+
12
+ import commons
13
+ from commons import init_weights, get_padding
14
+ from transforms import piecewise_rational_quadratic_transform
15
+
16
+
17
+ LRELU_SLOPE = 0.1
18
+
19
+
20
+ class LayerNorm(nn.Module):
21
+ def __init__(self, channels, eps=1e-5):
22
+ super().__init__()
23
+ self.channels = channels
24
+ self.eps = eps
25
+
26
+ self.gamma = nn.Parameter(torch.ones(channels))
27
+ self.beta = nn.Parameter(torch.zeros(channels))
28
+
29
+ def forward(self, x):
30
+ x = x.transpose(1, -1)
31
+ x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
32
+ return x.transpose(1, -1)
33
+
34
+
35
+ class ConvReluNorm(nn.Module):
36
+ def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
37
+ super().__init__()
38
+ self.in_channels = in_channels
39
+ self.hidden_channels = hidden_channels
40
+ self.out_channels = out_channels
41
+ self.kernel_size = kernel_size
42
+ self.n_layers = n_layers
43
+ self.p_dropout = p_dropout
44
+ assert n_layers > 1, "Number of layers should be larger than 1."
45
+
46
+ self.conv_layers = nn.ModuleList()
47
+ self.norm_layers = nn.ModuleList()
48
+ self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
49
+ self.norm_layers.append(LayerNorm(hidden_channels))
50
+ self.relu_drop = nn.Sequential(
51
+ nn.ReLU(),
52
+ nn.Dropout(p_dropout))
53
+ for _ in range(n_layers-1):
54
+ self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
55
+ self.norm_layers.append(LayerNorm(hidden_channels))
56
+ self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
57
+ self.proj.weight.data.zero_()
58
+ self.proj.bias.data.zero_()
59
+
60
+ def forward(self, x, x_mask):
61
+ x_org = x
62
+ for i in range(self.n_layers):
63
+ x = self.conv_layers[i](x * x_mask)
64
+ x = self.norm_layers[i](x)
65
+ x = self.relu_drop(x)
66
+ x = x_org + self.proj(x)
67
+ return x * x_mask
68
+
69
+
70
+ class DDSConv(nn.Module):
71
+ """
72
+ Dilated and Depth-Separable Convolution
73
+ """
74
+ def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
75
+ super().__init__()
76
+ self.channels = channels
77
+ self.kernel_size = kernel_size
78
+ self.n_layers = n_layers
79
+ self.p_dropout = p_dropout
80
+
81
+ self.drop = nn.Dropout(p_dropout)
82
+ self.convs_sep = nn.ModuleList()
83
+ self.convs_1x1 = nn.ModuleList()
84
+ self.norms_1 = nn.ModuleList()
85
+ self.norms_2 = nn.ModuleList()
86
+ for i in range(n_layers):
87
+ dilation = kernel_size ** i
88
+ padding = (kernel_size * dilation - dilation) // 2
89
+ self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
90
+ groups=channels, dilation=dilation, padding=padding
91
+ ))
92
+ self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
93
+ self.norms_1.append(LayerNorm(channels))
94
+ self.norms_2.append(LayerNorm(channels))
95
+
96
+ def forward(self, x, x_mask, g=None):
97
+ if g is not None:
98
+ x = x + g
99
+ for i in range(self.n_layers):
100
+ y = self.convs_sep[i](x * x_mask)
101
+ y = self.norms_1[i](y)
102
+ y = F.gelu(y)
103
+ y = self.convs_1x1[i](y)
104
+ y = self.norms_2[i](y)
105
+ y = F.gelu(y)
106
+ y = self.drop(y)
107
+ x = x + y
108
+ return x * x_mask
109
+
110
+
111
+ class WN(torch.nn.Module):
112
+ def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
113
+ super(WN, self).__init__()
114
+ assert(kernel_size % 2 == 1)
115
+ self.hidden_channels = hidden_channels
116
+ self.kernel_size = kernel_size
117
+ self.dilation_rate = dilation_rate
118
+ self.n_layers = n_layers
119
+ self.gin_channels = gin_channels
120
+ self.p_dropout = p_dropout
121
+
122
+ self.in_layers = torch.nn.ModuleList()
123
+ self.res_skip_layers = torch.nn.ModuleList()
124
+ self.drop = nn.Dropout(p_dropout)
125
+
126
+ if gin_channels != 0:
127
+ cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
128
+ self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
129
+
130
+ for i in range(n_layers):
131
+ dilation = dilation_rate ** i
132
+ padding = int((kernel_size * dilation - dilation) / 2)
133
+ in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
134
+ dilation=dilation, padding=padding)
135
+ in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
136
+ self.in_layers.append(in_layer)
137
+
138
+ # last one is not necessary
139
+ if i < n_layers - 1:
140
+ res_skip_channels = 2 * hidden_channels
141
+ else:
142
+ res_skip_channels = hidden_channels
143
+
144
+ res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
145
+ res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
146
+ self.res_skip_layers.append(res_skip_layer)
147
+
148
+ def forward(self, x, x_mask, g=None, **kwargs):
149
+ output = torch.zeros_like(x)
150
+ n_channels_tensor = torch.IntTensor([self.hidden_channels])
151
+
152
+ if g is not None:
153
+ g = self.cond_layer(g)
154
+
155
+ for i in range(self.n_layers):
156
+ x_in = self.in_layers[i](x)
157
+ if g is not None:
158
+ cond_offset = i * 2 * self.hidden_channels
159
+ g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
160
+ else:
161
+ g_l = torch.zeros_like(x_in)
162
+
163
+ acts = commons.fused_add_tanh_sigmoid_multiply(
164
+ x_in,
165
+ g_l,
166
+ n_channels_tensor)
167
+ acts = self.drop(acts)
168
+
169
+ res_skip_acts = self.res_skip_layers[i](acts)
170
+ if i < self.n_layers - 1:
171
+ res_acts = res_skip_acts[:,:self.hidden_channels,:]
172
+ x = (x + res_acts) * x_mask
173
+ output = output + res_skip_acts[:,self.hidden_channels:,:]
174
+ else:
175
+ output = output + res_skip_acts
176
+ return output * x_mask
177
+
178
+ def remove_weight_norm(self):
179
+ if self.gin_channels != 0:
180
+ torch.nn.utils.remove_weight_norm(self.cond_layer)
181
+ for l in self.in_layers:
182
+ torch.nn.utils.remove_weight_norm(l)
183
+ for l in self.res_skip_layers:
184
+ torch.nn.utils.remove_weight_norm(l)
185
+
186
+
187
+ class ResBlock1(torch.nn.Module):
188
+ def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
189
+ super(ResBlock1, self).__init__()
190
+ self.convs1 = nn.ModuleList([
191
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
192
+ padding=get_padding(kernel_size, dilation[0]))),
193
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
194
+ padding=get_padding(kernel_size, dilation[1]))),
195
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
196
+ padding=get_padding(kernel_size, dilation[2])))
197
+ ])
198
+ self.convs1.apply(init_weights)
199
+
200
+ self.convs2 = nn.ModuleList([
201
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
202
+ padding=get_padding(kernel_size, 1))),
203
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
204
+ padding=get_padding(kernel_size, 1))),
205
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
206
+ padding=get_padding(kernel_size, 1)))
207
+ ])
208
+ self.convs2.apply(init_weights)
209
+
210
+ def forward(self, x, x_mask=None):
211
+ for c1, c2 in zip(self.convs1, self.convs2):
212
+ xt = F.leaky_relu(x, LRELU_SLOPE)
213
+ if x_mask is not None:
214
+ xt = xt * x_mask
215
+ xt = c1(xt)
216
+ xt = F.leaky_relu(xt, LRELU_SLOPE)
217
+ if x_mask is not None:
218
+ xt = xt * x_mask
219
+ xt = c2(xt)
220
+ x = xt + x
221
+ if x_mask is not None:
222
+ x = x * x_mask
223
+ return x
224
+
225
+ def remove_weight_norm(self):
226
+ for l in self.convs1:
227
+ remove_weight_norm(l)
228
+ for l in self.convs2:
229
+ remove_weight_norm(l)
230
+
231
+
232
+ class ResBlock2(torch.nn.Module):
233
+ def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
234
+ super(ResBlock2, self).__init__()
235
+ self.convs = nn.ModuleList([
236
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
237
+ padding=get_padding(kernel_size, dilation[0]))),
238
+ weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
239
+ padding=get_padding(kernel_size, dilation[1])))
240
+ ])
241
+ self.convs.apply(init_weights)
242
+
243
+ def forward(self, x, x_mask=None):
244
+ for c in self.convs:
245
+ xt = F.leaky_relu(x, LRELU_SLOPE)
246
+ if x_mask is not None:
247
+ xt = xt * x_mask
248
+ xt = c(xt)
249
+ x = xt + x
250
+ if x_mask is not None:
251
+ x = x * x_mask
252
+ return x
253
+
254
+ def remove_weight_norm(self):
255
+ for l in self.convs:
256
+ remove_weight_norm(l)
257
+
258
+
259
+ class Log(nn.Module):
260
+ def forward(self, x, x_mask, reverse=False, **kwargs):
261
+ if not reverse:
262
+ y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
263
+ logdet = torch.sum(-y, [1, 2])
264
+ return y, logdet
265
+ else:
266
+ x = torch.exp(x) * x_mask
267
+ return x
268
+
269
+
270
+ class Flip(nn.Module):
271
+ def forward(self, x, *args, reverse=False, **kwargs):
272
+ x = torch.flip(x, [1])
273
+ if not reverse:
274
+ logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
275
+ return x, logdet
276
+ else:
277
+ return x
278
+
279
+
280
+ class ElementwiseAffine(nn.Module):
281
+ def __init__(self, channels):
282
+ super().__init__()
283
+ self.channels = channels
284
+ self.m = nn.Parameter(torch.zeros(channels,1))
285
+ self.logs = nn.Parameter(torch.zeros(channels,1))
286
+
287
+ def forward(self, x, x_mask, reverse=False, **kwargs):
288
+ if not reverse:
289
+ y = self.m + torch.exp(self.logs) * x
290
+ y = y * x_mask
291
+ logdet = torch.sum(self.logs * x_mask, [1,2])
292
+ return y, logdet
293
+ else:
294
+ x = (x - self.m) * torch.exp(-self.logs) * x_mask
295
+ return x
296
+
297
+
298
+ class ResidualCouplingLayer(nn.Module):
299
+ def __init__(self,
300
+ channels,
301
+ hidden_channels,
302
+ kernel_size,
303
+ dilation_rate,
304
+ n_layers,
305
+ p_dropout=0,
306
+ gin_channels=0,
307
+ mean_only=False):
308
+ assert channels % 2 == 0, "channels should be divisible by 2"
309
+ super().__init__()
310
+ self.channels = channels
311
+ self.hidden_channels = hidden_channels
312
+ self.kernel_size = kernel_size
313
+ self.dilation_rate = dilation_rate
314
+ self.n_layers = n_layers
315
+ self.half_channels = channels // 2
316
+ self.mean_only = mean_only
317
+
318
+ self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
319
+ self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
320
+ self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
321
+ self.post.weight.data.zero_()
322
+ self.post.bias.data.zero_()
323
+
324
+ def forward(self, x, x_mask, g=None, reverse=False):
325
+ x0, x1 = torch.split(x, [self.half_channels]*2, 1)
326
+ h = self.pre(x0) * x_mask
327
+ h = self.enc(h, x_mask, g=g)
328
+ stats = self.post(h) * x_mask
329
+ if not self.mean_only:
330
+ m, logs = torch.split(stats, [self.half_channels]*2, 1)
331
+ else:
332
+ m = stats
333
+ logs = torch.zeros_like(m)
334
+
335
+ if not reverse:
336
+ x1 = m + x1 * torch.exp(logs) * x_mask
337
+ x = torch.cat([x0, x1], 1)
338
+ logdet = torch.sum(logs, [1,2])
339
+ return x, logdet
340
+ else:
341
+ x1 = (x1 - m) * torch.exp(-logs) * x_mask
342
+ x = torch.cat([x0, x1], 1)
343
+ return x
344
+
345
+
346
+ class ConvFlow(nn.Module):
347
+ def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
348
+ super().__init__()
349
+ self.in_channels = in_channels
350
+ self.filter_channels = filter_channels
351
+ self.kernel_size = kernel_size
352
+ self.n_layers = n_layers
353
+ self.num_bins = num_bins
354
+ self.tail_bound = tail_bound
355
+ self.half_channels = in_channels // 2
356
+
357
+ self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
358
+ self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
359
+ self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
360
+ self.proj.weight.data.zero_()
361
+ self.proj.bias.data.zero_()
362
+
363
+ def forward(self, x, x_mask, g=None, reverse=False):
364
+ x0, x1 = torch.split(x, [self.half_channels]*2, 1)
365
+ h = self.pre(x0)
366
+ h = self.convs(h, x_mask, g=g)
367
+ h = self.proj(h) * x_mask
368
+
369
+ b, c, t = x0.shape
370
+ h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
371
+
372
+ unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
373
+ unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
374
+ unnormalized_derivatives = h[..., 2 * self.num_bins:]
375
+
376
+ x1, logabsdet = piecewise_rational_quadratic_transform(x1,
377
+ unnormalized_widths,
378
+ unnormalized_heights,
379
+ unnormalized_derivatives,
380
+ inverse=reverse,
381
+ tails='linear',
382
+ tail_bound=self.tail_bound
383
+ )
384
+
385
+ x = torch.cat([x0, x1], 1) * x_mask
386
+ logdet = torch.sum(logabsdet * x_mask, [1,2])
387
+ if not reverse:
388
+ return x, logdet
389
+ else:
390
+ return x
preprocess_v2.py ADDED
@@ -0,0 +1,151 @@
1
+ import os
2
+ import argparse
3
+ import json
4
+ if __name__ == "__main__":
5
+ parser = argparse.ArgumentParser()
6
+ parser.add_argument("--add_auxiliary_data", type=bool, help="Whether to add extra data as fine-tuning helper")
7
+ parser.add_argument("--languages", default="CJE")
8
+ args = parser.parse_args()
9
+ if args.languages == "CJE":
10
+ langs = ["[ZH]", "[JA]", "[EN]"]
11
+ elif args.languages == "CJ":
12
+ langs = ["[ZH]", "[JA]"]
13
+ elif args.languages == "C":
14
+ langs = ["[ZH]"]
15
+ new_annos = []
16
+ # Source 1: transcribed short audios
17
+ if os.path.exists("short_character_anno.txt"):
18
+ with open("short_character_anno.txt", 'r', encoding='utf-8') as f:
19
+ short_character_anno = f.readlines()
20
+ new_annos += short_character_anno
21
+ # Source 2: transcribed long audio segments
22
+ if os.path.exists("long_character_anno.txt"):
23
+ with open("long_character_anno.txt", 'r', encoding='utf-8') as f:
24
+ long_character_anno = f.readlines()
25
+ new_annos += long_character_anno
26
+
27
+ # Get all speaker names
28
+ speakers = []
29
+ for line in new_annos:
30
+ path, speaker, text = line.split("|")
31
+ if speaker not in speakers:
32
+ speakers.append(speaker)
33
+ assert (len(speakers) != 0), "No audio file found. Please check your uploaded file structure."
34
+ # Source 3 (Optional): sampled audios as extra training helpers
35
+ if args.add_auxiliary_data:
36
+ with open("sampled_audio4ft.txt", 'r', encoding='utf-8') as f:
37
+ old_annos = f.readlines()
38
+ # filter old_annos according to supported languages
39
+ filtered_old_annos = []
40
+ for line in old_annos:
41
+ for lang in langs:
42
+ if lang in line:
43
+ filtered_old_annos.append(line)
44
+ old_annos = filtered_old_annos
45
+ for line in old_annos:
46
+ path, speaker, text = line.split("|")
47
+ if speaker not in speakers:
48
+ speakers.append(speaker)
49
+ num_old_voices = len(old_annos)
50
+ num_new_voices = len(new_annos)
51
+ # STEP 1: balance number of new & old voices
52
+ cc_duplicate = num_old_voices // num_new_voices
53
+ if cc_duplicate == 0:
54
+ cc_duplicate = 1
55
+
56
+
57
+ # STEP 2: modify config file
58
+ with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f:
59
+ hps = json.load(f)
60
+
61
+ # assign ids to new speakers
62
+ speaker2id = {}
63
+ for i, speaker in enumerate(speakers):
64
+ speaker2id[speaker] = i
65
+ # modify n_speakers
66
+ hps['data']["n_speakers"] = len(speakers)
67
+ # overwrite speaker names
68
+ hps['speakers'] = speaker2id
69
+ hps['train']['log_interval'] = 100
70
+ hps['train']['eval_interval'] = 1000
71
+ hps['train']['batch_size'] = 16
72
+ hps['data']['training_files'] = "final_annotation_train.txt"
73
+ hps['data']['validation_files'] = "final_annotation_val.txt"
74
+ # save modified config
75
+ with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f:
76
+ json.dump(hps, f, indent=2)
77
+
78
+ # STEP 3: clean annotations, replace speaker names with assigned speaker IDs
79
+ import text
80
+ cleaned_new_annos = []
81
+ for i, line in enumerate(new_annos):
82
+ path, speaker, txt = line.split("|")
83
+ if len(txt) > 150:
84
+ continue
85
+ cleaned_text = text._clean_text(txt, hps['data']['text_cleaners'])
86
+ cleaned_text += "\n" if not cleaned_text.endswith("\n") else ""
87
+ cleaned_new_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text)
88
+ cleaned_old_annos = []
89
+ for i, line in enumerate(old_annos):
90
+ path, speaker, txt = line.split("|")
91
+ if len(txt) > 150:
92
+ continue
93
+ cleaned_text = text._clean_text(txt, hps['data']['text_cleaners'])
94
+ cleaned_text += "\n" if not cleaned_text.endswith("\n") else ""
95
+ cleaned_old_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text)
96
+ # merge with old annotation
97
+ final_annos = cleaned_old_annos + cc_duplicate * cleaned_new_annos
98
+ # save annotation file
99
+ with open("final_annotation_train.txt", 'w', encoding='utf-8') as f:
100
+ for line in final_annos:
101
+ f.write(line)
102
+ # save annotation file for validation
103
+ with open("final_annotation_val.txt", 'w', encoding='utf-8') as f:
104
+ for line in cleaned_new_annos:
105
+ f.write(line)
106
+ print("finished")
107
+ else:
108
+ # Do not add extra helper data
109
+ # STEP 1: modify config file
110
+ with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f:
111
+ hps = json.load(f)
112
+
113
+ # assign ids to new speakers
114
+ speaker2id = {}
115
+ for i, speaker in enumerate(speakers):
116
+ speaker2id[speaker] = i
117
+ # modify n_speakers
118
+ hps['data']["n_speakers"] = len(speakers)
119
+ # overwrite speaker names
120
+ hps['speakers'] = speaker2id
121
+ hps['train']['log_interval'] = 10
122
+ hps['train']['eval_interval'] = 100
123
+ hps['train']['batch_size'] = 16
124
+ hps['data']['training_files'] = "final_annotation_train.txt"
125
+ hps['data']['validation_files'] = "final_annotation_val.txt"
126
+ # save modified config
127
+ with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f:
128
+ json.dump(hps, f, indent=2)
129
+
130
+ # STEP 2: clean annotations, replace speaker names with assigned speaker IDs
131
+ import text
132
+
133
+ cleaned_new_annos = []
134
+ for i, line in enumerate(new_annos):
135
+ path, speaker, txt = line.split("|")
136
+ if len(txt) > 150:
137
+ continue
138
+ cleaned_text = text._clean_text(txt, hps['data']['text_cleaners']).replace("[ZH]", "")
139
+ cleaned_text += "\n" if not cleaned_text.endswith("\n") else ""
140
+ cleaned_new_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text)
141
+
142
+ final_annos = cleaned_new_annos
143
+ # save annotation file
144
+ with open("final_annotation_train.txt", 'w', encoding='utf-8') as f:
145
+ for line in final_annos:
146
+ f.write(line)
147
+ # save annotation file for validation
148
+ with open("final_annotation_val.txt", 'w', encoding='utf-8') as f:
149
+ for line in cleaned_new_annos:
150
+ f.write(line)
151
+ print("finished")
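
For reference, every annotation line that `preprocess_v2.py` parses (from `short_character_anno.txt`, `long_character_anno.txt`, or `sampled_audio4ft.txt`) follows the `path|speaker|text` layout split on `|` above; the script itself is driven by the two flags defined at its top, e.g. `python preprocess_v2.py --languages CJ --add_auxiliary_data True`. A tiny sketch of that line format (the concrete path and text are made-up examples):

```python
# Hypothetical annotation line in the format preprocess_v2.py expects (path and text are made up):
example_line = "./custom_character_voice/Diana/processed_0.wav|Diana|[ZH]你好,很高兴见到你。[ZH]\n"

# This mirrors the parsing done in the script above.
path, speaker, text = example_line.split("|")
assert speaker == "Diana"
```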
rearrange_speaker.py ADDED
@@ -0,0 +1,37 @@
1
+ import torch
2
+ import argparse
3
+ import json
4
+
5
+ if __name__ == "__main__":
6
+ parser = argparse.ArgumentParser()
7
+ parser.add_argument("--model_dir", type=str, default="./OUTPUT_MODEL/G_latest.pth")
8
+ parser.add_argument("--config_dir", type=str, default="./configs/modified_finetune_speaker.json")
9
+ args = parser.parse_args()
10
+
11
+ model_sd = torch.load(args.model_dir, map_location='cpu')
12
+ with open(args.config_dir, 'r', encoding='utf-8') as f:
13
+ hps = json.load(f)
14
+
15
+ valid_speakers = list(hps['speakers'].keys())
16
+ if hps['data']['n_speakers'] > len(valid_speakers):
17
+ new_emb_g = torch.zeros([len(valid_speakers), 256])
18
+ old_emb_g = model_sd['model']['emb_g.weight']
19
+ for i, speaker in enumerate(valid_speakers):
20
+ new_emb_g[i, :] = old_emb_g[hps['speakers'][speaker], :]
21
+ hps['speakers'][speaker] = i
22
+ hps['data']['n_speakers'] = len(valid_speakers)
23
+ model_sd['model']['emb_g.weight'] = new_emb_g
24
+ with open("./finetune_speaker.json", 'w', encoding='utf-8') as f:
25
+ json.dump(hps, f, indent=2)
26
+ torch.save(model_sd, "./G_latest.pth")
27
+ else:
28
+ with open("./finetune_speaker.json", 'w', encoding='utf-8') as f:
29
+ json.dump(hps, f, indent=2)
30
+ torch.save(model_sd, "./G_latest.pth")
31
+ # save another config file copy in MoeGoe format
32
+ hps['speakers'] = valid_speakers
33
+ with open("./moegoe_config.json", 'w', encoding='utf-8') as f:
34
+ json.dump(hps, f, indent=2)
35
+
36
+
37
+
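
`rearrange_speaker.py` prunes the speaker embedding table so it only keeps the speakers listed in the fine-tune config (when the checkpoint has more rows than needed); it is run with the defaults shown in its argparse setup, e.g. `python rearrange_speaker.py --model_dir ./OUTPUT_MODEL/G_latest.pth --config_dir ./configs/modified_finetune_speaker.json`. A small sanity-check sketch after it has written `G_latest.pth` and `finetune_speaker.json`, assuming checkpoint and config come from the same run:

```python
import json
import torch

sd = torch.load("./G_latest.pth", map_location="cpu")
with open("./finetune_speaker.json", "r", encoding="utf-8") as f:
    hps = json.load(f)

# After pruning there should be exactly one embedding row per remaining speaker.
assert sd["model"]["emb_g.weight"].shape[0] == hps["data"]["n_speakers"]
print(list(hps["speakers"].keys()))
```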
requirements.txt ADDED
@@ -0,0 +1,24 @@
1
+ Cython
2
+ librosa==0.9.1
3
+ numpy
4
+ scipy
5
+ tensorboard
6
+ torch --extra-index-url https://download.pytorch.org/whl/cu116
7
+ torchvision --extra-index-url https://download.pytorch.org/whl/cu116
8
+ torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
9
+ unidecode
10
+ pyopenjtalk
11
+ jamo
12
+ pypinyin
13
+ jieba
14
+ protobuf
15
+ cn2an
16
+ inflect
17
+ eng_to_ipa
18
+ ko_pron
19
+ indic_transliteration==2.3.37
20
+ num_thai==0.0.5
21
+ opencc==1.1.1
22
+ demucs
23
+ openai-whisper
24
+ gradio
sampled_audio4ft.txt ADDED
The diff for this file is too large to render. See raw diff