LH-Tech-AI committed
Commit d51947d · 0 Parent(s)

Duplicate from LH-Tech-AI/Flare-TTS-28M
Files changed (6)
  1. .gitattributes +35 -0
  2. README.md +49 -0
  3. config.json +198 -0
  4. model.pth +3 -0
  5. prepare.sh +7 -0
  6. train_glowtts.py +71 -0
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
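The patterns above route large binary formats through Git LFS, which is why the 344 MB `model.pth` below is stored as a pointer while `config.json` stays in plain Git. For the simple extension globs used here, Python's `fnmatch` approximates the matching; note that full `.gitattributes` semantics follow gitignore-style rules, and `is_lfs_tracked` is a hypothetical helper for illustration, not part of any tool:

```python
from fnmatch import fnmatch

# A subset of the extension globs from the .gitattributes above.
lfs_patterns = ["*.7z", "*.bin", "*.ckpt", "*.pth", "*.safetensors", "*.zip"]

def is_lfs_tracked(filename: str) -> bool:
    """Rough check: does any of the LFS globs match this file name?"""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(is_lfs_tracked("model.pth"))    # checkpoint in this repo -> True
print(is_lfs_tracked("config.json"))  # plain text, stored normally -> False
```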
README.md ADDED
@@ -0,0 +1,49 @@
+ ---
+ language:
+ - en
+ pipeline_tag: text-to-speech
+ tags:
+ - tts
+ - flare
+ - open
+ - open-source
+ - small
+ - speech
+ - text-to-speech
+ - tiny
+ - cpu
+ datasets:
+ - keithito/lj_speech
+ ---
+
+ # 🎙️ Flare-TTS 28M
+ Welcome to Flare-TTS 28M, an open-source text-to-speech model with 28 million parameters trained on LJSpeech.
+
+ ## Quality and results
+ The quality is decent: the voice still sounds a bit robotic, but the speech is clearly intelligible.
+ Think of this model as a proof of concept or a first beta.
+ Example:
+ <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/697f2832c2c5e4daa93cece7/vluuHSnp9Ietk7Uk1-hvG.mpga"></audio>
+
+ ## Training process
+ We trained this model for ~300 epochs on a single A6000 GPU for ~24 hours.
+ The full training code can be found in this repo as `prepare.sh` and `train_glowtts.py`. Just run `prepare.sh` to install the dependencies, download LJSpeech, and start training.
+
+ ## Architecture
+ This model was trained with Coqui TTS using the GlowTTS architecture.
+
+ ## Training dataset
+ We trained on the full LJSpeech dataset. Thanks to keithito for this :-)
+
+ ## How to use
+ Once you have the model checkpoint (`model.pth`) and `config.json` on your device, you can generate a sample using:
+ ```bash
+ tts --text "Hello world, this is my first trained TTS model." \
+     --model_path model.pth \
+     --config_path config.json \
+     --out_path output_1.wav
+ ```
+
+ ## Final thoughts
+ It isn't perfect - it's more of a proof of concept. Please don't use this model for production use cases; treat it as something to experiment with.
+ We're happy to share more soon - stay tuned for Flare-TTS v2 :D
config.json ADDED
@@ -0,0 +1,198 @@
+ {
+     "output_path": "/home/ubuntu",
+     "logger_uri": null,
+     "run_name": "run",
+     "project_name": null,
+     "run_description": "\ud83d\udc38Coqui trainer run.",
+     "print_step": 25,
+     "plot_step": 100,
+     "model_param_stats": false,
+     "wandb_entity": null,
+     "dashboard_logger": "tensorboard",
+     "save_on_interrupt": true,
+     "log_model_step": null,
+     "save_step": 10000,
+     "save_n_checkpoints": 5,
+     "save_checkpoints": true,
+     "save_all_best": false,
+     "save_best_after": 0,
+     "target_loss": null,
+     "print_eval": false,
+     "test_delay_epochs": -1,
+     "run_eval": true,
+     "run_eval_steps": null,
+     "distributed_backend": "nccl",
+     "distributed_url": "tcp://localhost:54321",
+     "mixed_precision": true,
+     "precision": "fp16",
+     "epochs": 600,
+     "batch_size": 256,
+     "eval_batch_size": 128,
+     "grad_clip": 5.0,
+     "scheduler_after_epoch": true,
+     "lr": 0.001,
+     "optimizer": "RAdam",
+     "optimizer_params": {
+         "betas": [
+             0.9,
+             0.998
+         ],
+         "weight_decay": 1e-06
+     },
+     "lr_scheduler": "NoamLR",
+     "lr_scheduler_params": {
+         "warmup_steps": 4000
+     },
+     "use_grad_scaler": false,
+     "allow_tf32": false,
+     "cudnn_enable": true,
+     "cudnn_deterministic": false,
+     "cudnn_benchmark": false,
+     "training_seed": 54321,
+     "model": "glow_tts",
+     "num_loader_workers": 4,
+     "num_eval_loader_workers": 2,
+     "use_noise_augment": false,
+     "audio": {
+         "fft_size": 1024,
+         "win_length": 1024,
+         "hop_length": 256,
+         "frame_shift_ms": null,
+         "frame_length_ms": null,
+         "stft_pad_mode": "reflect",
+         "sample_rate": 22050,
+         "resample": false,
+         "preemphasis": 0.0,
+         "ref_level_db": 20,
+         "do_sound_norm": false,
+         "log_func": "np.log10",
+         "do_trim_silence": true,
+         "trim_db": 45,
+         "do_rms_norm": false,
+         "db_level": null,
+         "power": 1.5,
+         "griffin_lim_iters": 60,
+         "num_mels": 80,
+         "mel_fmin": 0.0,
+         "mel_fmax": null,
+         "spec_gain": 20,
+         "do_amp_to_db_linear": true,
+         "do_amp_to_db_mel": true,
+         "pitch_fmax": 640.0,
+         "pitch_fmin": 1.0,
+         "signal_norm": true,
+         "min_level_db": -100,
+         "symmetric_norm": true,
+         "max_norm": 4.0,
+         "clip_norm": true,
+         "stats_path": null
+     },
+     "model_args": {},
+     "_supports_cloning": false,
+     "languages": [
+         "en-us"
+     ],
+     "speakers": [],
+     "use_phonemes": true,
+     "phonemizer": "espeak",
+     "phoneme_language": "en-us",
+     "compute_input_seq_cache": false,
+     "text_cleaner": "phoneme_cleaners",
+     "enable_eos_bos_chars": false,
+     "test_sentences_file": "",
+     "phoneme_cache_path": "/home/ubuntu/phoneme_cache",
+     "characters": {
+         "characters_class": "TTS.tts.utils.text.characters.IPAPhonemes",
+         "vocab_dict": null,
+         "pad": "<PAD>",
+         "eos": "<EOS>",
+         "bos": "<BOS>",
+         "blank": "<BLNK>",
+         "characters": "iy\u0268\u0289\u026fu\u026a\u028f\u028ae\u00f8\u0258\u0259\u0275\u0264o\u025b\u0153\u025c\u025e\u028c\u0254\u00e6\u0250a\u0276\u0251\u0252\u1d7b\u0298\u0253\u01c0\u0257\u01c3\u0284\u01c2\u0260\u01c1\u029bpbtd\u0288\u0256c\u025fk\u0261q\u0262\u0294\u0274\u014b\u0272\u0273n\u0271m\u0299r\u0280\u2c71\u027e\u027d\u0278\u03b2fv\u03b8\u00f0sz\u0283\u0292\u0282\u0290\u00e7\u029dx\u0263\u03c7\u0281\u0127\u0295h\u0266\u026c\u026e\u028b\u0279\u027bj\u0270l\u026d\u028e\u029f\u02c8\u02cc\u02d0\u02d1\u028dw\u0265\u029c\u02a2\u02a1\u0255\u0291\u027a\u0267\u02b2\u0303\u025a\u02de\u026b",
+         "punctuations": "!'(),-.:;? ",
+         "phonemes": null,
+         "is_unique": false,
+         "is_sorted": true
+     },
+     "add_blank": false,
+     "batch_group_size": 0,
+     "loss_masking": null,
+     "min_audio_len": 22050,
+     "max_audio_len": 220500,
+     "min_text_len": 1,
+     "max_text_len": Infinity,
+     "compute_f0": false,
+     "compute_energy": false,
+     "compute_linear_spec": false,
+     "precompute_num_workers": 0,
+     "start_by_longest": false,
+     "shuffle": false,
+     "drop_last": false,
+     "datasets": [
+         {
+             "formatter": "ljspeech",
+             "dataset_name": "",
+             "path": "/home/ubuntu/LJSpeech-1.1/",
+             "meta_file_train": "metadata.csv",
+             "ignored_speakers": null,
+             "language": "",
+             "phonemizer": "",
+             "meta_file_val": "",
+             "meta_file_attn_mask": ""
+         }
+     ],
+     "test_sentences": [
+         "It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
+         "Be a voice, not an echo.",
+         "I'm sorry Dave. I'm afraid I can't do that.",
+         "This cake is great. It's so delicious and moist.",
+         "Prior to November 22, 1963."
+     ],
+     "eval_split_max_size": null,
+     "eval_split_size": 0.01,
+     "use_speaker_weighted_sampler": false,
+     "speaker_weighted_sampler_alpha": 1.0,
+     "use_language_weighted_sampler": false,
+     "language_weighted_sampler_alpha": 1.0,
+     "use_length_weighted_sampler": false,
+     "length_weighted_sampler_alpha": 1.0,
+     "num_chars": 132,
+     "use_encoder_prenet": true,
+     "hidden_channels_enc": 192,
+     "hidden_channels_dec": 192,
+     "hidden_channels_dp": 256,
+     "dropout_p_dp": 0.1,
+     "dropout_p_dec": 0.05,
+     "mean_only": true,
+     "out_channels": 80,
+     "num_flow_blocks_dec": 12,
+     "kernel_size_dec": 5,
+     "dilation_rate": 1,
+     "num_block_layers": 4,
+     "c_in_channels": 0,
+     "num_splits": 4,
+     "num_squeeze": 2,
+     "sigmoid_scale": false,
+     "encoder_type": "rel_pos_transformer",
+     "encoder_params": {
+         "kernel_size": 3,
+         "dropout_p": 0.1,
+         "num_layers": 6,
+         "num_heads": 2,
+         "hidden_channels_ffn": 768,
+         "input_length": null
+     },
+     "d_vector_dim": 0,
+     "data_dep_init_steps": 10,
+     "style_wav_for_test": null,
+     "inference_noise_scale": 0.0,
+     "length_scale": 1.0,
+     "use_speaker_embedding": false,
+     "speakers_file": null,
+     "use_d_vector_file": false,
+     "d_vector_file": null,
+     "min_seq_len": 3,
+     "max_seq_len": 500,
+     "r": 1,
+     "github_branch": "inside_docker"
+ }
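The `audio` block in the config fixes the spectrogram geometry: at a 22050 Hz sample rate with a `hop_length` of 256, the model sees roughly 86 mel frames per second, and `min_audio_len`/`max_audio_len` (22050 and 220500 samples) bound training clips to 1-10 seconds. A quick back-of-the-envelope check (the frame count ignores STFT padding, which depends on `stft_pad_mode`, so treat it as approximate):

```python
# Values taken from the audio section of config.json above.
sample_rate = 22050   # audio.sample_rate
hop_length = 256      # audio.hop_length
num_mels = 80         # audio.num_mels

frames_per_second = sample_rate / hop_length
print(round(frames_per_second, 2))   # ~86.13 mel frames per second

# Clip-length bounds expressed in seconds.
min_seconds = 22050 / sample_rate    # min_audio_len -> 1.0
max_seconds = 220500 / sample_rate   # max_audio_len -> 10.0
print(min_seconds, max_seconds)

# Approximate mel-spectrogram width for the longest allowed clip.
approx_frames = int((max_seconds * sample_rate) // hop_length)
print(num_mels, approx_frames)       # 80 mel bins x ~861 frames
```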
model.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52e49ef2cc522d6f5fad8b6bd6932cfaf8bc42ea22911d9d63ffbdda951a44f3
+ size 343850470
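What Git actually stores for `model.pth` is the three-line pointer above, following the Git LFS pointer-file format (one `key value` pair per line); the ~344 MB checkpoint itself lives in LFS storage. A minimal parser sketch (`parse_lfs_pointer` is a hypothetical helper, not part of any LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:52e49ef2cc522d6f5fad8b6bd6932cfaf8bc42ea22911d9d63ffbdda951a44f3
size 343850470"""

info = parse_lfs_pointer(pointer)
print(info["size"])                   # payload size in bytes: 343850470
print(info["oid"].split(":", 1)[0])   # hash algorithm: sha256
```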
prepare.sh ADDED
@@ -0,0 +1,7 @@
+ pip install git+https://github.com/idiap/coqui-tts.git
+ sudo apt update && sudo apt install espeak -y
+ sudo apt install ffmpeg libavcodec-dev libavformat-dev libavutil-dev -y
+ pip install "coqui-tts[codec]"
+ wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
+ tar -xjf LJSpeech-1.1.tar.bz2
+ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python3 train_glowtts.py
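The last line sets `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` for the training run, which asks PyTorch's CUDA caching allocator to use expandable segments and can reduce memory fragmentation at large batch sizes. The equivalent from inside Python, if you prefer launching the script directly; note the variable must be set before the first CUDA allocation for it to take effect:

```python
import os

# Set before torch touches CUDA; PyTorch reads this on first allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # expandable_segments:True
```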
train_glowtts.py ADDED
@@ -0,0 +1,71 @@
+ import os
+ import inspect
+ from trainer import Trainer, TrainerArgs
+ from TTS.tts.configs.glow_tts_config import GlowTTSConfig
+ from TTS.tts.models.glow_tts import GlowTTS
+ from TTS.tts.configs.shared_configs import BaseDatasetConfig
+ from TTS.tts.datasets import load_tts_samples
+ from TTS.tts.utils.text.tokenizer import TTSTokenizer
+ from TTS.utils.audio import AudioProcessor
+
+ def main():
+     output_path = os.path.dirname(os.path.abspath(__file__))
+
+     dataset_config = BaseDatasetConfig(
+         formatter="ljspeech",
+         meta_file_train="metadata.csv",
+         path=os.path.join(output_path, "LJSpeech-1.1/")
+     )
+
+     config = GlowTTSConfig(
+         batch_size=256,
+         eval_batch_size=128,
+         num_loader_workers=4,
+         num_eval_loader_workers=2,
+         run_eval=True,
+         test_delay_epochs=-1,
+         epochs=600,
+         text_cleaner="phoneme_cleaners",
+         use_phonemes=True,
+         phoneme_language="en-us",
+         phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
+         print_step=25,
+         print_eval=False,
+         mixed_precision=True,
+         output_path=output_path,
+         datasets=[dataset_config],
+         max_audio_len=22050 * 10,
+         min_audio_len=22050 * 1,
+     )
+
+     ap = AudioProcessor(config=config.audio)
+
+     tokenizer, config = TTSTokenizer.init_from_config(config)
+
+     train_samples, eval_samples = load_tts_samples(
+         config,
+         eval_split=True,
+         eval_split_max_size=20,
+     )
+
+     model = GlowTTS(config, ap, tokenizer=tokenizer, speaker_manager=None)
+
+     trainer = Trainer(
+         TrainerArgs(),
+         config,
+         output_path,
+         model=model,
+         train_samples=train_samples,
+         eval_samples=eval_samples,
+         training_assets={'audio_processor': ap},
+     )
+
+     if getattr(trainer, "best_loss", None) is None:
+         trainer.best_loss = {"train_loss": float("inf")}
+     elif isinstance(trainer.best_loss, dict) and trainer.best_loss.get("train_loss") is None:
+         trainer.best_loss["train_loss"] = float("inf")
+
+     trainer.fit()
+
+ if __name__ == "__main__":
+     main()
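In the script above, `load_tts_samples` with the `ljspeech` formatter reads the dataset's `metadata.csv`, whose rows are pipe-separated `id|raw_text|normalized_text`. A minimal stand-in for that parsing step, to show the data shape the trainer consumes (`parse_ljspeech_metadata` is a hypothetical sketch; the real formatter also resolves absolute audio paths and handles edge cases):

```python
import os

def parse_ljspeech_metadata(lines, wav_dir="LJSpeech-1.1/wavs"):
    """Turn metadata.csv rows into {audio_file, text} samples."""
    samples = []
    for line in lines:
        cols = line.strip().split("|")
        file_id, normalized_text = cols[0], cols[2]
        samples.append({
            # Audio clips live in wavs/<id>.wav next to metadata.csv.
            "audio_file": os.path.join(wav_dir, file_id + ".wav"),
            "text": normalized_text,  # train on the normalized column
        })
    return samples

rows = ["LJ001-0001|Printing, in the only sense|Printing, in the only sense"]
samples = parse_ljspeech_metadata(rows)
print(samples[0]["audio_file"])  # .../wavs/LJ001-0001.wav
```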