Respair committed on
Commit 2848f40
1 Parent(s): 1c22955

Delete voices/readme.md

Files changed (1)
  1. voices/readme.md +0 -127
voices/readme.md DELETED
@@ -1,127 +0,0 @@
# Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions

### Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani

> Large-scale pre-trained language models have been shown to be helpful in improving the naturalness of text-to-speech (TTS) models by enabling them to produce more naturalistic prosodic patterns. However, these models are usually word-level or sup-phoneme-level and jointly trained with phonemes, making them inefficient for the downstream TTS task where only phonemes are needed. In this work, we propose a phoneme-level BERT (PL-BERT) with a pretext task of predicting the corresponding graphemes along with the regular masked phoneme predictions. Subjective evaluations show that our phoneme-level BERT encoder has significantly improved the mean opinion scores (MOS) of rated naturalness of synthesized speech compared with the state-of-the-art (SOTA) StyleTTS baseline on out-of-distribution (OOD) texts.

Paper: [https://arxiv.org/abs/2301.08810](https://arxiv.org/abs/2301.08810)

Audio samples: [https://pl-bert.github.io/](https://pl-bert.github.io/)

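For orientation, the pretext task described in the abstract can be pictured as a shared phoneme-level encoder with two prediction heads: one for the masked phonemes and one for the grapheme (word token) each phoneme belongs to. The following is a conceptual sketch only, not the repository's model code; the class and argument names are invented for illustration.
```python
import torch.nn as nn

class PLBERTSketch(nn.Module):
    """Conceptual sketch of the PL-BERT pretext objective (not the repo's code)."""
    def __init__(self, encoder, hidden_size, n_phonemes, n_graphemes):
        super().__init__()
        self.encoder = encoder                                      # e.g. an ALBERT encoder over phoneme ids
        self.phoneme_head = nn.Linear(hidden_size, n_phonemes)      # masked phoneme prediction
        self.grapheme_head = nn.Linear(hidden_size, n_graphemes)    # grapheme (word token) prediction

    def forward(self, phoneme_ids, attention_mask):
        # Hidden states are phoneme-level, so the same positions feed both heads.
        h = self.encoder(phoneme_ids, attention_mask=attention_mask).last_hidden_state
        return self.phoneme_head(h), self.grapheme_head(h)
```
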
## Pre-requisites
1. Python >= 3.7
2. Clone this repository:
```bash
git clone https://github.com/yl4579/PL-BERT.git
cd PL-BERT
```
3. Create a new environment (recommended):
```bash
conda create --name BERT python=3.8
conda activate BERT
python -m ipykernel install --user --name BERT --display-name "BERT"
```
4. Install python requirements:
```bash
pip install pandas singleton-decorator datasets "transformers<4.33.3" accelerate nltk phonemizer sacremoses pebble
```

## Preprocessing
Please refer to the notebook [preprocess.ipynb](https://github.com/yl4579/PL-BERT/blob/main/preprocess.ipynb) for more details. The preprocessing is for the English Wikipedia dataset only. I will make a new branch for Japanese if I have extra time to demonstrate training on other languages. You may also refer to [#6](https://github.com/yl4579/PL-BERT/issues/6#issuecomment-1797869275) for preprocessing in other languages such as Japanese.

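For illustration, here is a minimal word-level phonemization sketch using the `phonemizer` package from the requirements above. It is not the actual preprocess.ipynb pipeline, and it assumes the espeak backend is installed on your system.
```python
# Illustrative sketch only (not the preprocess.ipynb pipeline): phonemize each word
# separately so the phonemes stay aligned with their grapheme, which is what the
# grapheme-prediction pretext task needs. Assumes the espeak backend is installed.
from phonemizer import phonemize

text = "The quick brown fox jumps over the lazy dog."
words = text.split()

phonemes = phonemize(words, language='en-us', backend='espeak', strip=True)

for word, phn in zip(words, phonemes):
    print(f"{word:>10s} -> {phn}")
```
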
## Training
Please run each cell in the notebook [train.ipynb](https://github.com/yl4579/PL-BERT/blob/main/train.ipynb). You will need to change the line `config_path = "Configs/config.yml"` in cell 2 if you wish to use a different config file. The training code lives in a Jupyter notebook primarily because the initial experiment was conducted in one, but you can easily turn it into a Python script if you want to.

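As a sketch of that step, pointing the notebook at a different config only requires changing the path handed to `yaml.safe_load`; the exact cell contents are assumed here, but the `model_params` section is the same one the finetuning code below reads.
```python
# Minimal sketch of loading a training config (assumed to mirror cell 2 of train.ipynb).
import yaml

config_path = "Configs/config.yml"   # change this to your own config file
config = yaml.safe_load(open(config_path))
print(config['model_params'])        # ALBERT hyperparameters, also used by the finetuning code below
```
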
## Finetuning
Here is an example of how to use PL-BERT for StyleTTS finetuning. You can use it with other TTS models by replacing their text encoder with the pre-trained PL-BERT. A short sanity-check sketch for the loaded model follows these steps.
1. Modify line 683 in [models.py](https://github.com/yl4579/StyleTTS/blob/main/models.py#L683) with the following code to load the BERT model into StyleTTS:
```python
from transformers import AlbertConfig, AlbertModel

log_dir = "YOUR PL-BERT CHECKPOINT PATH"
config_path = os.path.join(log_dir, "config.yml")
plbert_config = yaml.safe_load(open(config_path))

albert_base_configuration = AlbertConfig(**plbert_config['model_params'])
bert = AlbertModel(albert_base_configuration)

# find the latest PL-BERT checkpoint (highest step number) in log_dir
ckpts = []
for f in os.listdir(log_dir):
    if f.startswith("step_"): ckpts.append(f)

iters = [int(f.split('_')[-1].split('.')[0]) for f in ckpts if os.path.isfile(os.path.join(log_dir, f))]
iters = sorted(iters)[-1]

checkpoint = torch.load(log_dir + "/step_" + str(iters) + ".t7", map_location='cpu')
state_dict = checkpoint['net']

# strip the DataParallel `module.` prefix and keep only the encoder weights
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:] # remove `module.`
    if name.startswith('encoder.'):
        name = name[8:] # remove `encoder.`
        new_state_dict[name] = v
bert.load_state_dict(new_state_dict)

nets = Munch(bert=bert,
             # linear projection to match the hidden size (BERT 768, StyleTTS 512)
             bert_encoder=nn.Linear(plbert_config['model_params']['hidden_size'], args.hidden_dim),
             predictor=predictor,
             decoder=decoder,
             pitch_extractor=pitch_extractor,
             text_encoder=text_encoder,
             style_encoder=style_encoder,
             text_aligner=text_aligner,
             discriminator=discriminator)
```
2. Modify line 126 in [train_second.py](https://github.com/yl4579/StyleTTS/blob/main/train_second.py#L126) with the following code to adjust the learning rate of the BERT model:
```python
# for stability
for g in optimizer.optimizers['bert'].param_groups:
    g['betas'] = (0.9, 0.99)
    g['lr'] = 1e-5
    g['initial_lr'] = 1e-5
    g['min_lr'] = 0
    g['weight_decay'] = 0.01
```
3. Modify line 211 in [train_second.py](https://github.com/yl4579/StyleTTS/blob/main/train_second.py#L211) with the following code to replace the text encoder with the BERT encoder:
```python
bert_dur = model.bert(texts, attention_mask=(~text_mask).int()).last_hidden_state
d_en = model.bert_encoder(bert_dur).transpose(-1, -2)
d, _ = model.predictor(d_en, s,
                       input_lengths,
                       s2s_attn_mono,
                       m)
```
[line 257](https://github.com/yl4579/StyleTTS/blob/main/train_second.py#L257):
```python
_, p = model.predictor(d_en, s,
                       input_lengths,
                       s2s_attn_mono,
                       m)
```
and [line 415](https://github.com/yl4579/StyleTTS/blob/main/train_second.py#L415):
```python
bert_dur = model.bert(texts, attention_mask=(~text_mask).int()).last_hidden_state
d_en = model.bert_encoder(bert_dur).transpose(-1, -2)
d, p = model.predictor(d_en, s,
                       input_lengths,
                       s2s_attn_mono,
                       m)
```

4. Modify line 347 in [train_second.py](https://github.com/yl4579/StyleTTS/blob/main/train_second.py#L347) with the following code to make sure the parameters of the BERT model are updated:
```python
optimizer.step('bert_encoder')
optimizer.step('bert')
```
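As referenced above, here is a short, optional sanity check for the PL-BERT loaded in step 1. It is an illustrative sketch rather than part of the StyleTTS code: the tensor contents are placeholders, and `bert` is the model built by the step 1 snippet.
```python
import torch

# Placeholder batch of phoneme token ids (batch of 1, 50 tokens) and its attention mask.
phoneme_ids = torch.zeros(1, 50, dtype=torch.long)
attention_mask = torch.ones(1, 50, dtype=torch.long)

with torch.no_grad():
    hidden = bert(phoneme_ids, attention_mask=attention_mask).last_hidden_state

# Expect (1, 50, hidden_size); hidden_size is 768 for the released checkpoint.
print(hidden.shape)
```
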
The pre-trained PL-BERT on Wikipedia for 1M steps can be downloaded at: [PL-BERT link](https://drive.google.com/file/d/19gzPmWKdmakeVszSNuUtVMMBaFYMQqJ7/view?usp=sharing).

The demo on the LJSpeech dataset, along with the pre-modified StyleTTS repo and pre-trained models, can be downloaded here: [StyleTTS Link](https://drive.google.com/file/d/18DU4JrW1rhySrIk-XSxZkXt2MuznxoM-/view?usp=sharing). This zip file contains the code modifications above, the pre-trained PL-BERT model listed above, pre-trained StyleTTS with PL-BERT, pre-trained StyleTTS without PL-BERT, and the pre-trained HifiGAN on LJSpeech from the StyleTTS repo.

## References
- [NVIDIA/NeMo-text-processing](https://github.com/NVIDIA/NeMo-text-processing)
- [tomaarsen/TTSTextNormalization](https://github.com/tomaarsen/TTSTextNormalization)