---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: vi
license: apache-2.0
datasets:
- infore
---
# Install TensorFlowTTS
```
pip install TensorFlowTTS
```
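The PyPI release can lag behind the repository. If you need the latest code, installing directly from GitHub is an alternative (a sketch, assuming you want the current main branch):
```
pip install git+https://github.com/TensorSpeech/TensorFlowTTS.git
```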
## Converting your Text to Mel Spectrogram
```python
import soundfile as sf
import IPython.display as ipd
import tensorflow as tf

from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel

# Load the text processor and the FastSpeech2 acoustic model.
processor = AutoProcessor.from_pretrained("MarcNg/fastspeech2-vi-infore")
fastspeech2 = TFAutoModel.from_pretrained("MarcNg/fastspeech2-vi-infore")

# "Hello, this is an example of converting text to speech."
text = "xin chào đây là một ví dụ về chuyển đổi văn bản thành giọng nói"
input_ids = processor.text_to_sequence(text)

# Generate mel spectrograms before and after the postnet, plus predicted durations.
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
    speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
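As a quick sanity check, you can inspect the returned tensors. A minimal sketch, assuming the usual FastSpeech2 output layout of `[batch, frames, n_mels]` with 80 mel bins:
```python
# Assumed layout: one utterance, T frames, 80 mel bins.
print(mel_after.shape)         # e.g. (1, T, 80)
print(duration_outputs.shape)  # per-input-token predicted durations in frames
```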
## Bonus: Convert Mel Spectrogram to Speech
```python
# Load a pretrained Multi-band MelGAN vocoder to turn mel spectrograms into waveforms.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")

# Vocode both mels and drop the batch and channel dimensions.
audio_before = mb_melgan.inference(mel_before)[0, :, 0]
audio_after = mb_melgan.inference(mel_after)[0, :, 0]

# Save as 16-bit PCM WAV at 22.05 kHz, then play the postnet version inline.
sf.write("audio_before.wav", audio_before, 22050, "PCM_16")
sf.write("audio_after.wav", audio_after, 22050, "PCM_16")
ipd.Audio("audio_after.wav")
```
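The two steps above can be wrapped into a single convenience function. This is a minimal sketch built only from the calls shown earlier; the `tts` name and its arguments are our own, not part of the TensorFlowTTS API:
```python
def tts(text, speed=1.0):
    """Synthesize Vietnamese text to a float waveform at 22,050 Hz (hypothetical helper)."""
    input_ids = processor.text_to_sequence(text)
    _, mel_after, *_ = fastspeech2.inference(
        input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
        speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
        speed_ratios=tf.convert_to_tensor([speed], dtype=tf.float32),
        f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
        energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    )
    return mb_melgan.inference(mel_after)[0, :, 0]

# Example: synthesize a greeting ("hello") at normal speed.
sf.write("hello.wav", tts("xin chào"), 22050, "PCM_16")
```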
## Referencing FastSpeech2
```bibtex
@misc{ren2021fastspeech,
  title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech},
  author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu},
  year={2021},
  eprint={2006.04558},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
```