---
language: "en"
inference: false
tags:
- Vocoder
- HiFIGAN
- speech-synthesis
- speechbrain
license: "apache-2.0"
datasets:
- LibriTTS
---


<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# Vocoder with HiFIGAN Unit trained on LibriTTS

This repository provides all the necessary tools for using a [scalable HiFiGAN Unit](https://arxiv.org/abs/2406.10735) vocoder trained with [LibriTTS](https://www.openslr.org/141/).

The pre-trained model takes discrete self-supervised representations as input and produces a waveform as output. Typically, this model is used on top of a speech-to-unit translation model that converts an input utterance from a source language into a sequence of discrete speech units in a target language.
To generate the discrete self-supervised representations, we employ a K-means clustering model trained on HuBERT hidden layers, with `k=1000`.
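
A rough, illustrative sketch of how such units can be extracted is given below. The HuBERT checkpoint, the single hidden layer, the use of scikit-learn, and the tiny codebook size are placeholder assumptions for the example, not the configuration used to train this vocoder:

```python
import torch
from sklearn.cluster import KMeans
from transformers import HubertModel

# Placeholder HuBERT checkpoint and layer; the actual pipeline clusters several
# hidden layers of a HuBERT model with a codebook of k=1000.
hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960")

wav = torch.randn(1, 16000 * 5)  # (batch, samples): 5 s of dummy 16 kHz audio
with torch.no_grad():
    # hidden_states[6] has shape (batch, frames, dim); take the first (only) item
    feats = hubert(wav, output_hidden_states=True).hidden_states[6][0]

# Tiny codebook purely for illustration; the codebook used by this card has k=1000
kmeans = KMeans(n_clusters=20, n_init=10).fit(feats.numpy())
units = torch.from_numpy(kmeans.predict(feats.numpy()))  # one discrete unit per frame
```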

## Install SpeechBrain

First of all, please install transformers and SpeechBrain with the following command:

```
pip install speechbrain transformers
```

We encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).


### Using the Vocoder

```python
import torch
from speechbrain.inference.vocoders import UnitHIFIGAN

# Load the pre-trained unit-based HiFi-GAN vocoder
hifi_gan_unit = UnitHIFIGAN.from_hparams(source="speechbrain/hifigan-hubert-l1-3-7-12-18-23-k1000-LibriTTS", savedir="pretrained_models/vocoder")
# Dummy sequence of discrete speech units (valid indices for this model are 0-999, since k=1000)
codes = torch.randint(0, 99, (100, 1))
# Decode the discrete units into a waveform
waveform = hifi_gan_unit.decode_unit(codes)
```
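
To listen to the result, the output tensor can be written to a file. Note that `torchaudio` and the 16 kHz sampling rate below are assumptions for this example; check the model's hyperparameters for the actual output sampling rate:

```python
import torchaudio

# Reshape to (channels, time) regardless of the exact output shape and save to disk
torchaudio.save("vocoder_output.wav", waveform.detach().cpu().reshape(1, -1), 16000)
```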


### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
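
For example, here is a minimal sketch combining this option with the snippet above (moving the unit tensor to the GPU is shown as a precaution; the inference interface may also handle device placement itself):

```python
import torch
from speechbrain.inference.vocoders import UnitHIFIGAN

# Load the vocoder directly on the GPU
hifi_gan_unit = UnitHIFIGAN.from_hparams(
    source="speechbrain/hifigan-hubert-l1-3-7-12-18-23-k1000-LibriTTS",
    savedir="pretrained_models/vocoder",
    run_opts={"device": "cuda"},
)
codes = torch.randint(0, 99, (100, 1)).to("cuda")  # keep inputs on the same device
waveform = hifi_gan_unit.decode_unit(codes)
```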


### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.


#### Referencing SpeechBrain

```
@misc{SB2021,
  author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua},
  title = {SpeechBrain},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
}
```

#### About SpeechBrain
SpeechBrain is an open-source, all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly, and it achieves competitive or state-of-the-art performance across a variety of domains.

Website: https://speechbrain.github.io/

GitHub: https://github.com/speechbrain/speechbrain