underdogliu1005 committed on
Commit
75be414
1 Parent(s): c7bd96a

Initiate the model repository


Hope it contains all the files needed. Will do some sanity checks on verification.

Files changed (8)
  1. README.md +133 -1
  2. classifier.ckpt +3 -0
  3. config.json +3 -0
  4. embedding_model.ckpt +3 -0
  5. example1.wav +0 -0
  6. example2.flac +0 -0
  7. hyperparams.yaml +49 -0
  8. normalizer.ckpt +3 -0
README.md CHANGED
@@ -1,3 +1,135 @@
  ---
- license: apache-2.0
+ language: "en"
+ thumbnail:
+ tags:
+ - speechbrain
+ - embeddings
+ - Speaker
+ - Verification
+ - Identification
+ - pytorch
+ - ResNet
+ - TDNN
+ license: "apache-2.0"
+ datasets:
+ - voxceleb
+ metrics:
+ - EER
+ widget:
+ - example_title: VoxCeleb Speaker id10003
+   src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
+ - example_title: VoxCeleb Speaker id10004
+   src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
  ---
+
+ <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
+ <br/><br/>
+
+ # Speaker Verification with ResNet embeddings on VoxCeleb
+
+ This repository provides all the necessary tools to perform speaker verification with a pretrained ResNet TDNN model using SpeechBrain.
+ The system can also be used to extract speaker embeddings.
+ It is trained on VoxCeleb1 + VoxCeleb2 training data.
+
+ For a better experience, we encourage you to learn more about
+ [SpeechBrain](https://speechbrain.github.io). The model's performance on the VoxCeleb1 test set (cleaned) is:
+
+ | Release | EER (%) | minDCF |
+ |:-------------:|:--------------:|:--------------:|
+ | 29-07-23 | 1.05 | 0.1082 |
+
+
+ ## Pipeline description
+
+ This system is composed of a ResNet TDNN model trained with Additive Margin Softmax loss. Speaker verification is performed using cosine distance between speaker embeddings.
+
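As a rough illustration of that scoring step, here is a minimal sketch that compares two embeddings with a cosine similarity, using the `EncoderClassifier` interface shown in the sections below. The local file names (`example1.wav`, `example2.flac`) are assumed to be available copies of the example files shipped in this repository; this is a sketch, not an official recipe.

```python
# Minimal sketch (not from the model card): cosine scoring between two speaker embeddings.
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-resnet-voxceleb")

# One embedding per utterance; paths assume local copies of the example files.
wav1, _ = torchaudio.load("example1.wav")
wav2, _ = torchaudio.load("example2.flac")
emb1 = classifier.encode_batch(wav1).squeeze()
emb2 = classifier.encode_batch(wav2).squeeze()

# Higher cosine similarity means the two utterances are more likely from the same speaker.
score = torch.nn.functional.cosine_similarity(emb1, emb2, dim=0)
print(score.item())
```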
+ ## Install SpeechBrain
+
+ First of all, please install SpeechBrain with the following command:
+
+ ```bash
+ pip install speechbrain
+ ```
+
+ Please note that we encourage you to read our tutorials and learn more about
+ [SpeechBrain](https://speechbrain.github.io).
+
+ ### Compute your speaker embeddings
+
+ ```python
+ import torchaudio
+ from speechbrain.pretrained import EncoderClassifier
+ classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-resnet-voxceleb")
+ signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
+ embeddings = classifier.encode_batch(signal)
+ ```
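For reference, the embedding dimensionality should correspond to `lin_neurons: 256` in the `hyperparams.yaml` added in this commit, and `encode_batch` returns a batched tensor; checking `embeddings.shape` on your own installation is a quick sanity test.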
+
+ ### Perform Speaker Verification
+
+ ```python
+ from speechbrain.pretrained import SpeakerRecognition
+ verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-resnet-voxceleb", savedir="pretrained_models/spkrec-resnet-voxceleb")
+ score, prediction = verification.verify_files("speechbrain/spkrec-resnet-voxceleb/example1.wav", "speechbrain/spkrec-resnet-voxceleb/example2.flac")
+ ```
+ The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
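The decision is a thresholded cosine score. As a hedged sketch only: recent SpeechBrain releases also expose a `verify_batch` method on `SpeakerRecognition` that accepts an explicit `threshold` argument; the value below is an assumption, not a calibrated operating point, and the local file paths are illustrative.

```python
# Hedged sketch: batched verification with an explicit decision threshold.
import torchaudio
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-resnet-voxceleb",
    savedir="pretrained_models/spkrec-resnet-voxceleb",
)

wav1, _ = torchaudio.load("example1.wav")  # assumes local copies of the example files
wav2, _ = torchaudio.load("example2.flac")

# threshold=0.25 is an assumed value; tune it on a held-out trial list.
score, prediction = verification.verify_batch(wav1, wav2, threshold=0.25)
print(score, prediction)
```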
+
+ ### Inference on GPU
+ To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
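For example (a minimal sketch; the `savedir` and device string are illustrative choices):

```python
# Sketch: load the pretrained model on a CUDA device via run_opts.
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-resnet-voxceleb",
    savedir="pretrained_models/spkrec-resnet-voxceleb",
    run_opts={"device": "cuda"},
)

signal, fs = torchaudio.load("example1.wav")  # assumes a local copy of example1.wav
embeddings = classifier.encode_batch(signal)  # computed on the GPU
```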
+
+ ### Training
+ The model was trained with SpeechBrain (commit `aa018540`).
+ To train it from scratch, follow these steps:
+ 1. Clone SpeechBrain:
+ ```bash
+ git clone https://github.com/speechbrain/speechbrain/
+ ```
+ 2. Install it:
+ ```bash
+ cd speechbrain
+ pip install -r requirements.txt
+ pip install -e .
+ ```
+
+ 3. Run the training:
+ ```bash
+ cd recipes/VoxCeleb/SpeakerRec
+ python train_speaker_embeddings.py hparams/train_resnet.yaml --data_folder=your_data_folder
+ ```
+
+ You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
+
+ ### Limitations
+ The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
+
+ #### Referencing ResNet TDNN
+ ```bibtex
+ @article{VILLALBA2020101026,
+   title = {State-of-the-art speaker recognition with neural network embeddings in NIST SRE18 and Speakers in the Wild evaluations},
+   journal = {Computer Speech & Language},
+   volume = {60},
+   pages = {101026},
+   year = {2020},
+   doi = {10.1016/j.csl.2019.101026},
+   author = {Jesús Villalba and Nanxin Chen and David Snyder and Daniel Garcia-Romero and Alan McCree and Gregory Sell and Jonas Borgstrom and Leibny Paola García-Perera and Fred Richardson and Réda Dehak and Pedro A. Torres-Carrasquillo and Najim Dehak},
+ }
+ ```
+
+ # **Citing SpeechBrain**
+ Please cite SpeechBrain if you use it for your research or business.
+
+ ```bibtex
+ @misc{speechbrain,
+   title={{SpeechBrain}: A General-Purpose Speech Toolkit},
+   author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
+   year={2021},
+   eprint={2106.04624},
+   archivePrefix={arXiv},
+   primaryClass={eess.AS},
+   note={arXiv:2106.04624}
+ }
+ ```
+
+ # **About SpeechBrain**
+ - Website: https://speechbrain.github.io/
+ - Code: https://github.com/speechbrain/speechbrain/
+ - HuggingFace: https://huggingface.co/speechbrain/
classifier.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2709a22f8ce811352743154a7a4b23fe22142e5f50f205225c10e8405630139f
+ size 7378804
config.json ADDED
@@ -0,0 +1,3 @@
+ {
+     "speechbrain_interface": "SpeakerRecognition"
+ }
embedding_model.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:120a15c709add1679286fbf1e71d89ca4ed43981c189eecda60fd02343a00738
+ size 62001543
example1.wav ADDED
Binary file (104 kB).
 
example2.flac ADDED
Binary file (39.6 kB).
 
hyperparams.yaml ADDED
@@ -0,0 +1,49 @@
+ # ############################################################################
+ # Model: ResNet for Speaker verification
+ # ############################################################################
+
+ # Feature parameters
+ n_mels: 80
+
+ # Pretrain folder (HuggingFace)
+ pretrained_path: speechbrain/spkrec-resnet-voxceleb
+
+ # Output parameters
+ out_n_neurons: 7205
+
+ # Model params
+ compute_features: !new:speechbrain.lobes.features.Fbank
+     n_mels: !ref <n_mels>
+
+ mean_var_norm: !new:speechbrain.processing.features.InputNormalization
+     norm_type: sentence
+     std_norm: False
+
+ embedding_model: !new:speechbrain.lobes.models.ResNet.ResNet
+     input_size: !ref <n_mels>
+     channels: [128, 128, 256, 256]
+     strides: [1, 2, 2, 2]
+     block_sizes: [3, 4, 6, 3]
+     lin_neurons: 256
+
+ classifier: !new:speechbrain.lobes.models.ECAPA_TDNN.Classifier
+     input_size: 256
+     out_neurons: !ref <out_n_neurons>
+
+ modules:
+     compute_features: !ref <compute_features>
+     mean_var_norm: !ref <mean_var_norm>
+     embedding_model: !ref <embedding_model>
+     classifier: !ref <classifier>
+
+ label_encoder: !new:speechbrain.dataio.encoder.CategoricalEncoder
+
+
+ pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
+     loadables:
+         embedding_model: !ref <embedding_model>
+         classifier: !ref <classifier>
+     paths:
+         embedding_model: !ref <pretrained_path>/embedding_model.ckpt
+         classifier: !ref <pretrained_path>/classifier.ckpt
+
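For context, here is a minimal sketch of how a hyperparameter file like this is typically consumed, assuming the standard `hyperpyyaml` loader and the `Pretrainer` class referenced in the YAML above; the pretrained interfaces shown in the README handle these steps for you.

```python
# Sketch: load hyperparams.yaml and fetch the pretrained checkpoints it lists.
from hyperpyyaml import load_hyperpyyaml

with open("hyperparams.yaml") as f:
    hparams = load_hyperpyyaml(f)

# The Pretrainer collects embedding_model.ckpt and classifier.ckpt from the
# repository given by pretrained_path, then loads them into the modules.
pretrainer = hparams["pretrainer"]
pretrainer.collect_files()
pretrainer.load_collected()

embedding_model = hparams["embedding_model"]  # ready for feature -> embedding inference
```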
normalizer.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b234fc610205075c0be3ede6ca63371c648832da72eb56403fbfc7d75311cf3
+ size 1139