---
language: en
thumbnail: null
tags:
  - embeddings
  - Speaker
  - Verification
  - Identification
  - pytorch
  - ECAPA
  - TDNN
license: apache-2.0
datasets:
  - voxceleb
metrics:
  - EER
  - min_dcf
---

# Speaker Verification with ECAPA-TDNN embeddings on Voxceleb

This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. It is trained on Voxceleb1 + Voxceleb2 training data.

For a better experience, we encourage you to learn more about SpeechBrain. The model performance on the Voxceleb1-test set is:

| Release  | EER (%) | minDCF  |
|----------|---------|---------|
| 05-03-21 | 0.69    | 0.08258 |
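For reference, the EER is the operating point where the false-acceptance and false-rejection rates are equal, and minDCF is the minimum of the NIST detection cost function over decision thresholds. A sketch of the usual definition (the cost parameters are the conventional ones, not values stated in this card):

```latex
\mathrm{DCF}(\theta) = C_{\mathrm{miss}}\,P_{\mathrm{target}}\,P_{\mathrm{miss}}(\theta)
  + C_{\mathrm{fa}}\,\bigl(1 - P_{\mathrm{target}}\bigr)\,P_{\mathrm{fa}}(\theta),
\qquad \mathrm{minDCF} = \min_{\theta}\,\mathrm{DCF}(\theta)
```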

## Pipeline description

This system is composed of an ECAPA-TDNN model, a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax loss. Speaker verification is performed using cosine distance between speaker embeddings.
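Because verification reduces to a cosine score between two embedding vectors, the scoring step is easy to reproduce outside the model. A minimal sketch (the 192-dim size matches this model's embeddings; the threshold value is illustrative, not the calibrated one shipped with the pretrained model):

```python
import torch
import torch.nn.functional as F

def cosine_score(emb1: torch.Tensor, emb2: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between two speaker embeddings (higher = more likely same speaker)."""
    return F.cosine_similarity(emb1.flatten(), emb2.flatten(), dim=0)

# Illustrative decision rule with a hypothetical threshold (tune on a dev set).
emb1, emb2 = torch.randn(192), torch.randn(192)  # placeholders for real embeddings
same_speaker = cosine_score(emb1, emb2) > 0.25
```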

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```bash
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about SpeechBrain.

## Compute your speaker embeddings

```python
import torchaudio
from speechbrain.pretrained import SpeakerRecognition

# Download and cache the pretrained model on first use.
verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb",
                                               savedir="pretrained_models/spkrec-ecapa-voxceleb")
# Extract a speaker embedding from a single utterance.
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = verification.encode_batch(signal)
```
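`encode_batch` also accepts a padded batch of waveforms together with their relative lengths, which is convenient when embedding several utterances at once. A minimal sketch reusing the `verification` object above (the signals are placeholders):

```python
import torch

# Two placeholder waveforms of different lengths (16 kHz audio).
wav1, wav2 = torch.randn(16000), torch.randn(24000)
batch = torch.nn.utils.rnn.pad_sequence([wav1, wav2], batch_first=True)

# Relative lengths mark how much of each padded row is real audio.
rel_lens = torch.tensor([wav1.shape[0], wav2.shape[0]]) / batch.shape[1]
embeddings = verification.encode_batch(batch, rel_lens)  # one embedding per utterance
```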

## Perform Speaker Verification

```python
import torchaudio
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb",
                                               savedir="pretrained_models/spkrec-ecapa-voxceleb")
# Compare two utterances: the score is the cosine similarity between their embeddings.
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
signal2, fs = torchaudio.load('samples/audio_samples/example2.flac')
score, prediction = verification.verify_batch(signal, signal2)
```

The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
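If you only have file paths, `verify_files` loads the audio and verifies in one call. Since the returned score is a cosine similarity, you can also apply your own operating threshold instead of the built-in decision (the value below is illustrative):

```python
# Verify directly from files; returns (score, prediction) like verify_batch.
score, prediction = verification.verify_files(
    "samples/audio_samples/example1.wav",
    "samples/audio_samples/example2.flac",
)

# A stricter hypothetical threshold trades false acceptances for false rejections.
strict_same = score > 0.5  # illustrative value; tune on held-out trials
```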

## Referencing ECAPA-TDNN

```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author    = {Brecht Desplanques and
               Jenthe Thienpondt and
               Kris Demuynck},
  editor    = {Helen Meng and
               Bo Xu and
               Thomas Fang Zheng},
  title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
               in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages     = {3830--3834},
  publisher = {{ISCA}},
  year      = {2020},
}
```

## Referencing SpeechBrain

```bibtex
@misc{SB2021,
    author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua},
    title = {SpeechBrain},
    year = {2021},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```