---
language:
- en
datasets:
tags:
- speech
---

# UniSpeech-SAT-Large for Speaker Verification

[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)

The model was pre-trained on 16 kHz sampled speech audio with an utterance- and speaker-contrastive loss. When using the model, make sure that your speech input is also sampled at 16 kHz.

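If your audio is stored at a different sampling rate, the `datasets` library can resample it to 16 kHz on the fly before feature extraction. A minimal sketch, assuming a `datasets` dataset with an `"audio"` column (as in the usage example below):

```python
from datasets import Audio, load_dataset

# any dataset with an "audio" column; the demo set below is just an example
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

# casting the column makes `datasets` decode and resample the audio to 16 kHz on access
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```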

The model was pre-trained on:

- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)

[Paper: UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752)

Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu

**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years have witnessed great successes in applying self-supervised learning to speech recognition, while limited exploration has been attempted in applying SSL to modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised extraction of speaker information. First, we apply multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created in an unsupervised manner and incorporated during training. We integrate the proposed methods into the HuBERT framework. Experiment results on the SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed to verify the efficacy of each proposed method. Finally, we scale up the training dataset to 94 thousand hours of public audio data and achieve further performance improvement in all SUPERB tasks.*

The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.

# Fine-tuning details

The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss.

[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)

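For reference, Additive Margin Softmax computes class logits from cosine similarities and subtracts a fixed margin from the target-class logit before a scaled cross-entropy. Below is a minimal, illustrative PyTorch sketch of that idea; the scale `s` and margin `m` values are assumptions, not the hyper-parameters used for this checkpoint.

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(embeddings, class_weights, labels, s=30.0, m=0.2):
    # cosine similarity between L2-normalized embeddings and class weight vectors
    cos = F.normalize(embeddings, dim=-1) @ F.normalize(class_weights, dim=-1).T
    # subtract the additive margin from the target-class logit only
    one_hot = F.one_hot(labels, num_classes=class_weights.shape[0]).to(cos.dtype)
    logits = s * (cos - m * one_hot)
    return F.cross_entropy(logits, labels)
```
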
# Usage

## Speaker Verification

```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-large-sv')
model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-large-sv')

# audio files are decoded on the fly; pass the raw arrays and pad them to the same length
audio = [sample["array"] for sample in dataset[:2]["audio"]]
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.89  # the optimal threshold is dataset-dependent
if similarity < threshold:
    print("Speakers are not the same!")
```

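The same pipeline works for your own recordings. Below is a hedged sketch that loads two local files with `torchaudio` and resamples them to 16 kHz; the file names `speaker1.wav` and `speaker2.wav` are placeholders.

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-large-sv')
model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-large-sv')

def load_16khz(path):
    # load the waveform, mix it down to mono, and resample to the 16 kHz rate the model expects
    waveform, sample_rate = torchaudio.load(path)
    waveform = waveform.mean(dim=0)
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
    return waveform.numpy()

audio = [load_16khz("speaker1.wav"), load_16khz("speaker2.wav")]  # placeholder paths
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```
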
# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE).

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/UniSpeechSAT.png)