---
language:
- en
datasets:
tags:
- speech
---

# WavLM-Base for Speaker Verification

[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)

The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
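
If your audio is stored at a different sampling rate, one way to meet this requirement is to resample on load. Below is a minimal sketch using the `datasets` library's `Audio` feature; the dataset name is only a placeholder, not part of this model card:

```python
from datasets import load_dataset, Audio

# placeholder dataset name; substitute your own audio dataset
dataset = load_dataset("your_audio_dataset", split="train")

# decode the audio column at 16kHz, resampling on the fly if necessary
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```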

The model was pre-trained on 960h of [Librispeech](https://huggingface.co/datasets/librispeech_asr).

[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)

Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*

The original model can be found at https://github.com/microsoft/unilm/tree/master/wavlm.
25
+
26
+ # Fine-tuning details
27
+ The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
28
+ [X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
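
Additive margin softmax subtracts a fixed margin from the target-class cosine logit before scaling, which pushes embeddings of the same speaker closer together. Below is a minimal, self-contained PyTorch sketch of such a loss; the class name and the margin/scale values are illustrative and are not taken from this model's actual training configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive Margin Softmax over speaker classes (illustrative sketch)."""

    def __init__(self, embed_dim, num_speakers, margin=0.2, scale=30.0):
        super().__init__()
        # one learnable weight vector per speaker class
        self.weight = nn.Parameter(torch.randn(num_speakers, embed_dim))
        self.margin = margin  # assumed value, not from the released training code
        self.scale = scale    # assumed value, not from the released training code

    def forward(self, embeddings, labels):
        # cosine similarities between L2-normalized embeddings and class weights
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # subtract the margin only from each example's target-class logit
        one_hot = F.one_hot(labels, num_classes=cos.size(-1)).type_as(cos)
        logits = self.scale * (cos - self.margin * one_hot)
        return F.cross_entropy(logits, labels)
```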
29
+
30
+ # Usage
31
+ ## Speaker Verification
32
+ ```python
33
+ from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector
34
+ from datasets import load_dataset
35
+ import torch
36
+
37
+ dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
38
+ feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-sv')
39
+ model = UniSpeechSatForXVector.from_pretrained('microsoft/wavlm-base-sv')
40
+
41
+ # audio files are decoded on the fly
42
+ inputs = feature_extractor(dataset[:2]["audio"]["array"], return_tensors="pt")
43
+ embeddings = model(**inputs).embeddings
44
+ embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
45
+
46
+ # the resulting embeddings can be used for cosine similarity-based retrieval
47
+ cosine_sim = torch.nn.CosineSimilarity(dim=-1)
48
+ similarity = cosine_sim(embeddings[0], embeddings[1])
49
+ threshold = 0.86 # the optimal threshold is dataset-dependent
50
+ if similarity < threshold:
51
+ print("Speakers are not the same!")
52
+ ```
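
To verify your own recordings rather than dataset samples, the same pipeline applies. A small variant assuming two local WAV files loaded and resampled with `torchaudio` (the file names `speaker1.wav` and `speaker2.wav` are placeholders):

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-sv")
model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-sv")

def load_16khz(path):
    # load an audio file, downmix to mono, and resample to the 16kHz the model expects
    waveform, sample_rate = torchaudio.load(path)
    waveform = waveform.mean(dim=0)
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
    return waveform.numpy()

# placeholder file names; substitute your own recordings
audio = [load_16khz("speaker1.wav"), load_16khz("speaker2.wav")]
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(f"Cosine similarity: {similarity.item():.3f}")
```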

# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE).

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)