---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---

# Wav2Vec2-Large-960h-Lv60 + Self-Training

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

The large model is pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech 16 kHz sampled speech audio. The model was trained with the [self-training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16 kHz.

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# tokenize
input_values = processor(ds["speech"][:2], return_tensors="pt", padding="longest").input_values  # batch size 2

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
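
The snippet above runs greedy (argmax) CTC decoding on two utterances from a dummy LibriSpeech split. To transcribe your own recordings, keep in mind that the model expects 16 kHz input: audio recorded at another rate has to be resampled first. Below is a minimal sketch, assuming `torchaudio` is installed and using a hypothetical file `my_audio.wav`:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# load a local file (placeholder path), collapse to mono, and resample to the 16 kHz the model expects
waveform, sample_rate = torchaudio.load("my_audio.wav")
waveform = waveform.mean(dim=0)
if sample_rate != 16_000:
    waveform = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16_000)(waveform)

# same greedy CTC decoding as above
input_values = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```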

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer


# load the LibriSpeech "clean" test set
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# read the audio files into arrays
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

librispeech_eval = librispeech_eval.map(map_to_array)

# run batched greedy decoding on the GPU
def map_to_pred(batch):
    inputs = processor(batch["speech"], return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=16, remove_columns=["speech"])

# word error rate between reference transcripts and predictions
print("WER:", wer(result["text"], result["transcription"]))
```
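
The snippet above scores the "clean" test set. The "other" column of the result table below can be obtained with the same pipeline by loading the noisier configuration instead:

```python
# evaluate on the noisier "other" test split instead of "clean"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
```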

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 1.9 | 3.9 |