---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.4
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.6
---

# Wav2Vec2-Base-960h

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

This is the base model, pretrained and fine-tuned on 960 hours of LibriSpeech 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
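
If your audio is stored at a different sampling rate, the `datasets` library can resample it on the fly. A minimal sketch, assuming a local file `audio.wav` (a placeholder path, not a file shipped with this model):

```python
from datasets import Dataset, Audio

# hypothetical dataset built from a local file; casting the column to
# Audio(sampling_rate=16_000) decodes and resamples to 16 kHz on access
ds = Dataset.from_dict({"audio": ["audio.wav"]})
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

waveform = ds[0]["audio"]["array"]  # 16 kHz numpy array, ready for the processor
```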

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

## Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load dummy dataset and read audio files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize the raw 16 kHz waveform
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
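
Alternatively, the `pipeline` API bundles the processor and model above and handles decoding internally. A minimal sketch (`sample1.flac` is a placeholder path):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# accepts a path to an audio file or a raw 16 kHz numpy waveform
print(asr("sample1.flac")["text"])
```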

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # batch["audio"] is a list of audio dicts because batched=True below
    input_values = processor([audio["array"] for audio in batch["audio"]], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
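
To evaluate on the "other" test set, swap the `"clean"` config for `"other"` when loading the dataset. Note that `jiwer.wer` returns a fraction, while the table below reports percentages (0.034 corresponds to 3.4). As a toy illustration of the metric (made-up strings, not model output):

```python
from jiwer import wer

# one substitution ("GRAY" -> "GREY") out of four reference words -> 0.25
print(wer("THE SKY IS GRAY", "THE SKY IS GREY"))
```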

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |