---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 4.3
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 9.0
---

# S2T-SMALL-LIBRISPEECH-ASR

`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).

## Model description

S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.

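As a rough sketch of what that objective looks like with the `transformers` API (the placeholder audio and transcript below are illustrative only, not training data), passing `labels` to the forward pass returns the autoregressive cross-entropy loss:

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

# one second of placeholder 16 kHz audio, just to illustrate the shapes
waveform = torch.randn(16_000).numpy()
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
labels = processor.tokenizer("a placeholder transcript", return_tensors="pt").input_ids

# the decoder is teacher-forced on the labels; `loss` is the mean
# cross-entropy over the target tokens
outputs = model(input_features=inputs.input_features, labels=labels)
print(outputs.loss)
```
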
## Intended uses & limitations

This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.

### How to use

As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.

*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features, and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece),
so be sure to install those packages before running the examples.*

You could either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.

```python
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)

# extract the filter bank features; batch size 1
input_features = processor(
    ds[0]["audio"]["array"],
    sampling_rate=16_000,
    return_tensors="pt"
).input_features
generated_ids = model.generate(input_features)

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
```

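The model expects 16 kHz input. If your audio is stored at a different sampling rate, one way to resample it is via the `datasets` library's `Audio` feature (a minimal sketch; the dataset here is the same dummy split used above):

```python
from datasets import load_dataset, Audio

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# decode the audio column at 16 kHz; resampling happens lazily on access
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
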
#### Evaluation on LibriSpeech Test

The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test datasets.

```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change to "other" for the other test dataset
wer = load_metric("wer")

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)

def map_to_pred(batch):
    # extract filter bank features for the whole batch of audio arrays
    features = processor(
        [audio["array"] for audio in batch["audio"]],
        sampling_rate=16_000,
        padding=True,
        return_tensors="pt"
    )
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")

    gen_tokens = model.generate(input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])

print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```

*Result (WER)*:

| "clean" | "other" |
|:-------:|:-------:|
|   4.3   |   9.0   |

## Training data

S2T-SMALL-LIBRISPEECH-ASR is trained on the [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16 kHz read English speech.

## Training procedure

### Preprocessing

The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Utterance-level CMVN (cepstral mean and variance normalization)
is then applied to each example.

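A minimal sketch of this feature pipeline with torchaudio follows; the WAV path is hypothetical, and details such as waveform scaling may differ from the fairseq recipe:

```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

# hypothetical 16 kHz mono recording
waveform, sample_rate = torchaudio.load("utterance.wav")

# Kaldi-compliant 80-channel log mel filter bank features: (num_frames, 80)
fbank = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)

# utterance-level CMVN: normalize each feature dimension over the utterance
fbank = (fbank - fbank.mean(dim=0)) / fbank.std(dim=0)
```
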
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 10,000.

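For illustration, training such a vocabulary with the SentencePiece Python API might look like the following; the transcript file is hypothetical and the unigram model type is an assumption based on the fairseq S2T recipe:

```python
import sentencepiece as spm

# "transcripts.txt" is a hypothetical file with one lowercased transcript per line
spm.SentencePieceTrainer.train(
    input="transcripts.txt",
    model_prefix="librispeech_unigram10000",
    vocab_size=10_000,
    model_type="unigram",
)
```
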
### Training

The model is trained with standard autoregressive cross-entropy loss and with [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.

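SpecAugment masks random blocks of time steps and frequency channels in the filter bank features during training. A minimal sketch with torchaudio's transforms is shown below; the mask sizes are illustrative, not necessarily the exact policy used for this checkpoint:

```python
import torch
import torchaudio.transforms as T

# fake batch of log mel filter bank features: (batch, mel_channels=80, time)
features = torch.randn(8, 80, 1000)

freq_mask = T.FrequencyMasking(freq_mask_param=27)  # mask up to 27 mel channels
time_mask = T.TimeMasking(time_mask_param=100)      # mask up to 100 frames

augmented = time_mask(freq_mask(features))
```
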
### BibTeX entry and citation info

```bibtex
@inproceedings{wang2020fairseqs2t,
  title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
  author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
  booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
  year = {2020},
}
```