---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
- audio
- automatic-speech-recognition
widget:
- label: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- label: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: apache-2.0
---

# Wav2Vec2-Large-Robust finetuned on Switchboard

This model is a fine-tuned version of the [wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) model, a member of [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) family of models.
It has been pretrained on:

- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data

and has subsequently been finetuned on 300 hours of

- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data

When using the model, make sure that your speech input is also sampled at 16kHz.
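
Audio recorded at a different rate should be resampled to 16kHz first. Below is a minimal sketch using `torchaudio` (the file path `speech.wav` is just a placeholder):

```python
import torchaudio
import torchaudio.functional as F

# load an audio file at its native sampling rate (placeholder path)
waveform, sample_rate = torchaudio.load("speech.wav")

# resample to the 16kHz rate the model expects
if sample_rate != 16_000:
    waveform = F.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```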

[Paper: Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)

Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli

**Abstract**

Self-supervised learning of speech representations has been a very active research area, but most work is focused on a single domain, such as read audio books, for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models are available at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-swbd-300h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-swbd-300h")

# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# tokenize a batch of two utterances, padding to the longest one
input_values = processor(ds["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode (greedy CTC decoding)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
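
The resulting `transcription` is a list of strings, one per input utterance. To sanity-check the output against reference texts, one option is the word error rate from the `jiwer` package; this is a minimal sketch, assuming `jiwer` is installed and that the dataset's `text` column holds the reference transcriptions:

```python
from jiwer import wer

# reference transcriptions for the two decoded utterances
references = ds["text"][:2]

# word error rate between references and model output (lower is better)
print(wer(references, transcription))
```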