Vikas Bhandari committed on
Commit
1e01745
1 Parent(s): f78d3bb
.gitattributes CHANGED
@@ -1,27 +1,17 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
  *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
  *.model filter=lfs diff=lfs merge=lfs -text
  *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
  *.pb filter=lfs diff=lfs merge=lfs -text
  *.pt filter=lfs diff=lfs merge=lfs -text
  *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
  *.bin filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
  *.model filter=lfs diff=lfs merge=lfs -text
  *.msgpack filter=lfs diff=lfs merge=lfs -text
  *.pb filter=lfs diff=lfs merge=lfs -text
  *.pt filter=lfs diff=lfs merge=lfs -text
  *.pth filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
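Each `.gitattributes` line above pairs a path pattern with attribute settings; `filter=lfs diff=lfs merge=lfs -text` routes matching files through Git LFS. A minimal sketch of parsing such lines to list the LFS-tracked patterns (pure Python, no Git required; the three-line sample text is illustrative):

```python
def lfs_patterns(gitattributes_text):
    """Return path patterns whose attributes route them through Git LFS."""
    patterns = []
    for line in gitattributes_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        pattern, *attrs = line.split()
        if "filter=lfs" in attrs:
            patterns.append(pattern)
    return patterns

text = """\
*.bin filter=lfs diff=lfs merge=lfs -text
*.txt text
*.onnx filter=lfs diff=lfs merge=lfs -text
"""
print(lfs_patterns(text))  # ['*.bin', '*.onnx']
```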
README.md CHANGED
@@ -1,3 +1,126 @@
  ---
+ language: en
+ datasets:
+ - librispeech_asr
+ tags:
+ - speech
+ - audio
+ - automatic-speech-recognition
+ - hf-asr-leaderboard
  license: apache-2.0
+ model-index:
+ - name: wav2vec2-large-960h-lv60
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: LibriSpeech (clean)
+       type: librispeech_asr
+       config: clean
+       split: test
+       args:
+         language: en
+     metrics:
+     - name: Test WER
+       type: wer
+       value: 1.9
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: LibriSpeech (other)
+       type: librispeech_asr
+       config: other
+       split: test
+       args:
+         language: en
+     metrics:
+     - name: Test WER
+       type: wer
+       value: 3.9
  ---
+
+ # Wav2Vec2-Large-960h-Lv60 + Self-Training
+
+ [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
+
+ The large model, pretrained on Libri-Light and fine-tuned on 960 hours of LibriSpeech, on 16kHz-sampled speech audio. The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
+
+ [Paper](https://arxiv.org/abs/2006.11477)
+
+ Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
+
+ **Abstract**
+
+ We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
+
+ The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
+
+ # Usage
+
+ To transcribe audio files, the model can be used as a standalone acoustic model as follows:
+
+ ```python
+ from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
+ from datasets import load_dataset
+ import torch
+
+ # load model and processor
+ processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
+ model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
+
+ # load dummy dataset and read sound files
+ ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
+
+ # preprocess the raw waveform into model inputs
+ input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
+
+ # retrieve logits
+ logits = model(input_values).logits
+
+ # take argmax and decode
+ predicted_ids = torch.argmax(logits, dim=-1)
+ transcription = processor.batch_decode(predicted_ids)
+ ```
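Because the model expects 16kHz input, audio at another rate must be resampled first. In practice `torchaudio.transforms.Resample` or `librosa.resample` are the usual tools; as a minimal self-contained sketch, linear interpolation with NumPy illustrates the idea (the 8kHz random signal is purely illustrative, and a production pipeline should use a proper windowed-sinc or polyphase resampler):

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr=16000):
    """Resample a 1-D waveform via linear interpolation (sketch only;
    prefer a polyphase/windowed-sinc resampler for real audio)."""
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    t_orig = np.arange(len(audio)) / orig_sr
    t_target = np.arange(n_target) / target_sr
    return np.interp(t_target, t_orig, audio)

audio_8k = np.random.randn(8000)           # one second of audio at 8 kHz
audio_16k = resample_linear(audio_8k, 8000)
print(len(audio_16k))                      # 16000 samples: one second at 16 kHz
```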
+
+ ## Evaluation
+
+ This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.
+
+ ```python
+ from datasets import load_dataset
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+ import torch
+ from jiwer import wer
+
+ librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
+
+ model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
+ processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
+
+ def map_to_pred(batch):
+     inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
+     input_values = inputs.input_values.to("cuda")
+     attention_mask = inputs.attention_mask.to("cuda")
+
+     with torch.no_grad():
+         logits = model(input_values, attention_mask=attention_mask).logits
+
+     predicted_ids = torch.argmax(logits, dim=-1)
+     transcription = processor.batch_decode(predicted_ids)
+     batch["transcription"] = transcription
+     return batch
+
+ result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
+
+ print("WER:", wer(result["text"], result["transcription"]))
+ ```
+
+ *Result (WER)*:
+
+ | "clean" | "other" |
+ |---|---|
+ | 1.9 | 3.9 |
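The WER values above are word-level edit distances between reference and hypothesis, normalized by the reference word count; the snippet delegates this to `jiwer`, but the underlying computation can be sketched in pure Python (the example sentences are illustrative):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits turning the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

# one inserted word over a three-word reference
print(word_error_rate("THE CAT SAT", "THE CAT SAT DOWN"))  # 1/3 ≈ 0.333
```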
config.json ADDED
@@ -0,0 +1,109 @@
+ {
+   "_name_or_path": "wav2vec2-large-960h-lv60-self",
+   "activation_dropout": 0.1,
+   "adapter_kernel_size": 3,
+   "adapter_stride": 2,
+   "add_adapter": false,
+   "apply_spec_augment": true,
+   "architectures": [
+     "Wav2Vec2ForCTC"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": 1,
+   "classifier_proj_size": 256,
+   "codevector_dim": 256,
+   "contrastive_logits_temperature": 0.1,
+   "conv_bias": true,
+   "conv_dim": [
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512
+   ],
+   "conv_kernel": [
+     10,
+     3,
+     3,
+     3,
+     3,
+     2,
+     2
+   ],
+   "conv_stride": [
+     5,
+     2,
+     2,
+     2,
+     2,
+     2,
+     2
+   ],
+   "ctc_loss_reduction": "mean",
+   "ctc_zero_infinity": false,
+   "diversity_loss_weight": 0.1,
+   "do_stable_layer_norm": true,
+   "eos_token_id": 2,
+   "feat_extract_activation": "gelu",
+   "feat_extract_dropout": 0.0,
+   "feat_extract_norm": "layer",
+   "feat_proj_dropout": 0.1,
+   "feat_quantizer_dropout": 0.0,
+   "final_dropout": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout": 0.1,
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-05,
+   "layerdrop": 0.1,
+   "mask_feature_length": 10,
+   "mask_feature_min_masks": 0,
+   "mask_feature_prob": 0.0,
+   "mask_time_length": 10,
+   "mask_time_min_masks": 2,
+   "mask_time_prob": 0.05,
+   "model_type": "wav2vec2",
+   "num_adapter_layers": 3,
+   "num_attention_heads": 16,
+   "num_codevector_groups": 2,
+   "num_codevectors_per_group": 320,
+   "num_conv_pos_embedding_groups": 16,
+   "num_conv_pos_embeddings": 128,
+   "num_feat_extract_layers": 7,
+   "num_hidden_layers": 24,
+   "num_negatives": 100,
+   "output_hidden_size": 1024,
+   "pad_token_id": 0,
+   "proj_codevector_dim": 256,
+   "tdnn_dilation": [
+     1,
+     2,
+     3,
+     1,
+     1
+   ],
+   "tdnn_dim": [
+     512,
+     512,
+     512,
+     512,
+     1500
+   ],
+   "tdnn_kernel": [
+     5,
+     3,
+     3,
+     1,
+     1
+   ],
+   "torch_dtype": "float32",
+   "transformers_version": "4.20.1",
+   "use_weighted_layer_sum": false,
+   "vocab_size": 32,
+   "xvector_output_dim": 512
+ }
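The `conv_stride` values in this config determine the encoder's frame rate: the convolutional feature extractor downsamples the waveform by the product of the strides. A quick check of that arithmetic (values copied from `config.json`, sampling rate from the preprocessor config):

```python
from math import prod

conv_stride = [5, 2, 2, 2, 2, 2, 2]   # "conv_stride" from config.json
sampling_rate = 16000                 # "sampling_rate" from preprocessor_config.json

total_downsampling = prod(conv_stride)
frame_ms = total_downsampling / sampling_rate * 1000
print(total_downsampling, frame_ms)   # 320 samples per frame -> 20.0 ms
```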
feature_extractor_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "do_normalize": true,
+   "feature_dim": 1,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "return_attention_mask": true,
+   "sampling_rate": 16000
+ }
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90568e6185400541adead27c34d550df8fde3d35515c314fae28eaabbfe166a1
+ size 1261901472
preprocessor_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "do_normalize": true,
+   "feature_extractor_type": "Wav2Vec2FeatureExtractor",
+   "feature_size": 1,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "return_attention_mask": true,
+   "sampling_rate": 16000
+ }
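`"do_normalize": true` means the feature extractor zero-means and unit-variance-scales each utterance before it reaches the model. A minimal sketch of that per-utterance normalization (pure NumPy; the small epsilon guard against division by zero mirrors common practice, and its exact value here is an assumption):

```python
import numpy as np

def normalize(waveform, eps=1e-7):
    """Per-utterance zero-mean, unit-variance scaling, as enabled by do_normalize."""
    waveform = np.asarray(waveform, dtype=np.float64)
    return (waveform - waveform.mean()) / np.sqrt(waveform.var() + eps)

x = normalize(np.array([0.1, 0.5, -0.3, 0.2]))
print(round(x.mean(), 6), round(x.var(), 4))  # ~0.0 and ~1.0
```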
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a610bdeae041c42e8c80fa6334e8e73a236b134a5dfc1c2abcd6371f8f6381e
+ size 1262024049
runs/Jul12_11-22-39_AS-PF1W8A2S-L/1657605160.2763412/events.out.tfevents.1657605160.AS-PF1W8A2S-L.23380.1 ADDED
Binary file (5.36 kB)
runs/Jul12_11-22-39_AS-PF1W8A2S-L/events.out.tfevents.1657605160.AS-PF1W8A2S-L.23380.0 ADDED
Binary file (5.41 kB)
runs/Jul12_11-37-37_AS-PF1W8A2S-L/1657606058.3591454/events.out.tfevents.1657606058.AS-PF1W8A2S-L.23380.3 ADDED
Binary file (5.36 kB)
runs/Jul12_11-37-37_AS-PF1W8A2S-L/events.out.tfevents.1657606058.AS-PF1W8A2S-L.23380.2 ADDED
Binary file (5.44 kB)
runs/Jul12_11-46-05_AS-PF1W8A2S-L/1657606566.9514592/events.out.tfevents.1657606566.AS-PF1W8A2S-L.23380.5 ADDED
Binary file (5.36 kB)
runs/Jul12_11-46-05_AS-PF1W8A2S-L/events.out.tfevents.1657606566.AS-PF1W8A2S-L.23380.4 ADDED
Binary file (5.44 kB)
runs/Jul12_13-36-04_AS-PF1W8A2S-L/1657613165.099182/events.out.tfevents.1657613165.AS-PF1W8A2S-L.23380.7 ADDED
Binary file (5.36 kB)
runs/Jul12_13-36-04_AS-PF1W8A2S-L/events.out.tfevents.1657613165.AS-PF1W8A2S-L.23380.6 ADDED
Binary file (5.44 kB)
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:924217bb609535355134d3da00d37c747177c1366f8a5f296bb4822942cb6add
+ size 1262396960
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": false, "return_attention_mask": true, "do_normalize": true}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4aeadf0b0fdd30a9df8cc8916a4d6446651a5c464b283ca3dfd0315d7807bdf9
+ size 3311
vocab.json ADDED
@@ -0,0 +1 @@
+ {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6, "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14, "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21, "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28, "J": 29, "Q": 30, "Z": 31}
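With this character vocabulary, greedy CTC decoding (the `argmax` + `batch_decode` step in the README) collapses repeated ids, drops the `<pad>` blank (id 0), and maps `|` back to a word boundary. A minimal sketch of that decode step (the example id sequence is illustrative):

```python
# vocabulary copied from vocab.json
vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6,
         "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14,
         "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21,
         "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28,
         "J": 29, "Q": 30, "Z": 31}
id_to_token = {i: t for t, i in vocab.items()}

def ctc_greedy_decode(ids, blank=0):
    """Collapse repeated ids, drop blanks, map ids to characters, '|' to space."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank:
            out.append(id_to_token[i])
        prev = i
    return "".join(out).replace("|", " ")

# H H <pad> I | N O O  ->  "HI" + space + "NO"
print(ctc_greedy_decode([11, 11, 0, 10, 4, 9, 8, 8]))  # HI NO
```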