Patrick von Platen committed
Commit cc9f5a4
1 Parent(s): 316f45c

add all files

README.md ADDED
---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rope-large-960h-ft
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      args: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.96
---

# Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings

[Facebook's Wav2Vec2 Conformer (TODO-add link)]()

Wav2Vec2 Conformer with rotary position embeddings, pretrained and fine-tuned on 960 hours of LibriSpeech 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

[Paper (TODO)](https://arxiv.org/abs/2006.11477)

Authors: ...

**Abstract**

...

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

## Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

# load dummy dataset and read sound files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize (the processor expects 16kHz audio)
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
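
If your audio is stored at a different sampling rate, resample it to 16kHz before calling the processor. A minimal sketch using the `datasets` library's `Audio` casting; `audio.wav` is a hypothetical local file:

```python
from datasets import Dataset, Audio

# cast_column decodes and resamples the file to 16kHz on access
ds = Dataset.from_dict({"audio": ["audio.wav"]}).cast_column("audio", Audio(sampling_rate=16_000))
speech = ds[0]["audio"]["array"]  # float numpy array at 16kHz
```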

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rope-large-960h-ft** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

# use "other" instead of "clean" to evaluate on the test-other split
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

def map_to_pred(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    # batch_decode returns a list with one string per example; keep the single transcription
    batch["transcription"] = processor.batch_decode(predicted_ids)[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 1.96 | 3.98 |
alphabet.json ADDED
{"labels": ["", "<s>", "</s>", "\u2047", " ", "E", "T", "A", "O", "N", "I", "H", "S", "R", "D", "L", "U", "M", "W", "C", "F", "G", "Y", "P", "B", "V", "K", "'", "X", "J", "Q", "Z"], "is_bpe": false}
config.json ADDED
{
  "activation_dropout": 0.1,
  "adapter_kernel_size": 3,
  "adapter_stride": 2,
  "add_adapter": false,
  "apply_spec_augment": true,
  "architectures": ["Wav2Vec2ConformerForCTC"],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "classifier_proj_size": 256,
  "codevector_dim": 768,
  "conformer_conv_dropout": 0.1,
  "contrastive_logits_temperature": 0.1,
  "conv_bias": true,
  "conv_depthwise_kernel_size": 31,
  "conv_dim": [512, 512, 512, 512, 512, 512, 512],
  "conv_kernel": [10, 3, 3, 3, 3, 2, 2],
  "conv_stride": [5, 2, 2, 2, 2, 2, 2],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "diversity_loss_weight": 0.1,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.1,
  "feat_quantizer_dropout": 0.0,
  "final_dropout": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "swish",
  "hidden_dropout": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.0,
  "mask_feature_length": 10,
  "mask_feature_min_masks": 0,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_min_masks": 2,
  "mask_time_prob": 0.05,
  "max_source_positions": 5000,
  "model_type": "wav2vec2-conformer",
  "num_adapter_layers": 3,
  "num_attention_heads": 16,
  "num_codevector_groups": 2,
  "num_codevectors_per_group": 320,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
  "num_negatives": 100,
  "output_hidden_size": 1024,
  "pad_token_id": 0,
  "position_embeddings_type": "rotary",
  "proj_codevector_dim": 768,
  "rotary_embedding_base": 10000,
  "tdnn_dilation": [1, 2, 3, 1, 1],
  "tdnn_dim": [512, 512, 512, 512, 1500],
  "tdnn_kernel": [5, 3, 3, 1, 1],
  "torch_dtype": "float32",
  "transformers_version": "4.19.0.dev0",
  "use_weighted_layer_sum": false,
  "vocab_size": 32,
  "xvector_output_dim": 512
}
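
The configuration confirms the rotary position embeddings (`position_embeddings_type: "rotary"`) and the large architecture (24 layers, hidden size 1024). A minimal sketch for inspecting it programmatically, assuming the repository id used in the README above:

```python
from transformers import Wav2Vec2ConformerConfig

# download and parse config.json from the hub
config = Wav2Vec2ConformerConfig.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
print(config.position_embeddings_type)               # "rotary"
print(config.num_hidden_layers, config.hidden_size)  # 24 1024
```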
language_model/4-gram.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e674d4a61df15bef37cd49183dc4fb087aaa3d7819d0ff8347068a880f033c61
size 3124591979
language_model/attrs.json ADDED
{"alpha": 0.5, "beta": 1.5, "unk_score_offset": -10.0, "score_boundary": true}
language_model/unigrams.txt ADDED
The diff for this file is too large to render.
preprocessor_config.json ADDED
{
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0,
  "processor_class": "Wav2Vec2ProcessorWithLM",
  "return_attention_mask": true,
  "sampling_rate": 16000
}
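
With `do_normalize: true`, the feature extractor normalizes each raw waveform to zero mean and unit variance before it reaches the model. A minimal sketch of the equivalent computation (the small `1e-7` stabilizer is an assumption for numerical safety):

```python
import numpy as np

def zero_mean_unit_var(x: np.ndarray) -> np.ndarray:
    # normalize a raw waveform to zero mean and unit variance
    return (x - x.mean()) / np.sqrt(x.var() + 1e-7)

waveform = np.random.randn(16_000).astype(np.float32)  # 1 second of 16kHz audio
normalized = zero_mean_unit_var(waveform)
```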
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:665a23a077e25751f27d2580b7d3d222d00de5954b752b8661974f33dc005053
size 2373994447
special_tokens_map.json ADDED
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
tokenizer_config.json ADDED
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": false, "word_delimiter_token": "|", "replace_word_delimiter_char": " ", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2Processor"}
vocab.json ADDED
{"<s>": 1, "<pad>": 0, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6, "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14, "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21, "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28, "J": 29, "Q": 30, "Z": 31}