skylord committed on
Commit 606f4d8
1 Parent(s): d8187f7

Add model files

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,142 @@
+ ---
+ language: hi
+ datasets:
+ - common_voice
+ - indic tts
+ - iiith
+ metrics:
+ - wer
+ tags:
+ - audio
+ - automatic-speech-recognition
+ - speech
+ - xlsr-fine-tuning-week
+ license: apache-2.0
+ model-index:
+ - name: Hindi XLSR Wav2Vec2 Large 53
+   results:
+   - task:
+       name: Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+     - name: Common Voice hi
+       type: common_voice
+       args: hi
+     - name: Indic IIT (IITM)
+       type: indic
+       args: hi
+     - name: IIITH Indic Dataset
+       type: iiith
+       args: hi
+     metrics:
+     - name: Test WER
+       type: wer
+       value: 19.05
+ ---
+
+ # Wav2Vec2-Large-XLSR-53-Hindi
+
+ Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the following datasets:
+ - [Common Voice](https://huggingface.co/datasets/common_voice),
+ - [Indic TTS - IITM](https://www.iitm.ac.in/donlab/tts/index.php) and
+ - [IIITH - Indic Speech Datasets](http://speech.iiit.ac.in/index.php/research-svl/69.html)
+
+ The Hindi Common Voice data is skewed towards male voices; the other Indic datasets are well balanced.
+
+ Fine-tuning facebook/wav2vec2-large-xlsr-53 on the combined Hindi data for 30 epochs gave a WER of 19.05%.
+ Resuming from those checkpoints and training for another XX epochs: XX.XX% WER.
+
+ When using this model, make sure that your speech input is sampled at 16 kHz.
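+ If your audio is at a different rate, resample it first. A minimal sketch using torchaudio (the helper name and file path are placeholders, not part of this repository):
+
+ ```python
+ import torchaudio
+
+ # Load an audio file at any sample rate and resample it to the
+ # 16 kHz this model expects.
+ def load_speech_16khz(path):
+     speech, sr = torchaudio.load(path)  # (channels, samples)
+     if sr != 16_000:
+         speech = torchaudio.transforms.Resample(sr, 16_000)(speech)
+     return speech.squeeze().numpy()
+ ```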
50
+
51
+ ## Usage
52
+
53
+ The model can be used directly (without a language model) as follows:
54
+
55
+ ```python
56
+ import torch
57
+ import torchaudio
58
+ from datasets import load_dataset
59
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
60
+ test_dataset = load_dataset("common_voice", "hi", split="test")
61
+
62
+ processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
63
+ model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
64
+
65
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
66
+
67
+ # Preprocessing the datasets.
68
+ # We need to read the aduio files as arrays
69
+ def speech_file_to_array_fn(batch):
70
+ speech_array, sampling_rate = torchaudio.load(batch["path"])
71
+ batch["speech"] = resampler(speech_array).squeeze().numpy()
72
+ return batch
73
+
74
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
75
+ inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
76
+
77
+ with torch.no_grad():
78
+ logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
79
+
80
+ predicted_ids = torch.argmax(logits, dim=-1)
81
+ print("Prediction:", processor.batch_decode(predicted_ids))
82
+ print("Reference:", test_dataset["sentence"][:2])
83
+ ```
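+
+ The `torch.argmax` over the logits is greedy CTC decoding; `processor.batch_decode` collapses repeated tokens and strips the CTC blank to produce text. A language model on top would typically reduce the error rate further, but none is required.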
+
+ ## Evaluation
+
+ The model can be evaluated as follows on the Hindi test data of Common Voice.
+
+ ```python
+ import re
+
+ import torch
+ import torchaudio
+ from datasets import load_dataset, load_metric
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ test_dataset = load_dataset("common_voice", "hi", split="test")
+ wer = load_metric("wer")
+
+ processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
+ model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
+ model.to("cuda")
+
+ chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
+ unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]'  # Some unwanted unicode chars
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
+
+ # Preprocessing the dataset: clean the transcripts and read the
+ # audio files as arrays.
+ def speech_file_to_array_fn(batch):
+     batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
+     batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
+     speech_array, sampling_rate = torchaudio.load(batch["path"])
+     batch["speech"] = resampler(speech_array).squeeze().numpy()
+     return batch
+
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
+
+ # Run batched inference over the preprocessed test set.
+ def evaluate(batch):
+     inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
+     with torch.no_grad():
+         logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
+     pred_ids = torch.argmax(logits, dim=-1)
+     batch["pred_strings"] = processor.batch_decode(pred_ids)
+     return batch
+
+ result = test_dataset.map(evaluate, batched=True, batch_size=8)
+ print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+ ```
+
+ **Test Result**: 19.056 %
+
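+ For reference, WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A toy example with the same metric object (the sentences are made up):
+
+ ```python
+ from datasets import load_metric
+
+ wer = load_metric("wer")
+ # One substituted word out of five reference words -> WER = 0.2
+ score = wer.compute(predictions=["मैं घर जा रहा हूँ"], references=["वह घर जा रहा हूँ"])
+ print(score)  # 0.2
+ ```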
+
+ ## Training
+
+ The Common Voice `train` and `validation` splits were used for training, together with the Indic TTS (IITM) and IIITH Indic speech datasets listed above.
+
+ The training script and the Weights & Biases dashboard can be found [here](https://wandb.ai/thinkevolve/huggingface/reports/Project-Hindi-XLSR-Large--Vmlldzo2MTI2MTQ).
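+
+ Combining the corpora for training might look like the following sketch. The file names and the `from_json` conversion are assumptions: the Indic TTS and IIITH corpora would first need to be exported with the same `path`/`sentence` columns as Common Voice.
+
+ ```python
+ from datasets import Dataset, concatenate_datasets, load_dataset
+
+ # Common Voice Hindi train + validation splits, reduced to the two
+ # columns shared by all three corpora.
+ cv = load_dataset("common_voice", "hi", split="train+validation")
+ cv = cv.remove_columns([c for c in cv.column_names if c not in ("path", "sentence")])
+
+ # Hypothetical local exports of the two Indic corpora.
+ indic_tts = Dataset.from_json("indic_tts_hi.json")
+ iiith = Dataset.from_json("iiith_hi.json")
+
+ train_dataset = concatenate_datasets([cv, indic_tts, iiith])
+ ```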
README.md CHANGED
@@ -95,7 +95,7 @@ from datasets import load_dataset, load_metric
  from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
  import re
 
- test_dataset = load_dataset("common_voice", "el", split="test")
+ test_dataset = load_dataset("common_voice", "hi", split="test")
  wer = load_metric("wer")
 
  processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
@@ -103,7 +103,7 @@ model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
  model.to("cuda")
 
  chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
- unicode_ignore_regex = r'[dceMaWpmFui\xa0]' # Some unwanted unicode chars
+ unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
  resampler = torchaudio.transforms.Resample(48_000, 16_000)
 
  # Preprocessing the datasets.
config.json CHANGED
@@ -70,7 +70,7 @@
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
- "pad_token_id": 74,
+ "pad_token_id": 93,
  "transformers_version": "4.5.0.dev0",
- "vocab_size": 75
+ "vocab_size": 94
  }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c42f7b3bd5cb4694035320ae412c100449b231a49a05e2194f8cc844646fc697
- size 1262241303
+ oid sha256:be0fd803682bd53a5e10531a8c98d1d154a4303d6d03d28a07c09b221c0e324e
+ size 1262319255
vocab.json CHANGED
@@ -1 +1 @@
- {"प": 0, "": 1, "": 2, "": 3, "": 4, "": 5, "": 6, "": 7, "": 8, "": 9, "": 10, "": 11, "": 12, "": 13, "": 14, "": 15, "": 16, "": 17, "ि": 18, "": 19, "": 20, "": 21, "": 22, "": 23, "": 24, "": 25, "": 26, "": 27, "": 28, "": 29, "": 30, "": 31, "": 32, "": 33, "": 34, "": 35, "": 36, "": 37, "": 38, "": 39, "": 41, "": 42, "": 43, "": 44, "": 45, "": 46, "": 47, "": 48, "": 49, "": 50, "": 51, "": 52, "": 53, "": 54, "": 55, "": 56, "": 57, "": 58, "": 59, "": 60, "": 61, "": 62, "व": 63, "": 64, "": 65, "": 66, "": 67, "": 68, "": 69, "": 70, "": 71, "": 72, "/": 40, "[UNK]": 73, "[PAD]": 74}
+ {"प": 0, "": 1, "": 2, "": 3, "": 4, "": 5, "?": 6, "": 7, "": 8, "": 9, "": 10, "": 11, "u": 12, "": 13, "r": 14, "": 15, "": 16, "": 17, "": 18, "": 19, "": 20, "": 21, ".": 22, "": 23, "": 24, "": 25, "": 26, "": 27, "F": 28, "": 29, "": 30, "": 31, "ि": 32, "": 33, "!": 34, "": 35, "": 37, "": 38, "": 39, "": 40, "": 41, ",": 42, "": 43, "": 44, "": 45, "": 46, "": 47, "": 48, "": 49, "": 50, "": 51, "": 52, "": 53, "": 54, "": 55, "e": 56, "p": 57, "a": 58, "l": 59, "M": 60, "": 61, "": 62, "व": 63, "": 64, ":": 65, "\"": 66, "'": 67, "": 68, "": 69, "": 70, " ": 71, "": 72, "अ": 73, "क़": 74, "त": 75, "ु": 76, "औ": 77, "m": 78, "श": 79, "च": 80, "़": 81, "ए": 82, "ह": 83, "W": 84, "ग़": 85, "ठ": 86, "ज": 87, "-": 88, "i": 89, "ख़": 90, "म": 91, "/": 36, "[UNK]": 92, "[PAD]": 93}