Commit 7446449 (1 parent: af6d443) by marma: Update README.md

Files changed (1): README.md (+35 -1)
README.md CHANGED
---
# Wav2vec 2.0 large VoxRex Swedish

Additionally pretrained and fine-tuned version of Facebook's [VoxPopuli-sv large](https://huggingface.co/facebook/wav2vec2-large-voxrex) model, using Swedish radio broadcasts, NST, and Common Voice data. Evaluated without a language model, WER on the NST + Common Voice test set (2% of total sentences) is **3.40%**. On the Common Voice test set, WER is **10.72%** directly and **8.71%** with a 4-gram language model (see the decoding sketch at the end of the Usage section).

When using this model, make sure that your speech input is sampled at 16 kHz.

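If your audio is at a different sampling rate, resample it first. Below is a minimal sketch using torchaudio; the file name `audio.wav` is a placeholder, not part of this repo.

```python
import torchaudio
import torchaudio.functional as F

# Load an audio file (the path is a placeholder for your own data)
waveform, sampling_rate = torchaudio.load("audio.wav")

# Resample to the 16 kHz the model expects, if needed
if sampling_rate != 16_000:
    waveform = F.resample(waveform, orig_freq=sampling_rate, new_freq=16_000)
```
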
## Training
This model has been additionally pretrained on 3,500 hours of a mix of Swedish local radio broadcasts, audiobooks, and other audio sources. It has been fine-tuned for 120,000 updates on NST + CommonVoice<!-- and then for an additional 20,000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST + CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]-->.

![WER during training](chart_1.svg "WER")

## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")

# Common Voice clips are 48 kHz; resample to the 16 kHz the model expects
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
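
To sanity-check the model against the WER figures quoted above, you can score greedy transcriptions over the test split. The sketch below reuses `model`, `processor`, and `test_dataset` from the snippet above and uses the `jiwer` package; the text normalization here is an assumption, not the official evaluation setup.

```python
import re

import jiwer
import torch

# Simple normalization (assumption): strip punctuation and lowercase
chars_to_ignore = re.compile(r"[,?.!\-;:\"]")

def normalize(text):
    return chars_to_ignore.sub("", text).lower().strip()

# Greedy-decode the whole test split in batches
def transcribe(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    batch["prediction"] = processor.batch_decode(torch.argmax(logits, dim=-1))
    return batch

test_dataset = test_dataset.map(transcribe, batched=True, batch_size=8)

references = [normalize(s) for s in test_dataset["sentence"]]
hypotheses = [normalize(p) for p in test_dataset["prediction"]]
print("WER: {:.2%}".format(jiwer.wer(references, hypotheses)))
```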
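
The **8.71%** figure uses a 4-gram language model, which is not bundled with this repository. As an illustration only, here is a hedged sketch of LM-fused decoding with the `pyctcdecode` package and a KenLM model; the `4gram.arpa` path is a placeholder and the default decoder settings are assumptions, not the configuration behind the reported number.

```python
import torch
from pyctcdecode import build_ctcdecoder

# Vocabulary tokens ordered by id, matching the logits' last dimension
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# "4gram.arpa" is a placeholder path to a KenLM 4-gram model
decoder = build_ctcdecoder(labels, kenlm_model_path="4gram.arpa")

# Reuse `logits` from the usage example above; decode the first utterance
log_probs = torch.log_softmax(logits[0], dim=-1).numpy()
print(decoder.decode(log_probs))
```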