marma committed
Commit
717c877
2 Parents: 62b52f9, 39f1a23

Merge branch 'main' of https://huggingface.co/KBLab/wav2vec2-large-xlsr-53-swedish into main

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -93,7 +93,7 @@ processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
 model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
 model.to("cuda")
 
-chars_to_ignore_regex = '[,?.!\\-;:"“]'
+chars_to_ignore_regex = '[,?.!\\\\-;:"“]'
 resampler = torchaudio.transforms.Resample(48_000, 16_000)
 
 # Preprocessing the datasets.
@@ -131,4 +131,4 @@ print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) f
 
 ## Training
 
-The [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) dataset was used for training. The [Fairseq](https://github.com/fairseq) scripts were used.
+First, the XLSR model was further pre-trained for 50 epochs on a corpus of 1,000 hours of spoken Swedish from various radio stations. Second, the [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) dataset and [Common Voice](https://commonvoice.mozilla.org/en/datasets) were used for fine-tuning. Lastly, only the Common Voice dataset was used for the final fine-tuning. The [Fairseq](https://github.com/fairseq) scripts were used.
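For reference, a minimal sketch of how the pattern touched by this diff is meant to be used alongside the model, assuming the single-escaped form of the regex is the intended one (the long backslash runs look like escaping artifacts). The silent waveform and the Swedish sample sentence below are stand-ins for real data, not part of the repository:

```python
import re

import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumed intended form of the pattern from the diff: inside a character
# class only the hyphen needs escaping, so '\\' in the Python source yields
# a single backslash and the regex engine sees '\-'.
chars_to_ignore_regex = '[,?.!\\-;:"“]'

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")

# One second of 16 kHz silence stands in for real resampled audio here.
speech = torch.zeros(16_000)
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print("prediction:", processor.batch_decode(pred_ids)[0])

# The regex strips punctuation from reference text before WER/CER scoring.
reference = "Hej, hur mår du? Jag mår bra - tack!"
print("cleaned:", re.sub(chars_to_ignore_regex, "", reference).lower())
```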