gchhablani committed on
Commit 9af4df9
1 Parent(s): 99d6cfb

Update README.md

Files changed (1)
1. README.md +2 -2
README.md CHANGED
@@ -27,7 +27,7 @@ model-index:
 
 # Wav2Vec2-Large-XLSR-53-Marathi
 
- Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marthi using the [OpenSLR SLR64](http://openslr.org/64/) dataset. Note that this data contains only female voices. Please keep this in mind before using the model for your task, although it works very well for male voice too. When using this model, make sure that your speech input is sampled at 16kHz.
+ Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [OpenSLR SLR64](http://openslr.org/64/) dataset. Note that this data contains only female voices. Please keep this in mind before using the model for your task, although it works very well for male voice too. When using this model, make sure that your speech input is sampled at 16kHz.
 
 ## Usage
 
@@ -85,7 +85,7 @@ processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr
 model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr")
 model.to("cuda")
 
- chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\…]'
+ chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“\\\\%\\\\‘\\\\”\\\\�\\\\–\\\\…]'
 
 resampler = torchaudio.transforms.Resample(48_000, 16_000)
 
 # Preprocessing the datasets.
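For context, the regex and resampler lines touched by this commit belong to the model card's preprocessing step. The sketch below shows roughly how they are used; it is not part of the commit itself. The helper names `clean_text` and `load_speech` and the `path` argument are illustrative placeholders, and the regex is written in the single-backslash form from the removed line (the commit only changes how the backslashes are escaped in the README text).

```python
import re

import torchaudio

# Punctuation character class stripped from reference transcripts before evaluation.
# Shown in its single-backslash form; the commit changes only its escaping in the README.
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\…]'

def clean_text(sentence):
    # Illustrative helper: remove ignored punctuation and lowercase the transcript.
    return re.sub(chars_to_ignore_regex, "", sentence).lower()

def load_speech(path):
    # Illustrative helper: `path` is a placeholder for a local audio file.
    # The model expects 16 kHz input, so other rates (e.g. 48 kHz OpenSLR SLR64
    # recordings) are resampled first.
    speech, sampling_rate = torchaudio.load(path)
    if sampling_rate != 16_000:
        resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
        speech = resampler(speech)
    return speech.squeeze().numpy()
```

This mirrors the structure of the evaluation snippet in the model card that the diff context lines (the `Wav2Vec2ForCTC` load and the 48 kHz → 16 kHz `Resample`) come from.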