bond005 committed
Commit b3a0f7d · 1 Parent(s): c386485
Update README.md

Files changed (1): README.md (+73, -1)
README.md CHANGED

# ruT5-ASR

The model was trained by [bond005](https://research.nsu.ru/en/persons/ibondarenko) to correct errors in ASR output (in particular, the output of [Wav2Vec2-Large-Ru-Golos](https://huggingface.co/bond005/wav2vec2-large-ru-golos)). The model is based on [ruT5-base](https://huggingface.co/ai-forever/ruT5-base).

## Usage

To correct ASR outputs, the model can be used as a standalone sequence-to-sequence model as follows:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch


def rescore(text: str, tokenizer: T5Tokenizer,
            model: T5ForConditionalGeneration) -> str:
    if len(text) == 0:  # if the input text is empty, then we return an empty text too
        return ''
    ru_letters = set('аоуыэяеёюибвгдйжзклмнпрстфхцчшщьъ')
    punct = set('.,:/\\?!()[]{};"\'-')
    x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
    max_size = int(x.input_ids.shape[1] * 1.5 + 10)
    min_size = 3
    if x.input_ids.shape[1] <= min_size:
        return text  # we don't rescore a very short text
    out = model.generate(**x, do_sample=False, num_beams=5,
                         max_length=max_size, min_length=min_size)
    res = tokenizer.decode(out[0], skip_special_tokens=True).lower().strip()
    res = ' '.join(res.split())
    # keep only Russian letters, replacing whitespace and punctuation with single spaces
    postprocessed = ''
    for cur in res:
        if cur.isspace() or (cur in punct):
            postprocessed += ' '
        elif cur in ru_letters:
            postprocessed += cur
    return (' '.join(postprocessed.strip().split())).replace('ё', 'е')


# load model and tokenizer
tokenizer_for_rescoring = T5Tokenizer.from_pretrained('bond005/ruT5-ASR')
model_for_rescoring = T5ForConditionalGeneration.from_pretrained('bond005/ruT5-ASR')
if torch.cuda.is_available():
    model_for_rescoring = model_for_rescoring.cuda()

input_examples = [
    'уласны в москве интерне только в большом году что лепровели',
    'мороз и солнце день чудесный',
    'нейро сети эта харошо',
    'да'
]

for src in input_examples:
    rescored = rescore(src, tokenizer_for_rescoring, model_for_rescoring)
    print(f'{src} -> {rescored}')
```

```text
уласны в москве интерне только в большом году что лепровели -> у нас в москве интернет только в прошлом году что ли провели
мороз и солнце день чудесный -> мороз и солнце день чудесный
нейро сети эта харошо -> нейросети это хорошо
да -> да
```
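
The usage example above starts from already recognized text. Since the model is intended to correct the output of [Wav2Vec2-Large-Ru-Golos](https://huggingface.co/bond005/wav2vec2-large-ru-golos), the sketch below shows one possible way to chain the acoustic model with the `rescore` function defined above. This is only a sketch, not part of the model card's own recipe: the audio file name, the resampling step, and the greedy CTC decoding are assumptions.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# acoustic model whose outputs this rescorer is meant to correct
processor = Wav2Vec2Processor.from_pretrained('bond005/wav2vec2-large-ru-golos')
acoustic_model = Wav2Vec2ForCTC.from_pretrained('bond005/wav2vec2-large-ru-golos')

# 'test_sound_ru.wav' is a placeholder path; any mono Russian recording can be used
waveform, sample_rate = torchaudio.load('test_sound_ru.wav')
if sample_rate != 16_000:  # Wav2Vec2 models are typically trained on 16 kHz audio
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000,
                   return_tensors='pt', padding=True)
with torch.no_grad():
    logits = acoustic_model(inputs.input_values).logits
recognized = processor.batch_decode(torch.argmax(logits, dim=-1))[0].lower()

# correct the raw recognition result with ruT5-ASR
corrected = rescore(recognized, tokenizer_for_rescoring, model_for_rescoring)
print(f'{recognized} -> {corrected}')
```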

## Evaluation

This model was evaluated on the test subsets of [SberDevices Golos](https://huggingface.co/datasets/SberDevices/Golos), [Common Voice 6.0](https://huggingface.co/datasets/common_voice) (Russian part), and [Russian Librispeech](https://huggingface.co/datasets/bond005/rulibrispeech), but it was trained on the training subset of SberDevices Golos only. The evaluation script for these and other datasets, including Russian Librispeech and SOVA RuDevices, is available on my Kaggle page: https://www.kaggle.com/code/bond005/wav2vec2-t5-ru-eval

*Comparison with "pure" Wav2Vec2-Large-Ru-Golos (WER, %; the better result in each row is shown in bold)*:

| Dataset name        | Pure ASR  | ASR with rescoring |
|---------------------|-----------|--------------------|
| Voxforge Ru         | **27.08** | 40.48              |
| Russian LibriSpeech | **21.87** | 23.77              |
| Sova RuDevices      | 25.41     | **20.13**          |
| Golos Crowd         | 10.14     | **9.42**           |
| Golos Farfield      | 20.35     | **17.99**          |
| CommonVoice Ru      | 18.55     | **11.60**          |
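
The WER values above come from the Kaggle notebook linked in this section. As a rough illustration only, the two columns can be compared with the third-party `jiwer` package; the choice of library and the toy reference/hypothesis pairs (taken from the usage example above) are assumptions and do not reproduce the table:

```python
import jiwer  # pip install jiwer

# toy pairs built from the usage example above; a real evaluation iterates over a whole test split
references = ['нейросети это хорошо', 'мороз и солнце день чудесный']
raw_hypotheses = ['нейро сети эта харошо', 'мороз и солнце день чудесный']
rescored_hypotheses = [rescore(h, tokenizer_for_rescoring, model_for_rescoring)
                       for h in raw_hypotheses]

print('pure ASR WER, %:          ', 100.0 * jiwer.wer(references, raw_hypotheses))
print('ASR with rescoring WER, %:', 100.0 * jiwer.wer(references, rescored_hypotheses))
```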