ttop324 committed
Commit 9e975ee
1 Parent(s): 45ab74b

update model card README.md

Files changed (1)
  1. README.md +31 -133
README.md CHANGED
@@ -1,152 +1,50 @@
  ---
- language: ja
- datasets:
- - common_voice
- metrics:
- - wer
  tags:
- - audio
- - automatic-speech-recognition
- - speech
- - xlsr-fine-tuning-week
- license: apache-2.0
  model-index:
  - name: wav2vec2-live-japanese
-   results:
-   - task:
-       name: Speech Recognition
-       type: automatic-speech-recognition
-     dataset:
-       name: Common Voice Japanese
-       type: common_voice
-       args: ja
-     metrics:
-     - name: Test WER
-       type: wer
-       value: 22.08%
-     - name: Test CER
-       type: cer
-       value: 10.08%
  ---
 
- # wav2vec2-live-japanese
-
- https://github.com/ttop32/wav2vec2-live-japanese-translator
- Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese hiragana using the following datasets:
- - [common_voice](https://huggingface.co/datasets/common_voice)
- - [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut)
- - [CSS10](https://github.com/Kyubyong/css10)
- - [TEDxJP-10K](https://github.com/laboroai/TEDxJP-10K)
- - [JVS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus)
-
- ## Inference
- ```python
- # Usage example
- import torch
- import torchaudio
- from datasets import load_dataset
- from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
-
- model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese")
- processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese")
- test_dataset = load_dataset("common_voice", "ja", split="test")
-
- # Preprocessing the dataset:
- # read the audio files as arrays and resample them to 16 kHz.
- def speech_file_to_array_fn(batch):
-     speech_array, sampling_rate = torchaudio.load(batch["path"])
-     batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy()
-     return batch
-
- test_dataset = test_dataset.map(speech_file_to_array_fn)
- inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
-
- with torch.no_grad():
-     logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
-
- predicted_ids = torch.argmax(logits, dim=-1)
-
- print("Prediction:", processor.batch_decode(predicted_ids))
- print("Reference:", test_dataset[:2]["sentence"])
- ```
 
- ## Evaluation
- ```python
- import torch
- import torchaudio
- from datasets import load_dataset, load_metric
- from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
- import re
- import pykakasi
- import MeCab
-
- wer = load_metric("wer")
- cer = load_metric("cer")
-
- model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese").to("cuda")
- processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese")
- test_dataset = load_dataset("common_voice", "ja", split="test")
-
- chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\�‘、。.!,・―─~「」『』\\\\※\[\]\{\}「」〇?…]'
- wakati = MeCab.Tagger("-Owakati")
- kakasi = pykakasi.kakasi()
- kakasi.setMode("J", "H")  # kanji to hiragana
- kakasi.setMode("K", "H")  # katakana to hiragana
- conv = kakasi.getConverter()
-
- FULLWIDTH_TO_HALFWIDTH = str.maketrans(
-     ' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!゛#$%&()*+、ー。/:;〈=〉?@[]^_‘{|}~',
-     ' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&()*+,-./:;<=>?@[]^_`{|}~',
- )
- def fullwidth_to_halfwidth(s):
-     return s.translate(FULLWIDTH_TO_HALFWIDTH)
-
- def preprocessData(batch):
-     batch["sentence"] = fullwidth_to_halfwidth(batch["sentence"])
-     batch["sentence"] = re.sub(chars_to_ignore_regex, ' ', batch["sentence"]).lower()  # remove special characters
-     batch["sentence"] = wakati.parse(batch["sentence"])  # add spaces between words
-     batch["sentence"] = conv.do(batch["sentence"])  # convert to hiragana
-     batch["sentence"] = " ".join(batch["sentence"].split()) + " "  # collapse multiple spaces
-
-     speech_array, sampling_rate = torchaudio.load(batch["path"])
-     batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy()
-     return batch
-
- test_dataset = test_dataset.map(preprocessData)
-
- # Run batched inference on the preprocessed test set.
- def evaluate(batch):
-     inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
-
-     with torch.no_grad():
-         logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
-
-     pred_ids = torch.argmax(logits, dim=-1)
-     batch["pred_strings"] = processor.batch_decode(pred_ids)
-     return batch
-
- result = test_dataset.map(evaluate, batched=True, batch_size=8)
-
- print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
- print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
- ```
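To make the text normalization in the removed evaluation snippet more concrete, here is a small illustrative example that reuses the `fullwidth_to_halfwidth`, `chars_to_ignore_regex`, `wakati`, and `conv` objects defined above on a single made-up sentence; the exact output depends on the installed MeCab dictionary and pykakasi version.

```python
# Illustrative only: applies the same normalization steps as preprocessData
# to one hypothetical sentence. Assumes the objects from the evaluation
# snippet above (fullwidth_to_halfwidth, chars_to_ignore_regex, wakati, conv).
sample = "私は猫です。"
sample = fullwidth_to_halfwidth(sample)
sample = re.sub(chars_to_ignore_regex, ' ', sample).lower()  # strip punctuation
sample = wakati.parse(sample)   # insert spaces between words with MeCab
sample = conv.do(sample)        # convert kanji/katakana to hiragana
sample = " ".join(sample.split()) + " "
print(sample)  # roughly: "わたし は ねこ です "
```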
  ---
  tags:
+ - generated_from_trainer
  model-index:
  - name: wav2vec2-live-japanese
+   results: []
  ---
 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
 
+ # wav2vec2-live-japanese
 
+ This model was trained from scratch on the None dataset.
 
+ ## Model description
 
+ More information needed
 
+ ## Intended uses & limitations
 
+ More information needed
 
+ ## Training and evaluation data
 
+ More information needed
 
+ ## Training procedure
 
+ ### Training hyperparameters
 
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0003
+ - train_batch_size: 3
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 6
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 50
+ - mixed_precision_training: Native AMP
 
+ ### Framework versions
 
+ - Transformers 4.11.2
+ - Pytorch 1.9.1
+ - Datasets 1.11.0
+ - Tokenizers 0.10.3
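
The hyperparameters listed in the updated card map naturally onto the `transformers` Trainer API. As a rough, illustrative sketch only (the actual training script is not part of this commit, so the output directory and every other name below are assumptions), they could be expressed as:

```python
# Minimal sketch, not the training code used for this commit: shows how the
# hyperparameters listed in the card above would typically be written as
# transformers.TrainingArguments (Transformers 4.11.x).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-live-japanese",  # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,        # effective train batch size: 3 * 2 = 6
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,                            # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
)
```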