---

language: en
datasets:
- common_voice
- librispeech_asr
- how2
- must-c-v1
- must-c-v2
- europarl
- tedlium
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---


# Fine-tuned Wav2Vec2 large model for English ASR


### Fine-tuning data

| Dataset      | Duration in hours |
|--------------|-------------------|
| Common Voice |              1667 |
| Europarl     |                85 |
| How2         |               356 |
| Librispeech  |               936 |
| MuST-C v1    |               407 |
| MuST-C v2    |               482 |
| Tedlium      |               482 |


### Evaluation results

| Dataset     | Duration in hours | WER w/o LM | WER with LM |
|-------------|-------------------|------------|-------------|
| Librispeech |        5.4        |     2.9    |     1.1     |
|   Tedlium   |        2.6        |     7.9    |     5.4     |

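For reference, WER (word error rate) is the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. A minimal sketch in plain Python (for illustration only; not the evaluation script used above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("there are teams", "there's teams"))  # 2 edits / 3 words ≈ 0.667
```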

### Usage

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1FAhtGvjRdHT4W0KeMdMMlL7sm6Hbe7dv?usp=sharing)

```python
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
import torchaudio
import torch

# Load model & processor
model_name = "nguyenvulebinh/iwslt-asr-wav2vec-large-4500h"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name, filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)

# Load an example audio file (16 kHz)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="tst_2010_sample.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')

# Run inference
output = model(**input_data)

# Transcript without LM (greedy CTC decoding)
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# and of course there's teams that have a lot more tada structures and among the best are recent graduates of kindergarten

# Transcript with LM (beam search decoding)
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
# and of course there are teams that have a lot more ta da structures and among the best are recent graduates of kindergarten
```

### Model Parameters License

The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode


### Contact 

nguyenvulebinh@gmail.com

[![Follow](https://img.shields.io/twitter/follow/nguyenvulebinh?style=social)](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)