---
language:
- mt
library_name: nemo
datasets:
- common_voice
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- pytorch
- NeMo
- QuartzNet
- QuartzNet15x5
- maltese
license: cc-by-nc-sa-4.0
model-index:
- name: stt_mt_quartznet15x5_sp_ep255_64h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 11.0 (Test)
      type: mozilla-foundation/common_voice_11_0
      split: test
      args:
        language: mt
    metrics:
    - name: WER
      type: wer
      value: 5
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 11.0 (Dev)
      type: mozilla-foundation/common_voice_11_0
      split: validation
      args:
        language: mt
    metrics:
    - name: WER
      type: wer
      value: 4.89
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: MASRI-TEST Corpus
      type: MLRS/masri_test
      split: test
      args:
        language: mt
    metrics:
    - name: WER
      type: wer
      value: 45.2
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: MASRI-DEV Corpus
      type: MLRS/masri_dev
      split: validation
      args:
        language: mt
    metrics:
    - name: WER
      type: wer
      value: 43.08
---

# stt_mt_quartznet15x5_sp_ep255_64h

**NOTE: This model was trained with NeMo version `nemo-toolkit==1.10.0`.**

"stt_mt_quartznet15x5_sp_ep255_64h" is an acoustic model created with NeMo that is suitable for Automatic Speech Recognition in Maltese.

It is the result of fine-tuning the model ["QuartzNet15x5Base-En.nemo"](https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files) with around 64 hours of Maltese data developed by the MASRI Project at the University of Malta between 2019 and 2021. The 64 hours of data were augmented using speed perturbation at rates of 0.9 and 1.1. Most of the data is available at the MASRI Project homepage: https://www.um.edu.mt/projects/masri/.
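The speed perturbation mentioned above can be illustrated with a minimal NumPy sketch. This is a simplified stand-in for the resampling-based perturbation used in NeMo/Kaldi-style recipes, not the exact augmentation pipeline used for this model; the function name and signature are illustrative:

```python
import numpy as np

def speed_perturb(signal: np.ndarray, rate: float) -> np.ndarray:
    """Resample `signal` so it plays `rate` times faster.

    rate=0.9 slows the audio (longer output); rate=1.1 speeds it up
    (shorter output). Linear interpolation stands in for a proper
    band-limited resampler.
    """
    n_out = int(round(len(signal) / rate))
    # Fractional positions in the original signal for each output sample.
    positions = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of 440 Hz at 16 kHz
slow = speed_perturb(audio, 0.9)  # longer than the original
fast = speed_perturb(audio, 1.1)  # shorter than the original
```

Training on the original audio plus the 0.9x and 1.1x copies effectively triples the amount of training data.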

The specific list of corpora used to fine-tune the model is:

- MASRI-HEADSET v2 (6h39m)
- MASRI-Farfield (9h37m)
- MASRI-Booths (2h27m)
- MASRI-MEP (1h17m)
- MASRI-COMVO (7h29m)
- MASRI-TUBE (13h17m)
- MASRI-MERLIN (25h18m) *Not available at the MASRI Project homepage
The fine-tuning process was performed during October 2022 on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
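A minimal usage sketch, assuming `nemo_toolkit[asr]` (version 1.10.0, as noted above) is installed and a 16 kHz mono file `sample.wav` exists locally; the loading and transcription calls follow the usual NeMo CTC-model conventions:

```python
import nemo.collections.asr as nemo_asr

# Download the checkpoint from the Hugging Face Hub and restore it.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    "carlosdanielhernandezmena/stt_mt_quartznet15x5_sp_ep255_64h"
)

# Transcribe a list of audio files; returns one hypothesis per file.
transcriptions = asr_model.transcribe(["sample.wav"])
print(transcriptions[0])
```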

```bibtex
@misc{mena2022quartznet15x5maltese,
      title={Acoustic Model in Maltese: stt\_mt\_quartznet15x5\_sp\_ep255\_64h.}, 
      author={Hernandez Mena, Carlos Daniel},
      url={https://huggingface.co/carlosdanielhernandezmena/stt_mt_quartznet15x5_sp_ep255_64h},
      year={2022}
}
```

# Acknowledgements

The MASRI Project is funded by the University of Malta Research Fund Awards. We want to thank Merlin Publishers (Malta) for providing the audiobooks used to create the MASRI-MERLIN Corpus.

Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.