---
language:
- ta
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Tamil Large-v2 - Vasista Sai Lodagala
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: google/fleurs
      type: google/fleurs
      config: ta_in
      split: test
    metrics:
    - type: wer
      value: 7.5
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0
      type: mozilla-foundation/common_voice_11_0
      config: ta
      split: test
    metrics:
    - type: wer
      value: 6.61
      name: WER
---

# Whisper Tamil Large-v2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on Tamil data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as part of the Whisper fine-tuning sprint.

**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.

## Usage

To evaluate this model on an entire dataset, the evaluation scripts available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
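
If a quick standalone estimate is enough, a minimal sketch along the following lines computes WER on the FLEURS Tamil test split using the `datasets` and `evaluate` libraries. This is an illustrative assumption rather than the repository's evaluation script; in particular it applies no text normalisation, so the resulting number may differ from the reported WER values.

```python
>>> import torch
>>> from datasets import load_dataset
>>> from evaluate import load
>>> from transformers import pipeline

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> asr = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-large-v2", chunk_length_s=30, device=device)
>>> asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")

>>> # FLEURS Tamil test split; the "transcription" column holds the reference text
>>> fleurs = load_dataset("google/fleurs", "ta_in", split="test")
>>> wer = load("wer")

>>> predictions = [asr(sample["audio"])["text"] for sample in fleurs]
>>> references = [sample["transcription"] for sample in fleurs]
>>> print("WER:", wer.compute(predictions=predictions, references=references))
```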

The same repository also provides scripts for faster inference using whisper-jax.

To transcribe a single audio file with this model, the following code snippet can be used:

```python
>>> import torch
>>> from transformers import pipeline

>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> # run on GPU when available, otherwise fall back to CPU
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-large-v2", chunk_length_s=30, device=device)
>>> # force Tamil transcription so the language is not left to auto-detection
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")

>>> print('Transcription: ', transcribe(audio)["text"])
```

For faster inference of Whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the installation steps described [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax) before using the following code snippet:

```python
>>> from whisper_jax import FlaxWhisperPipline

>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"

>>> # "FlaxWhisperPipline" is the class name as spelled by the whisper-jax library
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-large-v2", batch_size=16)
>>> # force Tamil transcription so the language is not left to auto-detection
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")

>>> print('Transcription: ', transcribe(audio)["text"])
```
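
Note that the first call to a whisper-jax pipeline triggers JIT compilation and is therefore slow; subsequent calls reuse the compiled function and run considerably faster.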

## Training and evaluation data

Training Data: 
  - [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
  - [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
  - [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
  - [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
  - [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
  - Babel ASR Corpus

Evaluation Data: 
  - [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
  - [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
  - [IISc-MILE Test Set](https://www.openslr.org/127/)
  - Babel Test Set

## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.75e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 22000
- training_steps: 52500 (initially set to 76000 steps)
- mixed_precision_training: True
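
As a rough illustration only, the settings above map onto the Hugging Face `Seq2SeqTrainingArguments` API roughly as follows. The actual run used the scripts in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository, so the field names and any defaults implied here are an assumption, not the training script itself.

```python
>>> from transformers import Seq2SeqTrainingArguments

>>> # Hypothetical mapping of the listed hyperparameters onto Trainer arguments;
>>> # "output_dir" is a placeholder and mixed precision is expressed as fp16.
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="./whisper-tamil-large-v2",  # placeholder path
...     learning_rate=0.75e-5,
...     per_device_train_batch_size=8,
...     per_device_eval_batch_size=24,
...     seed=22,
...     optim="adamw_bnb_8bit",
...     lr_scheduler_type="linear",
...     warmup_steps=22000,
...     max_steps=52500,
...     fp16=True,
... )
```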

## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).

The compute resources for this work were funded by the "Bhashini: National Language Translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.