Commit 3576421 (parent: a7cc694): Update README.md

README.md:

library_name: transformers
---

# Malwhisper-v1-small

This model is a version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) fine-tuned on the [IMaSC dataset](https://www.kaggle.com/datasets/thennal/imasc).

## About Dataset

IMaSC is a Malayalam text and speech corpus made available by ICFOSS for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
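As a rough sanity check on the corpus statistics above (34,473 pairs, roughly 50 hours), the average utterance length works out to about five seconds:

```python
pairs = 34_473          # text-audio pairs in IMaSC
total_hours = 50        # approximate total audio duration
avg_seconds = total_hours * 3600 / pairs
print(f"average utterance length: {avg_seconds:.1f} s")  # about 5.2 s
```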
## Training

- [Script used for training](https://github.com/kurianbenoy/Keyword_generator_project/blob/main/Whisper_IMASC_final_e2eofficerun.ipynb)
- [Training run](https://wandb.ai/hello34/wandb_whisper_e2e/runs/q2xlvbw5)
- [Experiment tracking with Weights and Biases](https://wandb.ai/hello34/wandb_whisper_e2e)
- GPU used: A100 (80 GB)
- Training time: 16 hours
- This project was built with an A100 80 GB GPU provided by [E2E during their open hack day](https://www.eventbrite.com/e/open-hack-day-tickets-783582435157)

## Evaluation

The fine-tuned model was evaluated on the following dataset:

**Mozilla CommonVoice 11.0 dataset (Malayalam subset):**

- WER - 70.49
- CER - 17.0
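The WER and CER figures above are word- and character-level edit distances expressed as percentages of the reference length. The card does not state which evaluation script produced them, so the following is only a minimal, dependency-free sketch of how such scores are typically computed:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two token sequences,
    # computed with a single rolling row of the DP table.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        diag, row[0] = row[0], i
        for j, h in enumerate(hyp, 1):
            diag, row[j] = row[j], min(
                row[j] + 1,          # deletion
                row[j - 1] + 1,      # insertion
                diag + (r != h),     # substitution (free if tokens match)
            )
    return row[-1]

def wer(reference, hypothesis):
    # word error rate: edit distance over words, as a percentage
    ref_words = reference.split()
    return 100 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # character error rate: edit distance over characters (spaces removed)
    ref_chars = list(reference.replace(" ", ""))
    hyp_chars = list(hypothesis.replace(" ", ""))
    return 100 * edit_distance(ref_chars, hyp_chars) / len(ref_chars)

print(wer("hello world again", "hello word again"))  # one word wrong out of three
```

Note that common toolkits such as `jiwer` normalise the text before scoring, so reported figures can differ slightly depending on the evaluation setup.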