Overview
This repository provides all the necessary tools to perform automatic speech recognition with an end-to-end system pretrained on the code-switched Tunisian Arabic dialect. The model uses a code-switching approach and can process English, French, and Tunisian Arabic.
Performance
The performance of the model on our released TunSwitch CS dataset is summarized below:
| Dataset | WER (%) | CER (%) |
|---|---|---|
| TunSwitch CS | 29.47 | 12.44 |
More details about the test sets and the conditions leading to this performance can be found in the paper.
Pipeline
The architecture comprises three components:
- A French ASR model pretrained with wav2vec2 on French corpora
- An English ASR model pretrained with wav2vec2 on English corpora
- A custom Tunisian ASR model pretrained using wav2vec on a Tunisian Arabic corpus

All three models process the audio data. The resulting posteriorgrams are then combined and used as input to the Mixer, which produces the final posteriorgrams.
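As an illustration of this fusion step, here is a minimal PyTorch sketch, assuming the three encoders produce time-aligned posteriorgrams and that the Mixer is a small feed-forward network over their concatenation. The class name, dimensions, and hidden size are illustrative and not those of the released model.

```python
# Hypothetical sketch of the posteriorgram fusion step (illustrative only).
import torch
import torch.nn as nn

class Mixer(nn.Module):
    """Combines French, English, and Tunisian posteriorgrams into final posteriorgrams."""

    def __init__(self, fr_dim, en_dim, tn_dim, out_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fr_dim + en_dim + tn_dim, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, fr_post, en_post, tn_post):
        # Each input has shape (batch, time, vocab_size_of_that_recognizer).
        fused = torch.cat([fr_post, en_post, tn_post], dim=-1)
        # Final posteriorgrams, e.g. for CTC decoding.
        return self.net(fused).log_softmax(dim=-1)
```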
Dataset
Part of the audio and text data used to train and test the model (the part we collected) has been released to encourage and support research within the community. Please find the dataset here. This Zenodo record contains labeled and unlabeled Tunisian Arabic audio data, along with textual data for language modelling. The record also contains a 4-gram language model trained with KenLM on the released text data. The .arpa file is called "outdomain.arpa".
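For example, the released "outdomain.arpa" 4-gram model can be plugged into a CTC beam-search decoder such as pyctcdecode. This is only a sketch under the assumption that you have the acoustic model's output vocabulary; the label list and file path below are placeholders.

```python
# Hypothetical sketch: using the released 4-gram LM with a CTC beam-search decoder.
# Requires: pip install pyctcdecode kenlm
from pyctcdecode import build_ctcdecoder

# Placeholder: the label list must match the acoustic model's output vocabulary,
# with "" typically used for the CTC blank token.
labels = ["", "a", "b", "c"]  # ... full character/token inventory of the ASR model

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="outdomain.arpa",  # 4-gram LM from the Zenodo record
)

# `logits` would be the (time, vocab) posteriorgram produced by the acoustic model:
# text = decoder.decode(logits)
```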
Team
Here are the team members who contributed to this project:
Paper
More in-depth details and insights are available in a released preprint. Please find the paper here. If you use or refer to this model, please cite:
@misc{abdallah2023leveraging,
  title={Leveraging Data Collection and Unsupervised Learning for Code-switched Tunisian Arabic Automatic Speech Recognition},
  author={Ahmed Amine Ben Abdallah and Ata Kabboudi and Amir Kanoun and Salah Zaiem},
  year={2023},
  eprint={2309.11327},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
Demo
Here is a working live demo: LINK
Inference
Please refer to the Space demo for easy-to-use inference code.
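As a rough guide, if the model follows SpeechBrain's standard pretrained interface, inference would look like the sketch below. The `source` identifier and the audio file are placeholders, and this particular model may require the custom interface shown in the Space demo.

```python
# Hypothetical sketch, assuming a standard SpeechBrain pretrained interface.
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="<this-model-repo>",               # placeholder: replace with the actual model id
    savedir="pretrained_models/tunisian_asr",  # local cache directory
)
print(asr_model.transcribe_file("example.wav"))  # placeholder audio file
```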
Contact
If you have questions, you can send an email to zaiemsalah@gmail.com.