---
viewer: true
---

# MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder

## Description:

Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics.

In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end ASR models for the medical domain, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese, together with the corresponding real-world ASR dataset. To the best of our knowledge, *MultiMed* is the first and the largest multilingual medical ASR dataset in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.

Please cite this paper: https://arxiv.org/abs/2404.05659

```bibtex
@inproceedings{VietMed_dataset,
    title     = {VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain},
    author    = {Khai Le-Duc},
    year      = {2024},
    booktitle = {Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
}
```

To load the labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed) and [Papers with Code](https://paperswithcode.com/dataset/vietmed) pages. For the full dataset (labeled data + unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing).

## Limitations:

Since this dataset is human-labeled, 1-2 words at the start or end of a recording might be missing from the transcript.
This is inherent to human-labeled datasets: human annotators cannot reliably catch words spoken in under a second. Forced alignment could mitigate this problem, because machines can "listen" at a resolution of 10-20 ms. However, forced alignment only learns what it is taught by humans, so no transcript is perfect. We will use human-machine collaboration to produce more accurate transcripts in a follow-up paper.

## Contact:

If any links are broken, please contact me so I can fix them!

Thanks to [Phan Phuc](https://www.linkedin.com/in/pphuc/) for the dataset viewer <3

```
Le Duc Khai
University of Toronto, Canada
Email: duckhai.le@mail.utoronto.ca
GitHub: https://github.com/leduckhai
```
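## Usage sketch:

The labeled data linked above can be loaded with the Hugging Face `datasets` library. This is only a minimal sketch: the repo id `leduckhai/VietMed` comes from the link above, but the split name and the column names used below (a `text` transcript column and a decoded `audio` column) are assumptions; check the dataset viewer for the exact schema.

```python
def duration_seconds(audio: dict) -> float:
    """Length of one decoded audio example in seconds.

    Expects the usual `datasets` Audio format:
    {"array": <float samples>, "sampling_rate": <int>}.
    """
    return len(audio["array"]) / audio["sampling_rate"]


if __name__ == "__main__":
    # Requires `pip install datasets` and network access.
    from datasets import load_dataset

    # NOTE: split and column names are assumptions -- verify on the dataset card.
    ds = load_dataset("leduckhai/VietMed", split="train")
    example = ds[0]
    print(example["text"])
    print(f"{duration_seconds(example['audio']):.2f} s")
```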