---
license: mit
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- medical
- asr
- entity-cer
- benchmark
size_categories:
- n<1K
---
# EKA Hard — Medical ASR Benchmark

Entity-aware medical ASR benchmark — 50 hard rows from Indian-accented clinical speech.

Prepared by Trelis Research. Watch more on YouTube or inquire about our custom voice AI (ASR/TTS) services here.
## Source
Derived from ekacare/eka-medical-asr-evaluation-dataset (3,619 EN rows, MIT license). Real clinical speech from 57 speakers across 4 Indian medical colleges, 16kHz mono.
## Preparation
- Filter: audio ≥ 2s, text ≥ 20 chars
- Gemini Flash entity tagging (6 medical categories)
- Keep rows with ≥ 1 entity
- 3-model difficulty filter (whisper-large-v3, canary-1b-v2, Voxtral-Mini) with whisper-english normalization
- Top-50 by median entity CER
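The ranking step above keys on per-entity character error rate. A minimal sketch of how entity CER can be computed — Levenshtein edit distance over the entity string divided by the reference length. The exact alignment of ground-truth entities to spans in each model's hypothesis is an implementation detail not specified here:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def entity_cer(reference: str, hypothesis: str) -> float:
    """CER of a hypothesized entity string against the reference entity."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return levenshtein(reference, hypothesis) / len(reference)
```

Each row's `median_entity_cer` is then the median of these scores across the three difficulty-filter models.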
## Entity categories
- drug — drug or medication names (brand or INN)
- condition — diagnoses, diseases, syndromes, disorders
- procedure — surgical, diagnostic, or therapeutic procedures
- anatomy — anatomical structures, organs, body regions
- biomarker — lab tests, biomarkers, genes, proteins, molecular markers
- organisation — hospitals, regulatory bodies, pharmaceutical companies
## Columns
- `audio` — 16kHz WAV
- `text` — ground-truth transcript (human-annotated)
- `entities` — JSON array of tagged medical entities with `text`, `category`, `char_start`, `char_end`
- `difficulty_rank` — 1 = hardest
- `median_entity_cer` — median entity CER across the 3 difficulty-filter models
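A sketch of reading the `entities` column from one row. The row below is a fabricated example for illustration (real rows come from the loaded dataset, and `entities` may arrive as a JSON string or an already-parsed list depending on the loader):

```python
import json

# Hypothetical example row mirroring the column layout above.
row = {
    "text": "Patient was started on metformin for type 2 diabetes.",
    "entities": json.dumps([
        {"text": "metformin", "category": "drug",
         "char_start": 23, "char_end": 32},
        {"text": "type 2 diabetes", "category": "condition",
         "char_start": 37, "char_end": 52},
    ]),
}

entities = json.loads(row["entities"])
for e in entities:
    # char_start/char_end index into the ground-truth transcript.
    span = row["text"][e["char_start"]:e["char_end"]]
    assert span == e["text"]
```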
## Leaderboard (16 models, sorted by Entity CER)
| # | Model | WER | CER | Entity CER | Results |
|---|---|---|---|---|---|
| 1 | gemini-2.5-pro | 0.150 | 0.078 | 0.210 | results |
| 2 | scribe-v2 | 0.273 | 0.154 | 0.279 | results |
| 3 | parakeet-tdt-0.6b-v3 | 0.376 | 0.206 | 0.309 | results |
| 4 | ursa-2-enhanced | 0.341 | 0.237 | 0.314 | results |
| 5 | universal-3-pro | 0.434 | 0.337 | 0.353 | results |
| 6 | nova-3 | 0.449 | 0.291 | 0.387 | results |
| 7 | canary-1b-v2 | 0.398 | 0.224 | 0.392 | results |
| 8 | whisper-large-v3-turbo | 0.351 | 0.216 | 0.394 | results |
| 9 | whisper-v3 (fireworks) | 0.439 | 0.268 | 0.414 | results |
| 10 | Voxtral-Mini-3B-2507 | 0.439 | 0.295 | 0.426 | results |
| 11 | MultiMed-ST (whisper-small-en) | 0.491 | 0.351 | 0.450 | results |
| 12 | whisper-base | 1.268 | 0.789 | 0.472 | results |
| 13 | medasr | 0.627 | 0.453 | 0.478 | results |
| 14 | whisper-tiny | 1.398 | 0.780 | 0.572 | results |
| 15 | whisper-large-v3 | 1.060 | 0.569 | 0.757 | results |
| 16 | whisper-small | 5.201 | 2.782 | 0.946 | results |
Evaluated with Trelis Studio, using whisper-english normalization.
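For intuition, a deliberately simplified stand-in for the normalization step applied before scoring. This is not the actual whisper-english normalizer (which also handles abbreviations, number spelling, and other English-specific rules); it shows only the common core of lowercasing, punctuation stripping, and whitespace collapsing:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse runs of whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()
```

Normalizing both reference and hypothesis before computing WER/CER keeps the metrics from penalizing purely orthographic differences.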