from pathlib import Path
# Directory where model requests are stored
DIR_OUTPUT_REQUESTS = Path("requested_models")
# Directory where pending evaluation requests are tracked
EVAL_REQUESTS_PATH = Path("eval_requests")
##########################
# Text definitions #
##########################
banner_url = "https://cdn-thumbnails.huggingface.co/social-thumbnails/spaces/k2-fsa/automatic-speech-recognition.png"
BANNER = f'<div style="display: flex; justify-content: space-around;"><img src="{banner_url}" alt="Banner" style="width: 40vw; min-width: 300px; max-width: 600px;"> </div>'
INTRODUCTION_TEXT = """
πŸ“ The πŸ€— **Persian Automatic Speech Recognition (ASR) Leaderboard** serves as an authoritative ranking of speech recognition models hosted on the Hugging Face Hub, evaluated using multiple Persian speech datasets.
We report two key performance metrics: [Word Error Rate (WER)](https://huggingface.co/spaces/evaluate-metric/wer) and [Character Error Rate (CER)](https://huggingface.co/spaces/evaluate-metric/cer), where lower scores indicate better performance.
The leaderboard primarily ranks models based on WER, from lowest to highest. You can refer to the πŸ“ˆ **Metrics** tab for a detailed explanation of how these models are evaluated.
If there is a model you'd like to see ranked that is not listed here, you can submit a request for evaluation by following the instructions in the "Request a Model" tab βœ‰οΈβœ¨.
This leaderboard is intended to provide a comparative analysis of Persian ASR models based on their ability to recognize speech in various Persian dialects and settings.
### Persian ASR Model Rankings
Below is a list of models currently ranked on the Persian ASR Leaderboard. Each model has been evaluated across multiple Persian speech datasets to provide an accurate comparison based on their performance in recognizing Persian speech.
1. **navidved/Goya-v1**
A high-performing ASR model with particularly strong results on the Persian ASR Test Set. It shows a low WER and CER across various datasets, making it one of the top choices for Persian speech recognition.
2. **openai/whisper-large-v3**
This model performs reasonably well on the ASR Farsi YouTube dataset, though it struggles more on the Persian ASR Test Set, indicating that it may be better suited for more casual or non-technical speech environments.
3. **ghofrani/xls-r-1b-fa-cv8**
With balanced performance across all datasets, this model offers decent accuracy for both word and character recognition but faces challenges on more controlled datasets like the Persian ASR Test Set.
4. **jonatasgrosman/wav2vec2-large-xlsr-53-persian**
A reliable ASR model that performs well on the Common Voice dataset but sees reduced accuracy in the more challenging Persian ASR Test Set and YouTube data. Suitable for more common conversational speech.
5. **m3hrdadfi/wav2vec2-large-xlsr-persian-shemo**
This model is better suited to informal contexts and shows higher WER and CER values across all datasets; it may struggle with more complex or structured speech recognition tasks.
6. **openai/whisper-large-v2**
With the highest WER and CER across all datasets, this model underperforms in Persian speech recognition tasks, particularly on more difficult datasets like the Persian ASR Test Set.
"""
CITATION_TEXT = """@misc{persian-asr-leaderboard,
title = {Persian Automatic Speech Recognition Leaderboard},
author = {Navid},
year = 2024,
publisher = {Hugging Face},
howpublished = "\\url{https://huggingface.co/spaces/your-username/persian_asr_leaderboard}"
}
"""
METRICS_TAB_TEXT = """
# Evaluation Metrics and Datasets
## Metrics
We employ the following metrics to evaluate the performance of Automatic Speech Recognition (ASR) models:
- **Word Error Rate (WER)**: WER quantifies the proportion of incorrectly predicted words in a transcription. A lower WER reflects higher accuracy.
- **Character Error Rate (CER)**: CER measures errors at the character level, providing a more granular view of transcription accuracy, especially in morphologically rich languages such as Persian.
Both metrics are widely used in ASR evaluation, offering a comprehensive view of model performance.
## Datasets
The models on the Persian ASR Leaderboard are evaluated using a diverse range of datasets to ensure robust performance across different speech conditions:
1. **Persian Common Voice (Mozilla)**
Available [here](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0), this dataset is part of the broader Common Voice project and features speech data from various speakers, accents, and environments. It serves as a representative benchmark for Persian ASR.
2. **ASR Farsi YouTube Chunked 10 Seconds**
This dataset, available on Hugging Face [here](https://huggingface.co/datasets/pourmand1376/asr-farsi-youtube-chunked-10-seconds), consists of transcribed speech from Persian YouTube videos, split into 10-second segments. It introduces variability in audio quality and speaker demographics, adding to the challenge of accurate recognition.
3. **Persian-ASR-Test-Set (Private)**
This private dataset is designed for in-depth model testing and evaluation. It contains curated, real-world Persian speech data from various contexts and speaker backgrounds. Access to this dataset is restricted, ensuring models are evaluated on a controlled, high-quality speech corpus.
## How to Submit Your Model for Evaluation
To request that a model be included on this leaderboard, please submit its name in the following format: `username/model_name`. Models should be available on the Hugging Face Hub for public access.
Simply navigate to the "Request a Model" tab, enter the details, and your model will be evaluated at the next available opportunity.
"""
print("Content prepared for the Persian ASR leaderboard.")
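A quick sanity check on the `username/model_name` submission format described in the Metrics tab text might look like the following; the helper name and regex pattern are illustrative assumptions, not part of this Space's actual code:

```python
import re

# Hypothetical validator for the `username/model_name` submission format.
MODEL_ID_PATTERN = re.compile(r"^[\w.-]+/[\w.-]+$")

def looks_like_model_id(model_id: str) -> bool:
    """Return True if the string matches the `username/model_name` shape."""
    return MODEL_ID_PATTERN.match(model_id) is not None
```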