wasertech committed on
Commit
402bc1f
1 Parent(s): 8fd660c

Update constants.py

Files changed (1)
  1. constants.py +3 -3
constants.py CHANGED
@@ -110,9 +110,9 @@ Columns `Model`, `RTF`, and `Average WER` were sourced from [hf-audio/open_asr_l
 Models are sorted by consistency in their results across test sets (in increasing order of the absolute delta between average WER and CommonVoice WER).
 
 ### Results
-The CommonVoice Test provides a Word Error Rate (WER) within a 20-point margin of the average WER.
+The CommonVoice test set yields a Word Error Rate (WER) within a 20-point margin of the average WER. While not perfect, this indicates that CommonVoice can be a useful tool for quickly identifying a suitable ASR model for a wide range of languages in a programmatic manner. However, it is not sufficient as the sole criterion for choosing the most appropriate architecture; further considerations may be needed depending on the specific requirements of your ASR application.
 
-While not perfect, this indicates that CommonVoice can be a useful tool for quickly identifying a suitable ASR model for a wide range of languages in a programmatic manner. However, it's important to note that it is not sufficient as the sole criterion for choosing the most appropriate architecture. Further considerations may be needed depending on the specific requirements of your ASR application.
+Moreover, selecting the model with the lowest WER on CommonVoice aligns with choosing the model with the lowest average WER, which proves effective for ranking the best-performing models with precision. However, as the average WER increases, the spread of results becomes more pronounced, which can make it harder to reliably identify the worst-performing models. The size of the CommonVoice test split for a given language is a crucial factor here. This highlights the need for a nuanced approach to ASR model selection that considers various factors, including dataset characteristics, to ensure a comprehensive evaluation of ASR model performance.
 
-Furthermore, it's worth noting that selecting the model with the lowest WER on CommonVoice aligns with choosing the model based on the lowest average WER. This approach proves effective for ranking the best-performing models with precision. However, it's essential to acknowledge that as the average WER increases, the spread of results becomes more pronounced. This can pose challenges in reliably identifying the worst-performing models. The test split size of CommonVoice for a given language is a crucial factor in this context. This insight highlights the need for a nuanced approach to ASR model selection, considering various factors, including dataset characteristics, to ensure a comprehensive evaluation of ASR model performance.
+Additionally, it has been brought to our attention that Nvidia's models, trained using NeMo with custom splits of common datasets, including Common Voice, may have had an advantage due to familiarity with parts of the Common Voice test set. This could explain their strong performance in the results. Transparency in model training and dataset usage is crucial for fair comparisons in the ASR field and for ensuring that results reflect real-world scenarios.
 """