mrfakename committed
Commit f7860cf
1 Parent(s): 6b71cc7

Update app.py

Files changed (1)
  1. app.py +3 -2
app.py CHANGED

```diff
@@ -14,7 +14,6 @@ with open('harvard_sentences.txt') as f:
 ####################################
 # Constants
 ####################################
-BLOG_POST_LINK = '' # <<<<< ----
 AVAILABLE_MODELS = {
     'XTTSv2': 'xtts',
     'WhisperSpeech': 'whisperspeech',
@@ -160,7 +159,9 @@ ABOUT = f"""
 
 The TTS Arena is a project created to evaluate leading speech synthesis models. It is inspired by the [Chatbot Arena](https://chat.lmsys.org/) by LMSys.
 
-For more information, please check out our [blog post]({BLOG_POST_LINK})
+### Motivation
+
+The field of speech synthesis has long lacked an accurate method to measure the quality of different models. Objective metrics like WER (word error rate) are unreliable measures of model quality, and subjective measures such as MOS (mean opinion score) are typically small-scale experiments conducted with few listeners. As a result, these measurements are generally not useful for comparing two models of roughly similar quality. To address these drawbacks, we are inviting the community to rank models in an easy-to-use interface, and opening it up to the public in order to make both the opportunity to rank models, as well as the results, more easily accessible to everyone.
 
 ### Credits
 
```
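The `AVAILABLE_MODELS` dict shown in the first hunk maps UI display names to internal model identifiers. A minimal sketch of how such a mapping might be consumed is below; the dict entries are copied from the diff, while the `resolve_model` helper is a hypothetical illustration, not part of app.py.

```python
# The entries below are copied from the AVAILABLE_MODELS dict in the diff;
# resolve_model is an assumed helper for illustration only.
AVAILABLE_MODELS = {
    'XTTSv2': 'xtts',
    'WhisperSpeech': 'whisperspeech',
}

def resolve_model(display_name: str) -> str:
    """Return the internal model id for a UI display name."""
    try:
        return AVAILABLE_MODELS[display_name]
    except KeyError:
        raise ValueError(f"Unknown model: {display_name!r}")

print(resolve_model('XTTSv2'))  # -> xtts
```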