nouamanetazi (HF staff) committed
Commit adbdf93
1 Parent(s): 493aa68
app/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/__init__.cpython-310.pyc and b/app/__pycache__/__init__.cpython-310.pyc differ
 
app/__pycache__/config.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/config.cpython-310.pyc and b/app/__pycache__/config.cpython-310.pyc differ
 
app/__pycache__/db.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/db.cpython-310.pyc and b/app/__pycache__/db.cpython-310.pyc differ
 
app/__pycache__/init.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/init.cpython-310.pyc and b/app/__pycache__/init.cpython-310.pyc differ
 
app/__pycache__/leaderboard.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/leaderboard.cpython-310.pyc and b/app/__pycache__/leaderboard.cpython-310.pyc differ
 
app/__pycache__/llm.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/llm.cpython-310.pyc and b/app/__pycache__/llm.cpython-310.pyc differ
 
app/__pycache__/messages.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/messages.cpython-310.pyc and b/app/__pycache__/messages.cpython-310.pyc differ
 
app/__pycache__/models.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/models.cpython-310.pyc and b/app/__pycache__/models.cpython-310.pyc differ
 
app/__pycache__/ui.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/ui.cpython-310.pyc and b/app/__pycache__/ui.cpython-310.pyc differ
 
app/__pycache__/ui_battle.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/ui_battle.cpython-310.pyc and b/app/__pycache__/ui_battle.cpython-310.pyc differ
 
app/__pycache__/ui_leaderboard.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/ui_leaderboard.cpython-310.pyc and b/app/__pycache__/ui_leaderboard.cpython-310.pyc differ
 
app/__pycache__/utils.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/utils.cpython-310.pyc and b/app/__pycache__/utils.cpython-310.pyc differ
 
app/__pycache__/vote.cpython-310.pyc CHANGED
Binary files a/app/__pycache__/vote.cpython-310.pyc and b/app/__pycache__/vote.cpython-310.pyc differ
 
app/messages.py CHANGED
@@ -4,25 +4,10 @@ from .config import *
 # Messages #
 ############
 
-MUST_BE_LOGGEDIN = "Please login with Hugging Face to participate in the TTS Arena."
+MUST_BE_LOGGEDIN = "Please login with Hugging Face to participate in the Darija Arena."
 DESCR = """
-# TTS Arena: Benchmarking TTS Models in the Wild
-Vote to help the community find the best available text-to-speech model!
-""".strip()
-BATTLE_INSTR = """
-## Battle
-Choose 2 candidates and vote on which one is better! Currently in beta.
-* Input text (English only) to synthesize audio (or press 🎲 for random text).
-* Listen to the two audio clips, one after the other.
-* Vote on which audio sounds more natural to you.
-"""
-INSTR = """
-## Vote
-* Input text (English only) to synthesize audio (or press 🎲 for random text).
-* Listen to the two audio clips, one after the other.
-* Vote on which audio sounds more natural to you.
-* _Note: Model names are revealed after the vote is cast._
-Note: It may take up to 30 seconds to synthesize audio.
+# Darija Arena: Benchmarking Darija Models in the Wild
+Vote to help the community find the best available Darija model!
 """.strip()
 request = ""
 if SPACE_ID:
@@ -32,39 +17,28 @@ Please [create a Discussion](https://huggingface.co/spaces/{SPACE_ID}/discussion
 """
 ABOUT = f"""
 ## About
-The TTS Arena evaluates leading speech synthesis models. It is inspired by LMsys's [Chatbot Arena](https://chat.lmsys.org/).
+The Darija Arena evaluates leading Darija models. It is inspired by LMsys's [Chatbot Arena](https://chat.lmsys.org/) and [TTS Arena](https://huggingface.co/spaces/TTS-AGI/TTS-Arena).
 ### Motivation
-The field of speech synthesis has long lacked an accurate method to measure the quality of different models. Objective metrics like WER (word error rate) are unreliable measures of model quality, and subjective measures such as MOS (mean opinion score) are typically small-scale experiments conducted with few listeners. As a result, these measurements are generally not useful for comparing two models of roughly similar quality. To address these drawbacks, we are inviting the community to rank models in an easy-to-use interface, and opening it up to the public in order to make both the opportunity to rank models, as well as the results, more easily accessible to everyone.
+The field of Darija language models has lacked a systematic way to evaluate and compare different models. While automated metrics exist, they often fail to capture the nuances of Darija dialect and cultural context. Additionally, existing evaluations are typically limited in scope and may not reflect real-world usage. To address these challenges, we are creating an open platform where the community can directly compare models through natural interactions and vote on which produces better responses. This crowdsourced approach allows for more comprehensive and practical evaluation of Darija language models while making the assessment process transparent and accessible to everyone.
 ### The Arena
-The leaderboard allows a user to enter text, which will be synthesized by two models. After listening to each sample, the user can vote on which model sounds more natural. Due to the risks of human bias and abuse, model names are revealed only after a vote is submitted.
-### Credits
-Thank you to the following individuals who helped make this project possible:
-* VB ([Twitter](https://twitter.com/reach_vb) / [Hugging Face](https://huggingface.co/reach-vb))
-* Clémentine Fourrier ([Twitter](https://twitter.com/clefourrier) / [Hugging Face](https://huggingface.co/clefourrier))
-* Lucain Pouget ([Twitter](https://twitter.com/Wauplin) / [Hugging Face](https://huggingface.co/Wauplin))
-* Yoach Lacombe ([Twitter](https://twitter.com/yoachlacombe) / [Hugging Face](https://huggingface.co/ylacombe))
-* Main Horse ([Twitter](https://twitter.com/main_horse) / [Hugging Face](https://huggingface.co/main-horse))
-* Sanchit Gandhi ([Twitter](https://twitter.com/sanchitgandhi99) / [Hugging Face](https://huggingface.co/sanchit-gandhi))
-* Apolinário Passos ([Twitter](https://twitter.com/multimodalart) / [Hugging Face](https://huggingface.co/multimodalart))
-* Pedro Cuenca ([Twitter](https://twitter.com/pcuenq) / [Hugging Face](https://huggingface.co/pcuenq))
+The arena allows a user to enter a prompt, which will be answered by two language models. After reading each response, the user can vote on which model gave the better answer. Due to the risks of human bias and abuse, model names are revealed only after a vote is submitted.
 {request}
 ### Privacy statement
-We may store text you enter and generated audio. We store a unique ID for each session. You agree that we may collect, share, and/or publish any data you input for research and/or commercial purposes.
+We may store text you enter and generated output. We store a unique ID for each session. You agree that we may collect, share, and/or publish any data you input for research and/or commercial purposes.
 ### License
-Generated audio clips cannot be redistributed and may be used for personal, non-commercial use only.
-Random sentences are sourced from a filtered subset of the [Harvard Sentences](https://www.cs.columbia.edu/~hgs/audio/harvard.html).
+Generated output cannot be redistributed and may be used for personal, non-commercial use only.
 """.strip()
 LDESC = """
 ## 🏆 Leaderboard
-Vote to help the community determine the best language models.
+Vote to help the community determine the best Darija models.
 The leaderboard displays models in descending order based on votes cast by the community.
 Important: In order to help keep results fair, the leaderboard hides results by default until the number of votes passes a threshold.
 Tick the `Show preliminary results` to show models with few votes. Please note that preliminary results may be inaccurate.
 """.strip()
 ABOUT_MD = """
-# 🤖 LLM Arena
+# 🤖 Darija Arena
 
-A platform for comparing and ranking different Large Language Models through human feedback.
+A platform for comparing and ranking different Darija Large Language Models through human feedback.
 
 ## How it works
 
@@ -72,10 +46,6 @@ A platform for comparing and ranking different Large Language Models through hum
 2. **Leaderboard**: See how models rank against each other based on user votes
 3. **Fair Comparison**: Models are randomly selected and anonymized during voting to prevent bias
 
-## Contributing
-
-Want to add a new model? Check out our [GitHub repository](link-to-repo) for instructions.
-
 ## License
 
 This project is licensed under MIT License. Individual models may have their own licenses.
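The ABOUT text added in this commit describes the battle flow: two randomly selected models answer the same prompt under anonymous labels, and names are revealed only after a vote is submitted. A minimal sketch of that selection step, assuming a hypothetical model registry (the names and structure below are illustrative, not taken from this repo's code):

```python
import random

# Hypothetical registry; the real app maintains its own model list.
MODELS = ["model-a", "model-b", "model-c", "model-d"]

def pick_battle_pair(models=MODELS, rng=random):
    """Pick two distinct models and return them under anonymous labels.

    The label-to-model mapping stays server-side; the UI shows only
    "A" and "B" until the user's vote is recorded, which is what
    prevents brand bias from influencing the result.
    """
    first, second = rng.sample(models, 2)  # sampling guarantees distinctness
    return {"A": first, "B": second}

pair = pick_battle_pair()
```

Keeping the mapping hidden until after the vote is the same anti-bias measure the original TTS Arena used.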
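The LDESC message states that the leaderboard sorts models by votes in descending order and hides entries until their vote count passes a threshold, with a `Show preliminary results` toggle. A sketch of that filtering under assumed inputs (the threshold value and the record fields are illustrative; the real app reads these from its own config and database):

```python
VOTE_THRESHOLD = 500  # illustrative value; the real threshold lives in app config

def leaderboard_rows(results, show_preliminary=False, threshold=VOTE_THRESHOLD):
    """Sort models by vote count (descending), hiding low-vote entries.

    Models below the threshold are excluded by default so that
    preliminary, noisy rankings do not mislead readers; passing
    show_preliminary=True corresponds to ticking the checkbox.
    """
    visible = [
        row for row in results
        if show_preliminary or row["votes"] >= threshold
    ]
    return sorted(visible, key=lambda row: row["votes"], reverse=True)

rows = leaderboard_rows(
    [{"model": "m1", "votes": 800}, {"model": "m2", "votes": 40}]
)
# With the defaults, only "m1" clears the threshold and is shown.
```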