kaikaidai committed on
Commit ace4c98
1 Parent(s): c29f61a

Improved wording

Files changed (1)
  1. common.py +15 -4
common.py CHANGED
@@ -1,10 +1,8 @@
  # Page Headers
- MAIN_TITLE = "# Judge Arena - Test anonymous LLM judges side-by-side"
- SUBTITLE = "*Free LLM Evals to test your GenAI application.*"
+ MAIN_TITLE = "# Judge Arena - Free LLM Evals to test your GenAI application"

  # How it works section
  HOW_IT_WORKS = """
- # How it works:
  - **Run any form of evaluation:** from simple hallucination detection to qualitative interpretations
  - **Evaluate anything:** coding, analysis, creative writing, math, or general knowledge
  """
@@ -13,7 +11,8 @@ BATTLE_RULES = """
  ## 🤺 Battle Rules:
  - Both AIs stay anonymous - if either reveals its identity, the duel is void
  - Choose the LLM judge that most aligns with your judgement
- - If both score the same - choose the critique that you prefer more!\n
+ - If both score the same - choose the critique that you prefer more!
+ <br><br>
  """

  # CSS Styles
@@ -29,6 +28,18 @@ CSS_STYLES = """
  """

  # Default Eval Prompt
+ EVAL_DESCRIPTION = """
+ ## 📝 Instructions
+ **Precise evaluation criteria leads to more consistent and reliable judgments.** A good evaluation prompt should include the following:
+ - Evaluation criteria
+ - Scoring rubric
+ - (Optional) Examples\n
+
+ **Any variables you define in your prompt using {{double curly braces}} will automatically map to the corresponding input fields under "Sample to evaluate" section on the right.**
+
+ <br><br>
+ """
+
  DEFAULT_EVAL_PROMPT = """You are assessing a chat bot response to a user's input based on the helpfulness of the response.

  Score:
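
The new EVAL_DESCRIPTION text states that variables written in {{double curly braces}} map to the input fields under "Sample to evaluate". As a rough illustration of how that mapping could work, the sketch below uses a simple regex-based substitution scheme; it is not part of this commit, and the function names, regex, and field names ("input", "response") are assumptions.

```python
import re

# Hypothetical sketch (not from this repository): discover {{variable}} names
# in an eval prompt and substitute them with sample fields.
VARIABLE_PATTERN = re.compile(r"\{\{(\w+)\}\}")

def extract_variables(prompt: str) -> list[str]:
    """Return the unique variable names found in a prompt template."""
    seen = []
    for name in VARIABLE_PATTERN.findall(prompt):
        if name not in seen:
            seen.append(name)
    return seen

def render_prompt(prompt: str, inputs: dict[str, str]) -> str:
    """Replace each {{variable}} with the matching sample field, leaving unknown names untouched."""
    return VARIABLE_PATTERN.sub(lambda m: inputs.get(m.group(1), m.group(0)), prompt)

# Example with assumed field names:
template = "User input: {{input}}\nChat bot response: {{response}}"
print(extract_variables(template))  # ['input', 'response']
print(render_prompt(template, {"input": "Hi", "response": "Hello!"}))
```

Under this reading, the variable names extracted from the prompt would drive which input boxes appear on the right-hand side of the arena UI, and the same names key the substitution when the prompt is sent to the judges.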