Create common.py
common.py
ADDED
@@ -0,0 +1,153 @@
# Page Headers
MAIN_TITLE = "# Judge Arena - Test anonymous LLM judges side-by-side"
SUBTITLE = "*Free LLM Evals to test your GenAI application.*"

# How it works section
HOW_IT_WORKS = """
# How it works:
- **Run any form of evaluation:** from simple hallucination detection to qualitative interpretations
- **Evaluate anything:** coding, analysis, creative writing, math, or general knowledge
"""

BATTLE_RULES = """
## 🤺 Battle Rules:
- Both AIs stay anonymous - if either reveals its identity, the duel is void
- Choose the LLM judge that most aligns with your judgement
- If both give the same score, choose the critique that you prefer!\n
"""

# CSS Styles
CSS_STYLES = """
.prompt-row {
    align-items: flex-start !important;
}
.send-button-row {
    display: flex;
    justify-content: flex-end;
    margin-top: 8px;
}
"""

# Default Eval Prompt
DEFAULT_EVAL_PROMPT = """You are assessing a chatbot response to a user's input based on the helpfulness of the response.\n

Score:

A score of 1 means that the response's answer meets all of the evaluation criteria.

A score of 0 means that the response's answer does not meet all of the evaluation criteria.

Here is the data:\n

[BEGIN DATA]

***

[User Query]: {{input}}

***

[Response]: {{response}}

***

[END DATA]"""

# Default Variable Values
DEFAULT_INPUT = """Which of these animals is least likely to be found in a rainforest?
A) Jaguar
B) Toucan
C) Polar Bear
D) Sloth"""
DEFAULT_RESPONSE = "C) Polar Bear"
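
# Note: illustrative sketch only; it is not used elsewhere in this file. It assumes
# the app fills the {{input}} and {{response}} placeholders in DEFAULT_EVAL_PROMPT
# with plain string substitution; the actual rendering logic lives in the app code,
# and the helper name below is hypothetical.
def render_eval_prompt(prompt: str, input_text: str, response_text: str) -> str:
    """Fill the {{input}} / {{response}} placeholders of an eval prompt template."""
    return prompt.replace("{{input}}", input_text).replace("{{response}}", response_text)

# Example usage: render_eval_prompt(DEFAULT_EVAL_PROMPT, DEFAULT_INPUT, DEFAULT_RESPONSE)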

# Voting Section Header
VOTING_HEADER = """
# Start Voting Now
"""

# Acknowledgements
ACKNOWLEDGEMENTS = """
<br><br><br>
# Acknowledgements

We thank [LMSYS Org](https://lmsys.org/) for their hard work on the Chatbot Arena and fully credit them for the inspiration to build this.

We thank [Clementine Fourrier](https://huggingface.co/clefourrier) and Hugging Face for their guidance and partnership in setting this up.
"""

# Policy Content
POLICY_CONTENT = """
# About Atla

Atla is an applied research organization that trains models as evaluators to capture human preferences. We're a team of researchers, engineers, and operational leaders, with experience spanning a variety of disciplines, all working together to build reliable and understandable AI systems. Our research is informed by our experiences conducting AI safety research at the UK AI Task Force, OpenAI, and the Stanford Existential Risks Initiative.

# Our Mission

By creating advanced evaluation models, we enable AI developers to identify and fix risks, leading to safer, more reliable AI that can be trusted and widely used. Our aim is to surpass the current state-of-the-art evaluation methods by training models specifically for evaluation. AIs will probably become very powerful and perform tasks that are difficult for us to verify. We want to enable humans to oversee AI systems that are solving tasks too difficult for humans to evaluate. We have written more about [our approach to scalable oversight](https://www.atla-ai.com/post/scaling-alignment) on our blog.

# Judge Arena Policy

## Overview

Judge Arena is an open-source platform dedicated to improving the standard of evaluation of generative AI models in their role as judges. Users can run evals and assess anonymized responses from two competing model judges, choosing the better judgement or declaring a tie. This policy outlines our commitments and guidelines to ensure a fair, open, and collaborative environment for both users and model providers.

## Transparency

- **Open-Source**: Judge Arena's code is open-source and available on GitHub. This approach allows anyone to review, replicate, or modify the platform to suit their needs. We use proprietary model provider APIs where available and Together AI's API to serve leading open-source models.
- **Community Engagement**: We actively encourage contributions from the community. Feedback, code contributions, and discussions are welcome to improve the platform's functionality, fairness, and transparency.
- **Methodology**: All processes related to model evaluation, rating calculations, and model selection are openly documented. This transparency ensures that our processes are understandable and reproducible by others.
- **Data Sharing**: Periodically, we will share 20% of the collected evaluation data with the community. This data includes anonymized prompts, model responses, and aggregated evaluation results.

## Model Inclusion Criteria

Judge Arena is specifically designed to assess AI models that function as evaluators (a.k.a. judges), including but not limited to powerful general-purpose models and the latest language models designed for evaluation tasks. Models are eligible for inclusion if they meet the following criteria:

- **Judge Capability**: The model must possess the ability to score AND critique responses, content, or other models' outputs effectively.
- **Adaptable**: The model must be promptable to evaluate in different scoring formats and against different criteria.
- **Accessibility**:
  - **Public API Access**: Models accessible through public APIs without restrictive barriers.
  - **Open-Source Models**: Models with publicly available weights that can be downloaded and run by the community.

## Evaluation Methodology

- **User Participation**: Users run evaluations and select preferred model responses based on quality, relevance, and accuracy, contributing to the model's overall rating.
- **Blind Testing**: All model evaluations are conducted blindly. Users are not informed which model produced which response, to eliminate bias.
- **Data Collection**: We collect sufficient data to ensure statistical significance in our evaluations. We additionally show the 95% confidence interval in the leaderboard to provide a signal of reliability.
- **Anomaly Detection**: We monitor user activity to detect and mitigate anomalous behavior or voting patterns that could skew results.

## Leaderboard Management

- **ELO Ranking System**: Models are ranked on the public leaderboard using an ELO rating system based on aggregated user evaluations. Each model begins with an initial rating of 1500 (as used by the International Chess Federation), and we use a K-factor of 32 to determine the maximum rating adjustment after each evaluation.
- **Minimum Period**: Listed models remain accessible on Judge Arena for a minimum period of two weeks to allow for comprehensive community evaluation.
- **Deprecation Policy**: Models may be removed from the leaderboard if they become inaccessible or are no longer publicly available.

## Privacy and Data Protection

- **Anonymization**: All shared data is anonymized to prevent the identification of individual users.

## Policy Updates and Communication

- **Ongoing Revisions**: This policy may be updated to reflect changes in our practices or in response to community feedback.
- **Notification of Changes**: Policy changes will be communicated to users and stakeholders on this page.

# FAQ

**Isn't this the same as Chatbot Arena?**

- We are big fans of what the LMSYS team have done with Chatbot Arena and fully credit them for the inspiration to develop this. We were looking for a dynamic leaderboard that graded on AI judge capabilities and didn't manage to find one, so we created Judge Arena. This UI is designed especially for evals, matching the format of the model-based eval prompts that you would use in your LLM evaluation / monitoring tool.

\n\n**Why should I trust this leaderboard?**

- We have listed out our efforts to be fully transparent in the policies above. All of the code for this leaderboard is open-source and can be found on our [GitHub](https://github.com/atla-ai/judge-arena).

\n\n**Who funds this effort?**

- Atla currently funds this out of our own pocket. We are looking for API credits (with no strings attached) to support this effort - please get in touch if you or someone you know might be able to help.

\n\n**What is Atla working on?**

- We are training a general-purpose evaluator that you will soon be able to run in this Judge Arena. Our next step will be to open-source a powerful model that the community can use to run fast and accurate evaluations.

## Get in touch
Feel free to email us at [support@atla-ai.com](mailto:support@atla-ai.com) or leave feedback on our [GitHub](https://github.com/atla-ai/judge-arena)!"""
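
# Illustrative sketch only (not part of the app's constants): the ELO update described
# under "Leaderboard Management" in POLICY_CONTENT, using the stated initial rating of
# 1500 and K-factor of 32. Function and parameter names here are hypothetical.
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b); score_a is 1.0 if judge A wins, 0.0 if it loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: two judges start at 1500; judge A wins a vote:
# elo_update(1500, 1500, 1.0) -> (1516.0, 1484.0)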