sunitha98 committed
Commit
ab74ccc
1 Parent(s): 629ae03

update about

Files changed (1)
  1. src/display/about.py +16 -19
src/display/about.py CHANGED
@@ -27,7 +27,7 @@ TITLE = """<h1 align="center" id="space-title">Patronus AI leaderboard</h1>"""
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-This leaderboard evaluates the performance of models on real-world enterprise use cases.
+The Patronus AI leaderboard evaluates the performance of language models on real-world enterprise use cases. We provide 6 benchmarks that cover diverse tasks. Some of our test sets are closed source; the primary motivation behind this is to prevent gaming of the leaderboard by fine-tuning models on our test sets. Validation sets for each of the tasks can be found at https://huggingface.co/PatronusAI.
 """
 
 # Which evaluations are you running? how can people reproduce what you have?
@@ -35,35 +35,32 @@ LLM_BENCHMARKS_TEXT = f"""
 ## How it works
 
 ## Tasks
-1. FinanceBench (Islam, Pranab, et al. "FinanceBench: A New Benchmark for Financial Question Answering."): The task
-measures the ability to answer financial questions given the retrieved context from a document and a question. We do
-not evaluate the retrieval capabilities for this task. We evaluate the accuracy of the answers. The dataset can be
+1. FinanceBench (Islam, Pranab, et al. "FinanceBench: A New Benchmark for Financial Question Answering."): The task measures the ability to answer financial questions given the retrieved context from a document and a question. We do not evaluate the retrieval capabilities for this task; we only evaluate the accuracy of the answers. The dataset can be
 found at https://huggingface.co/datasets/PatronusAI/financebench.
 
-2. Legal Confidentiality: We use a subset of 100 labelled prompts from LegalBench (Guha, et al. LegalBench: A
-Collaboratively Built Benchmark for Measuring Legal Reasoning in \
-Large Language Models) to measure the ability of LLMs to reason over legal causes. The model is prompted to return \
-yes/no as an answer to the question.
+2. Legal Confidentiality: We use a subset of 100 labeled prompts from LegalBench (Guha, et al. LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in \
+Large Language Models) to measure the ability of LLMs to reason over legal clauses. The model is prompted to return yes/no as an answer to the question.
 
-3. Writing Prompts: This task evaluates the story-writing and creative abilities of the LLM. We measure the
-engagingness of the text generated by the LLM.
+3. Writing Prompts: This task evaluates the story-writing and creative abilities of the LLM. We measure the engagingness of the text generated by the LLM. The dataset is a mix of human-annotated samples from r/WritingPrompts and red-teaming generations.
 
-4. Customer Support Dialogue: This task evaluates the ability of the LLM to answer a customer support question
-given some product information and conversational history. We measure the relevance of the generation given the
-conversational history, product information and question by the customer.
+4. Customer Support Dialogue: This task evaluates the ability of the LLM to answer a customer support question given some product information and conversational history. We measure the relevance of the generation given the conversational history, product information and the customer's question.
 
-5. Toxic Prompts: This task evaluates the safety of the model by using prompts that can elicit harmful information from
-LLMs. We measure if the model generates toxic content.
+5. Toxic Prompts: This task evaluates the safety of the model by using prompts that can elicit harmful information from LLMs. We measure whether the model generates toxic content.
 
-6. Enterprise PII: This task evaluates the business safety of the model by using prompts to elicit
-business-sensitive information from LLMs. We measure if the model generates business sensitive information.
+6. Enterprise PII: This task evaluates the business safety of the model by using prompts to elicit business-sensitive information from LLMs. We measure whether the model generates business-sensitive information.
 
-## Reproducibility
-All of our datasets are closed-source. We provide a validation set with 5 examples for each of the tasks.
+## What is Patronus AI?
+
+Patronus AI provides an automated evaluation platform for LLMs. Our platform allows companies to manage evaluation runs, monitor LLMs in production and find edge cases where models fail. We provide auto-generation of adversarial test sets, along with Patronus-managed datasets, to find failure cases.
+
+To learn more about us, visit: https://www.patronus.ai/
+
+To contact us, please reach out at contact@patronus.ai.
 
 
 """
 
+
 EVALUATION_QUEUE_TEXT = """
 ## Some good practices before submitting a model
 
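
For readers who want to inspect the public validation data referenced in the new INTRODUCTION_TEXT, here is a minimal sketch using the Hugging Face `datasets` library. Only the repo id `PatronusAI/financebench` comes from the text above; the split and column names are discovered at runtime rather than assumed.

```python
# Minimal sketch: load the public FinanceBench data referenced in the diff above.
# Assumes `datasets` is installed (pip install datasets); only the repo id
# PatronusAI/financebench is taken from the text -- splits/columns are inspected,
# not hardcoded.
from datasets import load_dataset

ds = load_dataset("PatronusAI/financebench")
print(ds)  # lists the available splits and their row counts

split = next(iter(ds))         # take whichever split the repo exposes first
print(ds[split].column_names)  # question / context / answer style fields
print(ds[split][0])            # inspect one validation example
```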