Update utils.py
utils.py (CHANGED)

@@ -31,25 +31,23 @@ LEADERBOARD_INTRODUCTION = """# MMLU-Pro Leaderboard
 
 Welcome to the MMLU-Pro leaderboard, showcasing the performance of various advanced language models on the MMLU-Pro dataset. The MMLU-Pro dataset is an enhanced version of the original MMLU, specifically engineered to offer a more rigorous and realistic evaluation environment..
 
-
+## What's new about MMLU-Pro
+
+Compared to the original MMLU, there are three major differences:
+
+- The original MMLU dataset only contains 4 options, MMLU-Pro increases it to 10 options. The increase in options will make the evaluation more realistic and challenging. The random guessing will lead to a much lower score.
+- The original MMLU dataset contains mostly knowledge-driven questions without requiring much reasoning. Therefore, PPL results are normally better than CoT. In our dataset, we increase the problem difficulty and integrate more reasoning-focused problems. In MMLU-Pro, CoT can be 20% higher than PPL.
+- Due to the increase of options, we found that the model performance becomes more robust. For example, Llama-2-7B performance variance on MMLU-Pro is within 1% with several different prompts. In contrast, the performance variance on original MMLU can be as huge as 4-5%.
 
 For detailed information about the dataset, visit our page on Hugging Face: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro. If you are interested in replicating these results or wish to evaluate your models using our dataset, access our evaluation scripts available on GitHub: https://github.com/TIGER-AI-Lab/MMLU-Pro.
+Below you can find the accuracies of different models tested on this dataset.
 """
 
 TABLE_INTRODUCTION = """
 """
 
 LEADERBOARD_INFO = """
-
-## 1. What's new about MMLU-Pro
-
-Compared to the original MMLU, there are three major differences:
-
-- The original MMLU dataset only contains 4 options, MMLU-Pro increases it to 10 options. The increase in options will make the evaluation more realistic and challenging. The random guessing will lead to a much lower score.
-- The original MMLU dataset contains mostly knowledge-driven questions without requiring much reasoning. Therefore, PPL results are normally better than CoT. In our dataset, we increase the problem difficulty and integrate more reasoning-focused problems. In MMLU-Pro, CoT can be 20% higher than PPL.
-- Due to the increase of options, we found that the model performance becomes more robust. For example, Llama-2-7B performance variance on MMLU-Pro is within 1% with several different prompts. In contrast, the performance variance on original MMLU can be as huge as 4-5%.
-
-## 2. Dataset Summary
+## Dataset Summary
 
 - **Questions and Options:** Each question within the dataset typically has **ten** multiple-choice options, except for some that were reduced during the manual review process to remove unreasonable choices. This increase from the original **four** options per question is designed to enhance complexity and robustness, necessitating deeper reasoning to discern the correct answer among a larger pool of potential distractors.
 
@@ -59,7 +57,6 @@ Compared to the original MMLU, there are three major differences:
 - **TheoremQA:** High-quality human-annotated questions requiring theorems to solve.
 - **Scibench:** Science questions from college exams.
 
-
 """
 
 CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
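The commit only moves text between module-level string constants, so its visible effect depends on where the Space's app renders each constant. The snippet below is a hypothetical Gradio sketch of that wiring; the Space's actual app.py is not part of this diff, and the tab names and layout shown here are assumptions, with only the gr.Blocks / gr.Markdown / gr.Tab calls being standard Gradio API.

```python
# Hypothetical sketch of how a Gradio leaderboard app could consume these
# constants; the Space's real app.py is not shown in this diff and may differ.
import gradio as gr

from utils import LEADERBOARD_INTRODUCTION, TABLE_INTRODUCTION, LEADERBOARD_INFO

with gr.Blocks() as demo:
    # After this commit, the intro constant also carries the
    # "What's new about MMLU-Pro" section, so it appears at the top of the page.
    gr.Markdown(LEADERBOARD_INTRODUCTION)
    with gr.Tab("Leaderboard"):
        gr.Markdown(TABLE_INTRODUCTION)
        # ...the results table would be built here...
    with gr.Tab("About"):
        # LEADERBOARD_INFO now starts directly at "Dataset Summary".
        gr.Markdown(LEADERBOARD_INFO)

if __name__ == "__main__":
    demo.launch()
```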
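To make the "random guessing will lead to a much lower score" point from the new bullet list concrete, here is a minimal arithmetic sketch (not taken from the Space's code) comparing the uniform-guessing baseline for 4 versus 10 answer options.

```python
# Quick arithmetic behind the "random guessing" claim in the diff above:
# with k equally likely options, uniform guessing scores 1/k on average.
def random_guess_accuracy(num_options: int) -> float:
    """Expected accuracy of picking one of `num_options` choices uniformly at random."""
    return 1.0 / num_options

mmlu_baseline = random_guess_accuracy(4)       # original MMLU: 0.25
mmlu_pro_baseline = random_guess_accuracy(10)  # MMLU-Pro: 0.10

print(f"MMLU random baseline:     {mmlu_baseline:.0%}")      # 25%
print(f"MMLU-Pro random baseline: {mmlu_pro_baseline:.0%}")  # 10%
```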
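For readers following the replication pointer in the intro text, a minimal sketch of pulling the dataset from the Hub is shown below. The split name and field names used here are assumptions about the current TIGER-Lab/MMLU-Pro schema rather than anything stated in this diff; the official evaluation scripts are in the GitHub repository linked above.

```python
# Hedged sketch: loading MMLU-Pro from the Hugging Face Hub for a quick look.
# The split ("test") and fields ("question", "options", "answer", "category")
# are assumptions about the dataset schema -- verify them against
# https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro before relying on them.
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/MMLU-Pro", split="test")

example = dataset[0]
print(example["question"])
print(example["options"])   # expected: a list of up to ten answer choices
print(example["answer"])    # expected: the label of the correct option
print(example["category"])  # expected: the subject area of the question
```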