CHANGELOG_TEXT = f"""
## [2023-06-16]
- Refactored code base
- Added new columns: number of parameters, hub likes, license

## [2023-06-13] 
- Adjust description for TruthfulQA

## [2023-06-12] 
- Add Human & GPT-4 Evaluations

## [2023-06-05] 
- Increase concurrent thread count to 40
- Search models on ENTER

## [2023-06-02] 
- Add a typeahead search bar
- Use webhooks to automatically spawn a new Space when someone opens a PR
- Start recording `submitted_time` for eval requests
- Limit AutoEvalColumn max-width

## [2023-05-30] 
- Add a citation button
- Simplify Gradio layout

## [2023-05-29] 
- Auto-restart every hour for the latest results
- Sync with the internal version (minor style changes)

## [2023-05-24] 
- Add a baseline that has 25.0 for all values
- Add CHANGELOG

## [2023-05-23] 
- Fix a CSS issue that made the leaderboard hard to read in dark mode

## [2023-05-22] 
- Display a success/error message after submitting evaluation requests
- Reject duplicate submissions
- Do not display entries with incomplete results
- Display separate queues for jobs in RUNNING, PENDING, and FINISHED states

## [2023-05-15] 
- Fix a typo: from "TruthQA" to "TruthfulQA"

## [2023-05-10] 
- Fix a bug that prevented auto-refresh

## [2023-05-10] 
- Release the leaderboard to the public
"""

TITLE = """<h1 align="center" id="space-title">πŸ€— Open LLM Leaderboard</h1>"""

INTRODUCTION_TEXT = f"""
πŸ“ With the plethora of large language models (LLMs) and chatbots being released week upon week, often with grandiose claims of their performance, it can be hard to filter out the genuine progress that is being made by the open-source community and which model is the current state of the art. The πŸ€— Open LLM Leaderboard aims to track, rank and evaluate LLMs and chatbots as they are released. 

πŸ€— A key advantage of this leaderboard is that anyone from the community can submit a model for automated evaluation on the πŸ€— GPU cluster, as long as it is a πŸ€— Transformers model with weights on the Hub. We also support evaluating models released as delta-weights on top of a base model, as is common for non-commercially licensed models such as LLaMa.

πŸ“ˆ In the **first tab (LLM Benchmarks)**, we evaluate models on 4 key benchmarks from the <a href="https://github.com/EleutherAI/lm-evaluation-harness" target="_blank">Eleuther AI Language Model Evaluation Harness</a>, a unified framework to test generative language models on a large number of different evaluation tasks. In the **second tab (Human & GPT Evaluations)**, the evaluations are performed by having humans and GPT-4 compare completions from a set of popular open-source large language models (LLMs) on a secret set of instruction prompts.
"""

LLM_BENCHMARKS_TEXT = f"""
Evaluation is performed against 4 popular benchmarks:
- <a href="https://arxiv.org/abs/1803.05457" target="_blank">AI2 Reasoning Challenge</a> (25-shot) - a set of grade-school science questions.
- <a href="https://arxiv.org/abs/1905.07830" target="_blank">HellaSwag</a> (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- <a href="https://arxiv.org/abs/2009.03300" target="_blank">MMLU</a> (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- <a href="https://arxiv.org/abs/2109.07958" target="_blank">TruthfulQA</a> (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online.

We chose these benchmarks because they test reasoning and general knowledge across a wide variety of fields, in both 0-shot and few-shot settings.
"""

HUMAN_GPT_EVAL_TEXT = f"""
Evaluation is performed by having humans and GPT-4 compare completions from a set of popular open-source large language models (LLMs) on a secret set of instruction prompts. The prompts cover tasks such as brainstorming, creative generation, commonsense reasoning, open question answering, summarization, and code generation. Both the human labelers and GPT-4 rate each comparison on a 1-8 Likert scale and are required to express a preference each time. From these preferences, we create bootstrapped Elo rankings.

We collaborated with **Scale AI** to generate the completions using a professional data-labeling workforce on their platform, [following the labeling instructions found here](https://docs.google.com/document/d/1c5-96Lj-UH4lzKjLvJ_MRQaVMjtoEXTYA4dvoAYVCHc/edit?usp=sharing). To complement the human evaluation, we also had GPT-4 label the completions using this prompt.

For more information on how these evaluations were calibrated and initiated, please refer to the [announcement blog post](https://huggingface.co/blog/llm-leaderboard). We would like to express our gratitude to **LMSYS** for providing a [useful notebook](https://colab.research.google.com/drive/1lAQ9cKVErXI1rEYq7hTKNaCQ5Q8TzrI5?usp=sharing) for computing Elo estimates and plots.
"""


EVALUATION_QUEUE_TEXT = f"""
# Evaluation Queue for the πŸ€— Open LLM Leaderboard

These models will be automatically evaluated on the πŸ€— cluster.
"""

CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""@misc{open-llm-leaderboard,
  author = {Edward Beeching and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
  title = {Open LLM Leaderboard},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}}
}
@software{eval-harness,
  author       = {Gao, Leo and
                  Tow, Jonathan and
                  Biderman, Stella and
                  Black, Sid and
                  DiPofi, Anthony and
                  Foster, Charles and
                  Golding, Laurence and
                  Hsu, Jeffrey and
                  McDonell, Kyle and
                  Muennighoff, Niklas and
                  Phang, Jason and
                  Reynolds, Laria and
                  Tang, Eric and
                  Thite, Anish and
                  Wang, Ben and
                  Wang, Kevin and
                  Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}
@misc{clark2018think,
      title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge}, 
      author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
      year={2018},
      eprint={1803.05457},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
@misc{zellers2019hellaswag,
      title={HellaSwag: Can a Machine Really Finish Your Sentence?}, 
      author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
      year={2019},
      eprint={1905.07830},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{hendrycks2021measuring,
      title={Measuring Massive Multitask Language Understanding}, 
      author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
      year={2021},
      eprint={2009.03300},
      archivePrefix={arXiv},
      primaryClass={cs.CY}
}
@misc{lin2022truthfulqa,
      title={TruthfulQA: Measuring How Models Mimic Human Falsehoods}, 
      author={Stephanie Lin and Jacob Hilton and Owain Evans},
      year={2022},
      eprint={2109.07958},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}"""

VISUALIZATION_TITLE = """<h1 align="center" id="space-title">πŸ“Š Visualizations</h1>"""

PLOT_1_TITLE = "Fraction of Model A Wins for All Non-tied A vs. B Comparisons"

PLOT_2_TITLE = "Comparison Count of Each Combination of Models (not allowing ties)"

PLOT_3_TITLE = "Elo Estimates with error bars (ties allowed)"

PLOT_4_TITLE = "Fraction of Model A Wins for All Non-tied A vs. B Comparisons"
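
# --- Example: the quantity behind PLOT_1_TITLE --------------------------------
# A hedged sketch of the statistic the first plot shows: for every ordered pair
# (A, B), the fraction of non-tied comparisons that model A won. It assumes a
# hypothetical pandas DataFrame with "model_a", "model_b" and "winner" columns,
# where "winner" is "model_a", "model_b", or "tie".
def win_fraction_matrix(battles):
    no_ties = battles[battles["winner"].isin(["model_a", "model_b"])]
    a_won = no_ties["winner"] == "model_a"
    # The mean of the boolean win indicator per (model_a, model_b) pair is the
    # fraction of A wins; unstack() pivots model_b into the columns.
    return a_won.groupby([no_ties["model_a"], no_ties["model_b"]]).mean().unstack()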