Ludwig Stumpp committed
Commit 49b476f
1 Parent(s): 9a87dbf

Add HumanEval and Starcoder

Files changed (1)
  1. README.md +37 -22
README.md CHANGED
@@ -22,34 +22,48 @@ We are always happy for contributions! You can contribute by the following:
 
  ## Leaderboard
 
- | Model Name | Chatbot Arena Elo | LAMBADA (zero-shot) | TriviaQA (zero-shot) |
- | --- | --- | --- | --- |
- | [alpaca-13b](https://crfm.stanford.edu/2023/03/13/alpaca.html) | [1008](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [cerebras-gpt-7b](https://huggingface.co/cerebras/Cerebras-GPT-6.7B) | | [0.636](https://www.mosaicml.com/blog/mpt-7b) | [0.141](https://www.mosaicml.com/blog/mpt-7b) |
- | [cerebras-gpt-13b](https://huggingface.co/cerebras/Cerebras-GPT-13B) | | [0.635](https://www.mosaicml.com/blog/mpt-7b) | [0.146](https://www.mosaicml.com/blog/mpt-7b) |
- | [chatglm-6b](https://chatglm.cn/blog) | [985](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) | [944](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [eleuther-pythia-7b](https://huggingface.co/EleutherAI/pythia-6.9b) | | [0.667](https://www.mosaicml.com/blog/mpt-7b) | [0.198](https://www.mosaicml.com/blog/mpt-7b) |
- | [eleuther-pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) | | [0.704](https://www.mosaicml.com/blog/mpt-7b) | [0.233](https://www.mosaicml.com/blog/mpt-7b) |
- | [fastchat-t5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) | [951](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) | | [0.719](https://www.mosaicml.com/blog/mpt-7b) | [0.347](https://www.mosaicml.com/blog/mpt-7b) |
- | [gptj-6b](https://huggingface.co/EleutherAI/gpt-j-6b) | | [0.683](https://www.mosaicml.com/blog/mpt-7b) | [0.234](https://www.mosaicml.com/blog/mpt-7b) |
- | [koala-13b](https://bair.berkeley.edu/blog/2023/04/03/koala/) | [1082](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [llama-7b](https://arxiv.org/abs/2302.13971) | | [0.738](https://www.mosaicml.com/blog/mpt-7b) | [0.443](https://www.mosaicml.com/blog/mpt-7b) |
- | [llama-13b](https://arxiv.org/abs/2302.13971) | [932](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | | [0.702](https://www.mosaicml.com/blog/mpt-7b) | [0.343](https://www.mosaicml.com/blog/mpt-7b) |
- | [oasst-pythia-12b](https://huggingface.co/OpenAssistant/pythia-12b-pre-v8-12.5k-steps) | [1065](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [opt-7b](https://huggingface.co/facebook/opt-6.7b) | | [0.677](https://www.mosaicml.com/blog/mpt-7b) | [0.227](https://www.mosaicml.com/blog/mpt-7b) |
- | [opt-13b](https://huggingface.co/facebook/opt-13b) | | [0.692](https://www.mosaicml.com/blog/mpt-7b) | [0.282](https://www.mosaicml.com/blog/mpt-7b) |
- | [stablelm-base-alpha-7b](https://huggingface.co/stabilityai/stablelm-base-alpha-7b) | | [0.533](https://www.mosaicml.com/blog/mpt-7b) | [0.049](https://www.mosaicml.com/blog/mpt-7b) |
- | [stablelm-tuned-alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) | [858](https://lmsys.org/blog/2023-05-03-arena/) | | |
- | [vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0) | [1169](https://lmsys.org/blog/2023-05-03-arena/) | | |
+ | Model Name | Chatbot Arena Elo | HumanEval-Python (pass@1) | LAMBADA (zero-shot) | TriviaQA (zero-shot) |
+ | --- | --- | --- | --- | --- |
+ | [alpaca-13b](https://crfm.stanford.edu/2023/03/13/alpaca.html) | [1008](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [cerebras-gpt-7b](https://huggingface.co/cerebras/Cerebras-GPT-6.7B) | | | [0.636](https://www.mosaicml.com/blog/mpt-7b) | [0.141](https://www.mosaicml.com/blog/mpt-7b) |
+ | [cerebras-gpt-13b](https://huggingface.co/cerebras/Cerebras-GPT-13B) | | | [0.635](https://www.mosaicml.com/blog/mpt-7b) | [0.146](https://www.mosaicml.com/blog/mpt-7b) |
+ | [chatglm-6b](https://chatglm.cn/blog) | [985](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [code-cushman-001](https://arxiv.org/abs/2107.03374) | | [33.5](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [code-davinci-002](https://arxiv.org/abs/2207.10397v2) | | [65.8](https://arxiv.org/abs/2207.10397v2) | | |
+ | [codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono) | | [29.3](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [codegen-16B-multi](https://huggingface.co/Salesforce/codegen-16B-multi) | | [18.3](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [codegeex](http://keg.cs.tsinghua.edu.cn/codegeex/) | | [22.9](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) | [944](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [eleuther-pythia-7b](https://huggingface.co/EleutherAI/pythia-6.9b) | | | [0.667](https://www.mosaicml.com/blog/mpt-7b) | [0.198](https://www.mosaicml.com/blog/mpt-7b) |
+ | [eleuther-pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) | | | [0.704](https://www.mosaicml.com/blog/mpt-7b) | [0.233](https://www.mosaicml.com/blog/mpt-7b) |
+ | [fastchat-t5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) | [951](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [gpt-3.5](https://arxiv.org/abs/2303.08774v3) | | [48.1](https://arxiv.org/abs/2303.08774v3) | | |
+ | [gpt-4](https://arxiv.org/abs/2303.08774v3) | | [67.0](https://arxiv.org/abs/2303.08774v3) | | |
+ | [gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) | | | [0.719](https://www.mosaicml.com/blog/mpt-7b) | [0.347](https://www.mosaicml.com/blog/mpt-7b) |
+ | [gptj-6b](https://huggingface.co/EleutherAI/gpt-j-6b) | | | [0.683](https://www.mosaicml.com/blog/mpt-7b) | [0.234](https://www.mosaicml.com/blog/mpt-7b) |
+ | [koala-13b](https://bair.berkeley.edu/blog/2023/04/03/koala/) | [1082](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [llama-7b](https://arxiv.org/abs/2302.13971) | | [10.5](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [0.738](https://www.mosaicml.com/blog/mpt-7b) | [0.443](https://www.mosaicml.com/blog/mpt-7b) |
+ | [llama-13b](https://arxiv.org/abs/2302.13971) | [932](https://lmsys.org/blog/2023-05-03-arena/) | [15.8](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [llama-33b](https://arxiv.org/abs/2302.13971) | | [21.7](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [llama-65b](https://arxiv.org/abs/2302.13971) | | [23.7](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | | | [0.702](https://www.mosaicml.com/blog/mpt-7b) | [0.343](https://www.mosaicml.com/blog/mpt-7b) |
+ | [oasst-pythia-12b](https://huggingface.co/OpenAssistant/pythia-12b-pre-v8-12.5k-steps) | [1065](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [opt-7b](https://huggingface.co/facebook/opt-6.7b) | | | [0.677](https://www.mosaicml.com/blog/mpt-7b) | [0.227](https://www.mosaicml.com/blog/mpt-7b) |
+ | [opt-13b](https://huggingface.co/facebook/opt-13b) | | | [0.692](https://www.mosaicml.com/blog/mpt-7b) | [0.282](https://www.mosaicml.com/blog/mpt-7b) |
+ | [palm-540b](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html) | | [26.2](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [stablelm-base-alpha-7b](https://huggingface.co/stabilityai/stablelm-base-alpha-7b) | | | [0.533](https://www.mosaicml.com/blog/mpt-7b) | [0.049](https://www.mosaicml.com/blog/mpt-7b) |
+ | [stablelm-tuned-alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) | [858](https://lmsys.org/blog/2023-05-03-arena/) | | | |
+ | [starcoder-base-16B](https://huggingface.co/bigcode/starcoderbase) | | [30.4](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [starcoder-16B](https://huggingface.co/bigcode/starcoder) | | [33.6](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [starcoder-16B (prompted)](https://huggingface.co/bigcode/starcoder) | | [40.8](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | | |
+ | [vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0) | [1169](https://lmsys.org/blog/2023-05-03-arena/) | | | |
 
  ## Benchmarks
 
  | Benchmark Name | Author | Link | Description |
  | --- | --- | --- | --- |
  | Chatbot Arena Elo | LMSYS | https://lmsys.org/blog/2023-05-03-arena/ | "In this blog post, we introduce Chatbot Arena, an LLM benchmark platform featuring anonymous randomized battles in a crowdsourced manner. Chatbot Arena adopts the Elo rating system, which is a widely-used rating system in chess and other competitive games." (Source: https://lmsys.org/blog/2023-05-03-arena/) |
+ | HumanEval | Chen et al. | https://arxiv.org/abs/2107.03374v2 | "It is used to measure functional correctness for synthesizing programs from docstrings. It consists of 164 original programming problems, assessing language comprehension, algorithms, and simple mathematics, with some comparable to simple software interview questions." (Source: https://paperswithcode.com/dataset/humaneval) |
  | LAMBADA | Paperno et al. | https://arxiv.org/abs/1606.06031 | "The LAMBADA evaluates the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse." (Source: https://huggingface.co/datasets/lambada) |
  | TriviaQA | Joshi et al. | https://arxiv.org/abs/1705.03551v2 | "We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions." (Source: https://arxiv.org/abs/1705.03551v2) |
 
@@ -59,3 +73,4 @@ We are always happy for contributions! You can contribute by the following:
  | -------- | ---------------------------------------- |
  | LMSYS | https://lmsys.org/blog/2023-05-03-arena/ |
  | MOSAICML | https://www.mosaicml.com/blog/mpt-7b |
+ | BigCode | https://www.bigcode-project.org/ |
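
A note on how the new HumanEval-Python (pass@1) numbers are defined: pass@1 is the expected fraction of problems for which a single generated program passes all of the benchmark's unit tests. Below is a minimal sketch of the unbiased pass@k estimator described in the HumanEval paper (Chen et al., 2021); the sample counts in the example are illustrative, not taken from the leaderboard.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021), arXiv:2107.03374.

    n: number of samples generated for a problem
    c: number of those samples that pass all unit tests
    k: budget of samples the metric may draw
    """
    if n - c < k:
        # Every possible size-k draw contains at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative example: 200 samples for one problem, 23 pass the tests.
# For k=1 the estimator reduces to c / n = 0.115, i.e. 11.5 on the table's percentage scale.
print(pass_at_k(n=200, c=23, k=1))
```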
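
The Chatbot Arena Elo column comes from crowdsourced pairwise battles scored with the Elo rating system. As a rough illustration of how such ratings move after a single battle, here is the standard Elo expected-score and update rule; the K-factor, starting ratings, and match outcome are assumptions for the example, not LMSYS's exact configuration.

```python
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    """One Elo update after a single A-vs-B battle.

    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    k controls how far ratings move per game (assumed value).
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two models start at 1000; A wins one battle -> (1016.0, 984.0)
print(elo_update(1000.0, 1000.0, score_a=1.0))
```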
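
LAMBADA (zero-shot) measures whether a model predicts the held-out final word of a passage from the full context. The sketch below shows one plausible way to score a single passage with a Hugging Face causal language model; the model choice, greedy decoding, and the simple string match are illustrative assumptions, and published evaluations handle tokenization of the target word more carefully.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM from the leaderboard could be used.
model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def last_word_correct(passage: str) -> bool:
    """Greedy-decode a continuation and check it starts with the held-out word."""
    context, target = passage.rsplit(" ", 1)
    inputs = tokenizer(context, return_tensors="pt")
    target_len = len(tokenizer(" " + target).input_ids)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=target_len, do_sample=False)
    continuation = tokenizer.decode(output[0, inputs["input_ids"].shape[1]:])
    return continuation.strip().startswith(target)

# Zero-shot accuracy over the dataset is the mean of last_word_correct(...) across passages.
```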