loubnabnl HF staff committed on
Commit 23dc3e1
1 Parent(s): c4ec4a6

Update evaluation/intro.md

Files changed (1)
  1. evaluation/intro.md +2 -26
evaluation/intro.md CHANGED
@@ -1,31 +1,7 @@
  A natural way to evaluate code programs is to check whether they pass unit tests. This is the idea behind the [pass@k](https://huggingface.co/metrics/code_eval) metric, a popular evaluation framework for code generation models, used on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. For pass@k, k code samples are generated per problem; a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported.
- In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed using an unbiased sampling estimator. Table 1 below shows the HumanEval scores of CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source).
+ In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed using an unbiased sampling estimator.
 
- <div align="center">
-
- Model | pass@1 | pass@10 | pass@100|
- |-------|--------|---------|---------|
- |CodeParrot (110M) | 3.80% | 6.57% | 12.78% |
- |CodeParrot (1.5B) | 3.58% | 8.03% | 14.96% |
- |||||
- |InCoder (6.7B) | 15.2% | 27.8% | 47.00% |
- |||||
- |PolyCoder (160M)| 2.13% | 3.35% | 4.88% |
- |PolyCoder (400M)| 2.96% | 5.29% | 11.59% |
- |PolyCoder (2.7B)| 5.59% | 9.84% | 17.68% |
- |||||
- |CodeGen-Mono (350M)| 12.76% | 23.11% | 35.19% |
- |CodeGen-Mono (2.7B)| 23.70% | 36.64% | 57.01% |
- |CodeGen-Mono (6.1B)| 26.13% | 42.29% | 65.82% |
- |CodeGen-Mono (16.1B)| **29.28%** | **49.86%** | **75.00%** |
- |||||
- |Codex (25M)| 3.21% | 7.1% | 12.89%|
- |Codex (300M)| 13.17%| 20.37% | 36.27% |
- |Codex (12B)| 28.81%| 46.81% | 72.31% |
-
- </div>
-
- For better visualization, we plot the pass@100 for the models above by model size.
+ This plot shows pass@100 by model size for CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source):
  <p align="center">
  <img src="https://huggingface.co/datasets/loubnabnl/repo-images/resolve/main/plot_pass@100.png" alt="drawing" width="550"/>
  </p>
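
For reference, the unbiased pass@k estimator mentioned in the diff above (introduced in the Codex paper) scores each problem as 1 - C(n-c, k)/C(n, k), where n is the number of samples generated for that problem and c the number that pass its unit tests; the reported score is the mean over all 164 problems. Below is a minimal sketch of that computation, assuming NumPy is available; the function name `pass_at_k` and the `passing_counts` values are illustrative, not part of the `code_eval` API.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem.

    n: total samples generated per problem (e.g. 200)
    c: number of samples that passed the unit tests
    k: evaluation budget (1, 10, or 100)
    """
    if n - c < k:
        # Too few failing samples to fill a draw of size k,
        # so every size-k draw contains at least one passing sample.
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a stable running product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical usage: passing_counts holds c for each problem (illustrative values only).
passing_counts = [12, 0, 200, 37]
pass_at_100 = float(np.mean([pass_at_k(200, c, 100) for c in passing_counts]))
print(f"pass@100 = {pass_at_100:.2%}")
```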