Update README.md
README.md CHANGED
@@ -3,17 +3,9 @@ license: apache-2.0
---
# CodeEditorBench

-[**🏠 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📄 arXiv**]() | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)

-<!-- This repo contains the evaluation code for the paper "[MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI](https://arxiv.org/pdf/2311.16502.pdf)" -->
-<!-- ## News
-- **[2024-01-31]: We added Human Expert performance on the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!**
-- **🔥[2023-12-04]: Our evaluation server for the test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation!** -->
## Introduction
Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, a pioneering
@@ -34,44 +26,21 @@ LLMs in code editing and provide a valuable resource for researchers and practi-
tioners in the field.

![Alt text](tech_route.png)
-<!-- ## Dataset Creation
-See [**GitHub**]() for the specific inference process

-<!-- | Model | Zero Shot | Three Shot |
-|----------------------------|:---------:|:----------:|
-| Gemini Ultra | **59.4** | -- |
-| GPT-4 | 56.8 | **55.7** | -->

-| Model | Size | Open | Debug | Translate | Switch | Polish | Win Rate |
-|-----------------|------|------|---------------------|---------------------|--------|------------------|--------------------|
-| **Zero-shot** | | | | | | | |
-| Gemini-Ultra | - | ❌ | 0.301 (0.457) | 0.338 (0.261) | 0.028 | 4.64% (3.45%) | **0.779 (0.632)** |
-| GPT-4 | - | ❌ | **0.308 (0.489)** | 0.335 (**0.449**) | **0.225** | 0.15% (0.87%) | **0.779 (0.882)** |
-| GPT-3.5-Turbo | - | ❌ | 0.284 (**0.491**) | **0.385 (0.443)** | 0.169 | 0.09% (0.84%) | 0.765 (0.853) |
-| Gemini-Pro | - | ❌ | 0.279 (0.420) | 0.200 (0.285) | 0.061 | **5.07% (6.27%)** | 0.750 (0.765) |
-| DS-33B-INST | 33B | ✅ | 0.267 (0.483) | 0.353 (0.427) | 0.131 | 0.06% (0.64%) | 0.676 (0.728) |
-| WC-33B | 33B | ✅ | 0.265 (0.483) | 0.315 (0.415) | 0.125 | 0.19% (0.62%) | 0.676 (0.669) |
-| ... | ... | ... | ... | ... | ... | ... | ... |
-| **Few-shot** | | | | | | | |
-| Gemini-Ultra | - | ❌ | 0.283 (0.446) | 0.406 (0.292) | 0.131 | **4.83% (4.17%)** | **0.897 (0.706)** |
-| GPT-4 | - | ❌ | **0.336 (0.519)** | **0.453 (0.488)** | **0.275** | 0.22% (0.7%) | 0.868 (**0.926**) |
-| ... | ... | ... | ... | ... | ... | ... | ... |
-| **CoT** | | | | | | | |
-| GPT-4 | - | ❌ | **0.280 (0.439)** | **0.338 (0.414)** | **0.174** | 0.33% (1.45%) | **0.850 (0.800)** |
-| GLM-4 | - | ❌ | 0.228 (0.201) | 0.218 (0.260) | 0.072 | **4.09% (5.28%)** | 0.750 (0.600) |
-| ... | ... | ... | ... | ... | ... | ... | ... |

🎯All results of models are generated by greedy decoding.

✨Code Debug, Code Translate and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.

-🖊️Values outside parentheses denote Plus results and values inside denote Primary results. For the Switch class, Primary and Plus results are identical, and only one score is displayed.

## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.
@@ -88,10 +57,12 @@ Should you encounter any data samples potentially breaching the copyright or lic

**BibTeX:**
```bibtex
-@
}
```
---
# CodeEditorBench

+[**🏠 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📄 arXiv**](https://arxiv.org/pdf/2404.03543.pdf) | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)

## Introduction
Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, a pioneering

tioners in the field.

![Alt text](tech_route.png)

+## Results

+<div style="text-align: center;">
+  <img src="Models_Zero_Shot.png" class="result" width="45%" />
+  <img src="win_rate_zero.png" class="result" width="45%" />
+</div>

+We propose evaluating LLMs across four scenarios that capture different code editing capabilities: code debug, code translate, code polish, and code requirement switch. The left figure is a radial plot of model performance across the four scenarios available in CodeEditorBench\_Plus, highlighting how the relative differences between models change from scenario to scenario. The right figure shows the performance of open-source and closed-source models on CodeEditorBench\_Plus in the zero-shot setting, evaluated by win\_rate.
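
For readers who want to explore the four scenarios directly, below is a minimal loading sketch using the Hugging Face `datasets` library. The per-scenario configuration names (`code_debug`, `code_translate`, `code_polish`, `code_switch`) and the `train` split are illustrative assumptions only; check the dataset viewer or the GitHub repository for the actual names.

```python
# Minimal sketch: load CodeEditorBench from the Hugging Face Hub.
# NOTE: the configuration and split names below are guesses for illustration;
# consult the dataset card / viewer for the real ones.
from datasets import load_dataset

scenarios = ["code_debug", "code_translate", "code_polish", "code_switch"]

for name in scenarios:
    try:
        ds = load_dataset("m-a-p/CodeEditorBench", name, split="train")
        print(name, len(ds), ds.column_names)
    except ValueError:
        # Our guessed config name does not exist in the repo.
        print(f"Config {name!r} not found; list the available configs instead.")
```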

🎯All results of models are generated by greedy decoding.

✨Code Debug, Code Translate and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.
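
To make the metric above concrete, here is a rough sketch of pass@1 under greedy decoding: each problem gets exactly one generated solution, and the score is the fraction of problems whose solution passes all of its tests. The `run_tests` callable is a hypothetical stand-in for the benchmark's real sandboxed execution harness (see the GitHub repository); Mean OptScore for Code Polish additionally needs runtime and memory measurements, so it is not sketched here.

```python
from typing import Callable, Dict, List


def pass_at_1(records: List[Dict], run_tests: Callable[[Dict, str], bool]) -> float:
    """Fraction of problems whose single greedy-decoded solution passes all tests.

    `records` holds one generated solution per problem; `run_tests` is a
    placeholder for a real sandboxed test runner, not part of this card.
    """
    if not records:
        return 0.0
    passed = sum(1 for r in records if run_tests(r, r["generated_solution"]))
    return passed / len(records)
```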
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.

**BibTeX:**
```bibtex
+@misc{guo2024codeeditorbench,
+      title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models},
+      author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Yizhi LI and Ruibo Liu and Yue Wang and Shuyue Guo and Xingwei Qu and Xiang Yue and Ge Zhang and Wenhu Chen and Jie Fu},
+      year={2024},
+      eprint={2404.03543},
+      archivePrefix={arXiv},
+      primaryClass={cs.SE}
}
```