---
license: apache-2.0
---
# CodeEditorBench 

[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**](https://arxiv.org/pdf/2404.03543.pdf) | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)


## Introduction
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, a pioneering evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curated diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluating 19 LLMs revealed that closed-source models, particularly Gemini-Ultra and GPT-4, outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem type and prompt sensitivity. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners in the field.

![Overview of the CodeEditorBench pipeline](tech_route.png)

## Results

<div style="display: flex; justify-content: space-around; align-items: center;">
  <img src="Models_Zero_Shot.png" alt="Zero-shot performance of models across the four CodeEditorBench_Plus scenarios (radial plot)" style="width: 48%;" />
  <img src="win_rate_zero.png" alt="Zero-shot win rate of models on CodeEditorBench_Plus" style="width: 48%;" />
</div>

We propose evaluating LLMs across four scenarios capturing various code editing capabilities: code debug, code translate, code polish, and code requirement switch. The figure on the left is a radial plot of model performance across the four scenarios in CodeEditorBench\_Plus, highlighting how relative differences between models change across scenarios. The figure on the right shows the performance of open-source and closed-source models on CodeEditorBench\_Plus in the zero-shot setting, evaluated by win\_rate.
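
As a quick-start, here is a minimal sketch of loading the dataset from the Hugging Face Hub with the `datasets` library. The configs, splits, and field names are assumptions for illustration only; see the GitHub repository for the official data layout and evaluation harness.

```python
# Minimal sketch (not the official harness): load CodeEditorBench from the Hub.
# The exact configs, splits, and field names are assumptions -- check the dataset
# files and the GitHub repo before relying on them.
from datasets import load_dataset

# Depending on how the files are organized, a config name or data_files argument
# may be required, e.g. load_dataset("m-a-p/CodeEditorBench", data_files=...).
ds = load_dataset("m-a-p/CodeEditorBench")

print(ds)  # inspect the available splits (e.g. debug / translate / polish / switch)
for record in next(iter(ds.values())):
    print(record.keys())  # inspect the fields of one example
    break
```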

🎯 All model results are generated with greedy decoding.

✨ Code Debug, Code Translate, and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.
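
For concreteness, a minimal sketch of how pass@1 reduces to a simple pass rate under greedy decoding (one completion per problem). The `results` mapping and problem IDs below are hypothetical; the Mean OptScore used for Code Polish is defined in the paper and computed by the official harness.

```python
# Illustrative sketch only: with greedy decoding (one sample per problem),
# pass@1 is simply the fraction of problems whose single completion passes all tests.
from typing import Dict

def pass_at_1(passed: Dict[str, bool]) -> float:
    """passed maps problem_id -> whether the greedy completion passed every test case."""
    if not passed:
        return 0.0
    return sum(passed.values()) / len(passed)

# Hypothetical per-problem outcomes for the Code Debug scenario.
results = {"debug_0001": True, "debug_0002": False, "debug_0003": True}
print(f"pass@1 = {pass_at_1(results):.3f}")  # -> pass@1 = 0.667
```
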
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution. 
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.

## Contact
- Ge Zhang: zhangge@01.ai
- Wenhu Chen: wenhuchen@uwaterloo.ca
- Jie Fu: jiefu@ust.hk
## Citation

**BibTeX:**
```bibtex
@misc{guo2024codeeditorbench,
      title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models}, 
      author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Yizhi LI and Ruibo Liu and Yue Wang and Shuyue Guo and Xingwei Qu and Xiang Yue and Ge Zhang and Wenhu Chen and Jie Fu},
      year={2024},
      eprint={2404.03543},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```