---
license: apache-2.0
---

# CodeEditorBench

[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**]() | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)
## Introduction

Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, a pioneering evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curated diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluating 19 LLMs revealed that closed-source models, particularly Gemini-Ultra and GPT-4, outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem type and prompt sensitivity. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners in the field.

![CodeEditorBench overview](tech_route.png)
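## Loading the Dataset

To experiment with the benchmark data directly, the snippet below sketches loading it with the Hugging Face `datasets` library. The per-task configuration and split names are not documented on this card, so they are discovered at runtime rather than hard-coded; treat this as a minimal example, not an official loading recipe.

```python
from datasets import get_dataset_config_names, load_dataset

# Discover which configurations are actually published for this dataset
# (the per-task subset names are not listed on this card, so query them).
configs = get_dataset_config_names("m-a-p/CodeEditorBench")
print(configs)

# Load one configuration and inspect a single record.
ds = load_dataset("m-a-p/CodeEditorBench", configs[0])
first_split = next(iter(ds))
print(ds)
print(ds[first_split][0])
```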
<!-- ## Dataset Creation
See [**GitHub**]() for the specific inference process

## Evaluation
See [**GitHub**]() for the specific evaluation process -->
## 🏆 Leaderboard
| Model | Size | Open | Debug | Translate | Switch | Polish | Win Rate |
|-----------------|------|------|---------------------|---------------------|-----------|-------------------|--------------------|
| **Zero-shot** | | | | | | | |
| Gemini-Ultra | - | ❌ | 0.301 (0.457) | 0.338 (0.261) | 0.028 | 4.64% (3.45%) | **0.779 (0.632)** |
| GPT-4 | - | ❌ | **0.308 (0.489)** | 0.335 (**0.449**) | **0.225** | 0.15% (0.87%) | **0.779 (0.882)** |
| GPT-3.5-Turbo | - | ❌ | 0.284 (**0.491**) | **0.385 (0.443)** | 0.169 | 0.09% (0.84%) | 0.765 (0.853) |
| Gemini-Pro | - | ❌ | 0.279 (0.420) | 0.200 (0.285) | 0.061 | **5.07% (6.27%)** | 0.750 (0.765) |
| DS-33B-INST | 33B | ✅ | 0.267 (0.483) | 0.353 (0.427) | 0.131 | 0.06% (0.64%) | 0.676 (0.728) |
| WC-33B | 33B | ✅ | 0.265 (0.483) | 0.315 (0.415) | 0.125 | 0.19% (0.62%) | 0.676 (0.669) |
| ... | ... | ... | ... | ... | ... | ... | ... |
| **Few-shot** | | | | | | | |
| Gemini-Ultra | - | ❌ | 0.283 (0.446) | 0.406 (0.292) | 0.131 | **4.83% (4.17%)** | **0.897 (0.706)** |
| GPT-4 | - | ❌ | **0.336 (0.519)** | **0.453 (0.488)** | **0.275** | 0.22% (0.7%) | 0.868 (**0.926**) |
| ... | ... | ... | ... | ... | ... | ... | ... |
| **CoT** | | | | | | | |
| GPT-4 | - | ❌ | **0.280 (0.439)** | **0.338 (0.414)** | **0.174** | 0.33% (1.45%) | **0.850 (0.800)** |
| GLM-4 | - | ❌ | 0.228 (0.201) | 0.218 (0.260) | 0.072 | **4.09% (5.28%)** | 0.750 (0.600) |
| ... | ... | ... | ... | ... | ... | ... | ... |

🎯 All model results are generated with greedy decoding.

✨ Code Debug, Code Translate, and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.

🗂️ Values outside parentheses denote Plus results and values inside denote Primary results. For the Switch class, Primary and Plus results are identical, so only one score is displayed.
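
Because all results are produced with greedy decoding (a single completion per problem), pass@1 for Debug, Translate, and Switch reduces to the fraction of problems whose one generated program passes all of its test cases. A minimal sketch of that computation is below; the `passed` list is a hypothetical per-problem outcome vector, and the Mean OptScore used for Polish is not reproduced here.

```python
from typing import List


def pass_at_1_greedy(passed: List[bool]) -> float:
    """pass@1 under greedy decoding: with one completion per problem, it is
    simply the share of problems whose completion passes every test case."""
    if not passed:
        return 0.0
    return sum(passed) / len(passed)


# Hypothetical per-problem outcomes for one model on one task (e.g., Code Debug).
passed = [True, False, True, True, False]
print(f"pass@1 = {pass_at_1_greedy(passed):.3f}")  # pass@1 = 0.600
```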
## Disclaimers

The annotation guidelines emphasized strict compliance with the copyright and licensing rules of the original data sources, specifically avoiding material from websites that forbid copying and redistribution. Should you encounter any data sample that potentially breaches the copyright or licensing regulations of any site, please [contact](#contact) us; upon verification, such samples will be promptly removed.
## Contact

<!-- - Jiawei Guo: moriatysss152@gmail.com
- Ziming Li:
- Xueling Liu:
- Kaijing Ma: -->
- Ge Zhang: zhangge@01.ai
- Wenhu Chen: wenhuchen@uwaterloo.ca
- Jie Fu: jiefu@ust.hk
## Citation

**BibTeX:**
```bibtex
@inproceedings{guo2024editorbench,
  title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models},
  author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Ruibo Liu and Yue Wang and Yizhi Li and Xingwei Qu and Xiang Yue and Shuyue Guo and Ge Zhang and Wenhu Chen and Jie Fu},
  booktitle={arxiv},
  year={2024}
}
```