KerwinJob committed • Commit abb9471 • 1 Parent(s): 71a18c2

Update README.md

Files changed (1)
  1. README.md +16 -45
README.md CHANGED
@@ -3,17 +3,9 @@ license: apache-2.0
 ---
 # CodeEditorBench
 
-[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**]() | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)
+[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**](https://arxiv.org/pdf/2404.03543.pdf) | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)
 
 
-
-<!-- This repo contains the evaluation code for the paper "[MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI](https://arxiv.org/pdf/2311.16502.pdf)" -->
-
-<!-- ## 🔔News
-
-- **🚀[2024-01-31]: We added Human Expert performance on the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!🌟**
-- **🔥[2023-12-04]: Our evaluation server for test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation! 😆** -->
-
 ## Introduction
 Large Language Models (LLMs) for code are rapidly evolving, with code editing
 emerging as a critical capability. We introduce CodeEditorBench, a pioneering
@@ -34,44 +26,21 @@ LLMs in code editing and provide a valuable resource for researchers and practi-
 tioners in the field.
 
 ![Alt text](tech_route.png)
-<!-- ## Dataset Creation
-See [**GitHub**]() for the specific inference process
-
-## Evaluation
-See [**GitHub**]() for the specific evaluation process -->
-
-## 🏆 Leaderboard
-<!-- | Model | Zero Shot | Three Shot |
-|----------------------------|:---------:|:----------:|
-| Gemini Ultra | **59.4** | -- |
-| GPT-4 | 56.8 | **55.7** | -->
-
-| Model | Size | Open | Debug | Translate | Switch | Polish | Win Rate |
-|-----------------|------|------|---------------------|---------------------|--------|------------------|--------------------|
-| **Zero-shot** | | | | | | | |
-| Gemini-Ultra | - | ❌ | 0.301 (0.457) | 0.338 (0.261) | 0.028 | 4.64% (3.45%) | **0.779 (0.632)** |
-| GPT-4 | - | ❌ | **0.308 (0.489)** | 0.335 (**0.449**) | **0.225** | 0.15% (0.87%) | **0.779 (0.882)** |
-| GPT-3.5-Turbo | - | ❌ | 0.284 (**0.491**) | **0.385 (0.443)** | 0.169 | 0.09% (0.84%) | 0.765 (0.853) |
-| Gemini-Pro | - | ❌ | 0.279 (0.420) | 0.200 (0.285) | 0.061 | **5.07% (6.27%)** | 0.750 (0.765) |
-| DS-33B-INST | 33B | ✅ | 0.267 (0.483) | 0.353 (0.427) | 0.131 | 0.06% (0.64%) | 0.676 (0.728) |
-| WC-33B | 33B | ✅ | 0.265 (0.483) | 0.315 (0.415) | 0.125 | 0.19% (0.62%) | 0.676 (0.669) |
-| ... | ... | ... | ... | ... | ... | ... | ... |
-| **Few-shot** | | | | | | | |
-| Gemini-Ultra | - | ❌ | 0.283 (0.446) | 0.406 (0.292) | 0.131 | **4.83% (4.17%)** | **0.897 (0.706)** |
-| GPT-4 | - | ❌ | **0.336 (0.519)** | **0.453 (0.488)** | **0.275** | 0.22% (0.7%) | 0.868 (**0.926**) |
-| ... | ... | ... | ... | ... | ... | ... | ... |
-| **CoT** | | | | | | | |
-| GPT-4 | - | ❌ | **0.280 (0.439)** | **0.338 (0.414)** | **0.174** | 0.33% (1.45%) | **0.850 (0.800)** |
-| GLM-4 | - | ❌ | 0.228 (0.201) | 0.218 (0.260) | 0.072 | **4.09% (5.28%)** | 0.750 (0.600) |
-| ... | ... | ... | ... | ... | ... | ... | ... |
+## Results
+
+<div style="text-align: center;">
+<img src="Models_Zero_Shot.png" class="result" width="45%" />
+<img src="win_rate_zero.png" class="result" width="45%" />
+</div>
+
+We propose evaluating LLMs across four scenarios capturing various code editing capabilities, namely code debug, code translate, code polish, and code requirement switch. The left figure depicts model performance across the four scenarios available in CodeEditorBench\_Plus in a radial plot, highlighting how relative differences across models change across the scenarios. The right figure shows the performance of open-source and closed-source models on CodeEditorBench\_Plus in the zero-shot setting, evaluated through win\_rate.
 
 🎯All results of models are generated by greedy decoding.
 
 ✨Code Debug, Code Translate and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.
-
-🗂️Values outside parentheses denote Plus results and values inside denote Primary results. For the Switch class, Primary and Plus results are identical, so only one score is displayed.
-
 ## Disclaimers
 The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
 Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.
@@ -88,10 +57,12 @@ Should you encounter any data samples potentially breaching the copyright or lic
 
 **BibTeX:**
 ```bibtex
-@inproceedings{guo2024editorbench,
-  title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models},
-  author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Ruibo Liu and Yue Wang and Yizhi Li and Xingwei Qu and Xiang Yue and Shuyue Guo and Ge Zhang and Wenhu Chen and Jie Fu},
-  booktitle={arxiv},
-  year={2024},
+@misc{guo2024codeeditorbench,
+  title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models},
+  author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Yizhi LI and Ruibo Liu and Yue Wang and Shuyue Guo and Xingwei Qu and Xiang Yue and Ge Zhang and Wenhu Chen and Jie Fu},
+  year={2024},
+  eprint={2404.03543},
+  archivePrefix={arXiv},
+  primaryClass={cs.SE}
 }
 ```
 
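
The 🤗 Dataset link in the card points at the `m-a-p/CodeEditorBench` repo on the Hugging Face Hub. Below is a minimal sketch of pulling it with the `datasets` library; the repo id comes from the card, but the available config and column names are not listed here, so the sketch discovers them at runtime rather than assuming specific scenario names.

```python
# Minimal sketch: load CodeEditorBench from the Hugging Face Hub with the
# `datasets` library. The repo id is taken from the card's Dataset link;
# config and column names are discovered at runtime because the card does
# not spell them out.
from datasets import get_dataset_config_names, load_dataset

repo_id = "m-a-p/CodeEditorBench"

configs = get_dataset_config_names(repo_id)
print("Available configs:", configs)

# Load the first config reported by the Hub (whatever it happens to be).
ds = load_dataset(repo_id, configs[0])
print(ds)                      # splits and row counts
first_split = next(iter(ds))
print(ds[first_split][0])      # one example, to inspect its fields
```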
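The notes in the card say that Code Debug, Code Translate and Code Requirement Switch are scored with pass@1 under greedy decoding, i.e. one generated solution per problem. In that setting pass@1 reduces to the fraction of problems whose single candidate passes all tests; the sketch below illustrates that reduction, with `run_tests` as a hypothetical stand-in for the benchmark's actual execution harness. Code Polish uses Mean OptScore, which is defined in the paper rather than here, so it is not covered by this sketch.

```python
# pass@1 under greedy decoding: one candidate per problem, so the metric is
# simply the fraction of problems whose candidate passes every test.
# `run_tests` is a hypothetical placeholder for a real judge that executes
# a candidate program against a problem's hidden test cases.
from typing import Callable, Mapping, Sequence


def pass_at_1(
    candidates: Sequence[str],
    problems: Sequence[Mapping],
    run_tests: Callable[[str, Mapping], bool],
) -> float:
    """Fraction of problems solved by their single greedy candidate."""
    if not problems:
        return 0.0
    solved = sum(run_tests(code, prob) for code, prob in zip(candidates, problems))
    return solved / len(problems)


# Toy usage with a trivial judge (illustrative only):
if __name__ == "__main__":
    problems = [{"id": 1}, {"id": 2}]
    candidates = ["print(42)", "print(0)"]
    judge = lambda code, prob: prob["id"] == 1  # pretend only problem 1 passes
    print(pass_at_1(candidates, problems, judge))  # 0.5
```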