KerwinJob committed on
Commit 9bd3f75
1 Parent(s): 9bba1c7

Update README.md

Files changed (1)
  1. README.md +2 -17
README.md CHANGED
@@ -7,23 +7,8 @@ license: apache-2.0
 
 
 ## Introduction
-Large Language Models (LLMs) for code are rapidly evolving, with code editing
-emerging as a critical capability. We introduce CodeEditorBench, a pioneering
-evaluation framework designed to rigorously assess the performance of LLMs
-in code editing tasks, including debugging, translating, polishing, and require-
-ment switching. Unlike existing benchmarks focusing solely on code generation,
-CodeEditorBench emphasizes real-world scenarios and practical aspects of software
-development. We curated diverse coding challenges and scenarios from five sources,
-covering various programming languages, complexity levels, and editing tasks.
-Evaluating 19 LLMs revealed that closed-source models, particularly Gemini-Ultra
-and GPT-4, outperform open-source models in CodeEditorBench, highlighting
-differences in model performance based on problem type and prompt sensitivity.
-CodeEditorBench aims to catalyze advancements in LLMs by providing a robust
-platform for assessing code editing capabilities. We will release all prompts and
-datasets to enable the community to expand the dataset and benchmark emerging
-LLMs. By introducing CodeEditorBench, we contribute to the advancement of
-LLMs in code editing and provide a valuable resource for researchers and practi-
-tioners in the field.
+Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models, particularly Gemini-Ultra and GPT-4, outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities.
+CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
 
 ![Alt text](tech_route.png)
 
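
Once the prompts and datasets mentioned above are released, a minimal sketch of loading them from this dataset card with the Hugging Face `datasets` library might look like the following. The repository id and field names here are placeholders and assumptions, not confirmed by this README.

```python
# Minimal sketch, assuming the data is published as a standard Hub dataset.
from datasets import load_dataset

# "<org>/CodeEditorBench" is a hypothetical repository id; replace it with the
# actual id shown on this dataset page.
ds = load_dataset("<org>/CodeEditorBench", split="train")

# Field names depend on the released schema; print one record to inspect it.
print(ds[0])
```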