---
license: bsd-3-clause-clear
dataset_info:
  features:
  - name: repository
    dtype: string
  - name: repo_id
    dtype: string
  - name: target_module_path
    dtype: string
  - name: prompt
    dtype: string
  - name: relavent_test_path
    dtype: string
  - name: full_function
    dtype: string
  - name: function_name
    dtype: string
  splits:
  - name: train
    num_bytes: 5410189
    num_examples: 980
  download_size: 2045590
  dataset_size: 5410189
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, on Python coding problems from HumanEval and MBPP. A natural question, then, is whether LLMs match the code completion performance of human developers. Unfortunately, existing manually crafted or simple (e.g., single-line) code generation benchmarks cannot answer this question, since their tasks fail to represent real-world software development. In addition, existing benchmarks often rely on weak code correctness metrics, which can lead to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. REPOCOD also has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. Each task includes, on average, 313.5 developer-written test cases for more reliable correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, demonstrating the need for stronger LLMs that can help developers in real-world software development.
## Usage

```
from datasets import load_dataset

data = load_dataset('lt-asset/REPOCOD')
print(data)

# DatasetDict({
#     train: Dataset({
#         features: ['repository', 'repo_id', 'target_module_path', 'prompt', 'relavent_test_path', 'full_function', 'function_name'],
#         num_rows: 980
#     })
# })
```
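
Each element of the train split is a plain Python dict keyed by the feature names above, so individual fields can be read directly. Below is a minimal sketch of inspecting one sample using standard `datasets` indexing; the truncation to 300 characters is only for readability.

```
# Look at the first sample in the train split
sample = data['train'][0]

print(sample['repository'])           # source project, e.g. "seaborn"
print(sample['function_name'])        # fully qualified name of the target function
print(sample['prompt'][:300])         # signature + docstring given as the completion prompt
print(sample['full_function'][:300])  # developer-written canonical solution
```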

## Data Fields
- repository: the source repository of the current sample
- repo_id: the unique index of the sample within its source repository
- target_module_path: the path of the file containing the current sample, relative to the root of the source repository
- prompt: the developer-provided function signature and docstring
- relavent_test_path: the path to the relevant test cases
- full_function: the canonical solution of the current sample
- function_name: the name of the target function (current sample)
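
As a quick sanity check on these fields, the split can be converted to a pandas DataFrame, for example to see how the 980 tasks are spread across the 11 source repositories. A minimal sketch, assuming `data` was loaded as in the Usage section:

```
# Convert the train split to pandas for ad-hoc inspection
df = data['train'].to_pandas()

# Number of tasks contributed by each source repository
print(df['repository'].value_counts())

# Length of the canonical solutions in characters (token counts are reported in the paper)
print(df['full_function'].str.len().describe())
```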

## Example

```
"repository": "seaborn",                          # collected from seaborn
"repo_id": "0",                                   # first sample from seaborn
"target_module_path": "seaborn/_core/scales.py",  # the target function is defined in this file
"prompt": "    def label(
        self,
        formatter: Formatter | None = None, *,
        like: str | Callable | None = None,
        base: int | None | Default = default,
        unit: str | None = None,
    ) -> Continuous: ....",                        # the function signature and docstring of the target function
"relavent_test_path": "/usr/src/app/target_test_cases/failed_tests_Continuous.label.txt",  # path to the relevant test cases for the target function
"full_function": "    def label(
        self,
        formatter: Formatter | None = None, *,
        like: str | Callable | None = None,
        base: int | None | Default = default,
        unit: str | None = None,
    ) -> Continuous: ....",                        # the full target function, including its signature and docstring
"function_name": "Continuous.label"               # the name of the target function
```
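
These fields are all that is needed to drive a code LLM over the benchmark and store its completions for later test-based scoring. The sketch below is illustrative only: `generate_completion` is a hypothetical stand-in for your own model call, and the snippet does not reproduce REPOCOD's actual evaluation, which runs the developer-written tests referenced by `relavent_test_path` inside the corresponding repository.

```
import json

def generate_completion(prompt: str) -> str:
    """Hypothetical placeholder for a call to the code LLM under evaluation."""
    raise NotImplementedError

results = []
for sample in data['train']:
    results.append({
        'repository': sample['repository'],
        'repo_id': sample['repo_id'],
        'function_name': sample['function_name'],
        'completion': generate_completion(sample['prompt']),
    })

# Persist completions; correctness is judged separately by running the repository's tests.
with open('repocod_completions.json', 'w') as f:
    json.dump(results, f, indent=2)
```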