tianyang committed on
Commit 514d49e
1 Parent(s): 87b96d1

update readme

Files changed (2):
  1. README.md +60 -92
  2. repobench-r.py +8 -8
README.md CHANGED
@@ -18,116 +18,84 @@ task_ids:
  
  # Dataset Card for RepoBench-R
  
- ## Table of Contents
- - [Dataset Card for RepoBench-R](#dataset-card-for-repobench-r)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks](#supported-tasks)
-   - [Dataset Structure](#dataset-structure)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
- 
  ## Dataset Description
  
  - **Homepage:** https://github.com/Leolty/repobench
  - **Paper:** https://arxiv.org/abs/2306.03091
  
- ### Dataset Summary
+ ## Dataset Summary
  
- RepoBench-R is a subtask of [RepoBench](https://github.com/Leolty/repobench), targeting the retrieval component of a repository-level auto-completion
- system, focusing on extracting the most relevant code snippet from a project repository for next-line
+ RepoBench-R is a subtask of [RepoBench](https://github.com/Leolty/repobench), targeting the retrieval component of a repository-level auto-completion system, focusing on retrieving the most relevant code snippet from a project repository for next-line
  code prediction.
  
- ### Supported Tasks
- 
- The dataset supports two programming languages, Python and Java, and contains two settings:
- 
- - `cff`: short for `cross_file_first`, where the cross-file module in the next line is first used in the current file.
- - `cfr`: short for `cross_file_random`, where the cross-file module in the next line is not first used in the current file.
- 
- For each setting, we provide `train` and `test` subsets, each with two levels of difficulty: `easy` and `hard`.
- 
- Suppose the number of code snippets in the context is \\(k\\):
- 
- - For the `easy` subset, we have \\(5 \leq k < 10\\).
- - For the `hard` subset, we have \\(k \geq 10\\).
- 
- ## Dataset Structure
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- [More Information Needed]
- 
- ### Source Data
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed]
- 
- #### Who are the source language producers?
- 
- [More Information Needed]
- 
- ### Annotations
- 
- #### Annotation process
- 
- [More Information Needed]
- 
- #### Who are the annotators?
- 
- [More Information Needed]
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed]
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed]
- 
- ### Discussion of Biases
- 
- [More Information Needed]
- 
- ### Other Known Limitations
- 
- [More Information Needed]
- 
- ## Additional Information
- 
- ### Licensing Information
- 
- [More Information Needed]
- 
- ### Citation Information
- 
- [More Information Needed]
- 
- ### Contributions
- 
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
+ ## Settings
+ 
+ - `cff`: short for cross_file_first, indicating that the cross-file module in the next line is first used in the current file.
+ - `cfr`: short for cross_file_random, indicating that the cross-file module in the next line is not first used in the current file.
+ 
+ ## Supported Tasks
+ 
+ The dataset has 4 subsets:
+ 
+ - `python-cff`: Python dataset with the `cff` setting.
+ - `python-cfr`: Python dataset with the `cfr` setting.
+ - `java-cff`: Java dataset with the `cff` setting.
+ - `java-cfr`: Java dataset with the `cfr` setting.
+ 
+ Each subset has 4 splits:
+ 
+ - `train-easy`: training set with easy difficulty, where the number of code snippets in the context \\(k\\) satisfies \\(5 \leq k < 10\\).
+ - `train-hard`: training set with hard difficulty, where the number of code snippets in the context \\(k\\) satisfies \\(k \geq 10\\).
+ - `test-easy`: test set with easy difficulty.
+ - `test-hard`: test set with hard difficulty.
+ 
+ ## Loading Data
+ 
+ For example, if you want to load the `test-easy` split of the Python `cff` (cross-file-first) subset, you can use the following code:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("tianyang/repobench-r", "python-cff", split="test-easy")
+ ```
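Assuming the call above returns a standard `datasets.Dataset`, a quick sanity check might look like this (a minimal sketch, not part of the commit's README):

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-r", "python-cff", split="test-easy")

# Peek at the split: its size, and the fields documented under
# "Dataset Structure" below.
print(len(dataset))
print(dataset[0].keys())
```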
+ 
+ ## Dataset Structure
+ 
+ ```
+ {
+   "repo_name": "repository name of the data point",
+   "file_path": "path/to/file",
+   "context": [
+     "snippet 1",
+     "snippet 2",
+     // ...
+     "snippet k"
+   ],
+   "import_statement": "all import statements in the file",
+   "gold_snippet_index": 2, // the index of the gold snippet in the context list (0 to k-1)
+   "code": "the code for next-line prediction",
+   "next_line": "the next line of the code"
+ }
+ ```
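Given that schema, the retrieval target of an example can be recovered from `context` and `gold_snippet_index`; a minimal sketch, assuming an example loaded as shown earlier and the field names as in the schema above:

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-r", "python-cff", split="test-easy")
example = dataset[0]

# The gold snippet is the context entry a retrieval system should rank first.
gold_snippet = example["context"][example["gold_snippet_index"]]

print(example["next_line"])     # ground-truth next line for the completion task
print(len(example["context"]))  # k, which determines the easy/hard difficulty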
+ 
+ ## Licensing Information
+ 
+ CC BY-NC-ND 4.0
+ 
+ ## Citation Information
+ 
+ ```bibtex
+ @misc{liu2023repobench,
+   title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
+   author={Tianyang Liu and Canwen Xu and Julian McAuley},
+   year={2023},
+   eprint={2306.03091},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+ 
+ ## Contributions
+ 
+ Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset.
repobench-r.py CHANGED
@@ -117,20 +117,20 @@ class RepoBenchR(datasets.GeneratorBasedBuilder):
  
          return [
              datasets.SplitGenerator(
-                 name=datasets.Split("train_easy"),
-                 gen_kwargs={"data_dir": data_dir, "split": "train_easy"},
+                 name=datasets.Split("train-easy"),
+                 gen_kwargs={"data_dir": data_dir, "split": "train-easy"},
              ),
              datasets.SplitGenerator(
                  name=datasets.Split("train_hard"),
-                 gen_kwargs={"data_dir": data_dir, "split": "train_hard"},
+                 gen_kwargs={"data_dir": data_dir, "split": "train-hard"},
              ),
              datasets.SplitGenerator(
-                 name=datasets.Split("test_easy"),
-                 gen_kwargs={"data_dir": data_dir, "split": "test_easy"},
+                 name=datasets.Split("test-easy"),
+                 gen_kwargs={"data_dir": data_dir, "split": "test-easy"},
              ),
              datasets.SplitGenerator(
-                 name=datasets.Split("test_hard"),
-                 gen_kwargs={"data_dir": data_dir, "split": "test_hard"},
+                 name=datasets.Split("test-hard"),
+                 gen_kwargs={"data_dir": data_dir, "split": "test-hard"},
              )
          ]
  
@@ -139,7 +139,7 @@ class RepoBenchR(datasets.GeneratorBasedBuilder):
          with gzip.open(data_dir, "rb") as f:
              data = pickle.load(f)
  
-         subset, level = split.split("_")
+         subset, level = split.split("-")
  
          for i, example in enumerate(data[subset][level]):
              yield i, {
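
The parsing that the updated `_generate_examples` performs can be reproduced outside the loader; a rough sketch, assuming a local copy of one gzipped pickle archive (the file name below is hypothetical, the real path comes from the loader's `data_dir`):

```python
import gzip
import pickle

# Mirror the loader's logic: a split name like "test-easy" is split on "-"
# into the two keys used to index the unpickled dict.
split = "test-easy"
subset, level = split.split("-")  # -> ("test", "easy")

# Hypothetical local path standing in for the downloaded archive.
with gzip.open("python-cff.gz", "rb") as f:
    data = pickle.load(f)

# Per the generator above, examples live under data[subset][level].
for i, example in enumerate(data[subset][level]):
    print(i, example["repo_name"])
    if i >= 2:  # show only the first few
        break
```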