---
license: apache-2.0
task_categories:
- text-classification
pretty_name: CuBERT ETH Py150 Benchmarks
arxiv: 2001.00059
dataset_info:
- config_name: exception_datasets
  features:
  - name: function
    dtype: string
  - name: label
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 25423003
    num_examples: 18480
  - name: dev
    num_bytes: 2845822
    num_examples: 2088
  - name: test
    num_bytes: 14064500
    num_examples: 10348
  download_size: 16935273
  dataset_size: 42333325
- config_name: function_docstring_datasets
  features:
  - name: function
    dtype: string
  - name: docstring
    dtype: string
  - name: label
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 261700491
    num_examples: 340846
  - name: dev
    num_bytes: 28498757
    num_examples: 37592
  - name: test
    num_bytes: 141660242
    num_examples: 186698
  download_size: 121724722
  dataset_size: 431859490
- config_name: swapped_operands_datasets
  features:
  - name: function
    dtype: string
  - name: label
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 271097336
    num_examples: 236246
  - name: dev
    num_bytes: 29986397
    num_examples: 26118
  - name: test
    num_bytes: 148544957
    num_examples: 130972
  download_size: 105243573
  dataset_size: 449628690
- config_name: variable_misuse_datasets
  features:
  - name: function
    dtype: string
  - name: label
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 474283355
    num_examples: 700708
  - name: dev
    num_bytes: 50447683
    num_examples: 75478
  - name: test
    num_bytes: 251591448
    num_examples: 378440
  download_size: 231302039
  dataset_size: 776322486
- config_name: variable_misuse_repair_datasets
  features:
  - name: function
    sequence: string
  - name: target_mask
    sequence: int64
  - name: error_location_mask
    sequence: int64
  - name: candidate_mask
    sequence: int64
  - name: provenance
    dtype: string
  splits:
  - name: train
    num_bytes: 4417505142
    num_examples: 700708
  - name: dev
    num_bytes: 469436314
    num_examples: 75478
  - name: test
    num_bytes: 2331355329
    num_examples: 378440
  download_size: 498300512
  dataset_size: 7218296785
- config_name: wrong_binary_operator_datasets
  features:
  - name: function
    dtype: string
  - name: label
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 439948844
    num_examples: 459400
  - name: dev
    num_bytes: 47620848
    num_examples: 49804
  - name: test
    num_bytes: 239409450
    num_examples: 251804
  download_size: 163088211
  dataset_size: 726979142
configs:
- config_name: exception_datasets
  data_files:
  - split: train
    path: exception_datasets/train-*
  - split: dev
    path: exception_datasets/dev-*
  - split: test
    path: exception_datasets/test-*
- config_name: function_docstring_datasets
  data_files:
  - split: train
    path: function_docstring_datasets/train-*
  - split: dev
    path: function_docstring_datasets/dev-*
  - split: test
    path: function_docstring_datasets/test-*
- config_name: swapped_operands_datasets
  data_files:
  - split: train
    path: swapped_operands_datasets/train-*
  - split: dev
    path: swapped_operands_datasets/dev-*
  - split: test
    path: swapped_operands_datasets/test-*
- config_name: variable_misuse_datasets
  data_files:
  - split: train
    path: variable_misuse_datasets/train-*
  - split: dev
    path: variable_misuse_datasets/dev-*
  - split: test
    path: variable_misuse_datasets/test-*
- config_name: variable_misuse_repair_datasets
  data_files:
  - split: train
    path: variable_misuse_repair_datasets/train-*
  - split: dev
    path: variable_misuse_repair_datasets/dev-*
  - split: test
    path: variable_misuse_repair_datasets/test-*
- config_name: wrong_binary_operator_datasets
  data_files:
  - split: train
    path: wrong_binary_operator_datasets/train-*
  - split: dev
    path: wrong_binary_operator_datasets/dev-*
  - split: test
    path: wrong_binary_operator_datasets/test-*
tags:
- code
---

# CuBERT ETH Py150 Open Benchmarks

This is an unofficial Hugging Face upload of the [CuBERT ETH Py150 Open Benchmarks](https://github.com/google-research/google-research/tree/master/cubert). The dataset was released along with [Learning and Evaluating Contextual Embedding of Source Code](https://arxiv.org/abs/2001.00059).

---

## Benchmarks and Fine-Tuned Models

Here we describe the 6 Python benchmarks we created. All 6 benchmarks were derived from [ETH Py150 Open](https://github.com/google-research-datasets/eth_py150_open). All examples are stored as sharded text files. Each text line corresponds to a separate example encoded as a JSON object. For each dataset, we release separate training/validation/testing splits along the same boundaries that ETH Py150 Open uses to assign its files to splits. The fine-tuned models are the checkpoints of each model with the highest validation accuracy.
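
For reference, below is a minimal sketch of loading one benchmark configuration with the Hugging Face `datasets` library. The repository identifier is a placeholder (not an official path); the configuration and split names are the ones listed in the YAML header of this card.

```python
from datasets import load_dataset

# REPO_ID is a hypothetical placeholder; substitute the actual "owner/name"
# path of this dataset on the Hugging Face Hub.
REPO_ID = "owner/cubert-eth-py150-benchmarks"

# Configs: exception_datasets, function_docstring_datasets, swapped_operands_datasets,
# variable_misuse_datasets, variable_misuse_repair_datasets, wrong_binary_operator_datasets.
# Splits: train, dev, test.
exceptions = load_dataset(REPO_ID, "exception_datasets", split="train")

example = exceptions[0]
print(example["label"])           # the masked exception type, e.g. "ValueError"
print(example["function"][:200])  # function source containing the "__HOLE__" token
print(example["info"])            # provenance: repository, filepath, function name
```

The other configurations and the `dev`/`test` splits load the same way; only the field names differ per benchmark, as described below.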

1. **Function-docstring classification**. Combinations of functions with their correct or incorrect documentation string, used to train a classifier that can tell which pairs go together. The JSON fields are:
     * `function`: string, the source code of a function as text
     * `docstring`: string, the documentation string for that function. Note that the string is unquoted. To be able to properly tokenize it with the CuBERT tokenizers, you have to wrap it in quotes first. For example, in Python, use `string_to_tokenize = f'"""{docstring}"""'`.
     * `label`: string, one of (“Incorrect”, “Correct”), the label of the example.
     * `info`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, the function name and, for “Incorrect” examples, the function whose docstring was substituted.
1. **Exception classification**. Combinations of functions where one exception type has been masked, along with a label indicating the masked exception type. The JSON fields are:
     * `function`: string, the source code of a function as text, in which one exception type has been replaced with the special token “__HOLE__”.
     * `label`: string, one of (`ValueError`, `KeyError`, `AttributeError`, `TypeError`, `OSError`, `IOError`, `ImportError`, `IndexError`, `DoesNotExist`, `KeyboardInterrupt`, `StopIteration`, `AssertionError`, `SystemExit`, `RuntimeError`, `HTTPError`, `UnicodeDecodeError`, `NotImplementedError`, `ValidationError`, `ObjectDoesNotExist`, `NameError`, `None`), the masked exception type. Note that `None` never occurs in the data and will be removed in a future release.
     * `info`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, and the fully-qualified function name.
1. **Variable-misuse classification**. Combinations of functions where one use of a variable may have been replaced with another variable defined in the same context, along with a label indicating if this bug-injection has occurred. The JSON fields are:
     * `function`: string, the source code of a function as text.
     * `label`: string, one of (“Correct”, “Variable misuse”) indicating if this is a buggy or bug-free example.
     * `info`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, the function, and whether the example is bugfree (marked “original”) or the variable substitution that has occurred (e.g., “correct_variable” → “incorrect_variable”).
1. **Swapped-operand classification**. Combinations of functions in which the arguments of one binary operator have been swapped to create a buggy example, or left undisturbed, along with a label indicating if this bug-injection has occurred. The JSON fields are:
     * `function`: string, the source code of a function as text.
     * `label`: string, one of (“Correct”, “Swapped operands”) indicating if this is a buggy or bug-free example.
     * `info`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, the function, and whether the example is bugfree (marked “original”) or the operand swap has occurred (e.g., “swapped operands of `not in`”).
1. **Wrong-binary-operator classification**. Combinations of functions where one binary operator has been swapped with another, to create a buggy example, or left undisturbed, along with a label indicating if this bug-injection has occurred. The JSON fields are:
     * `function`: string, the source code of a function as text.
     * `label`: string, one of (“Correct”, “Wrong binary operator”) indicating if this is a buggy or bug-free example.
     * `info`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, the function, and whether the example is bugfree (marked “original”) or the operator replacement has occurred (e.g., “`==` → `!=`”).
1. **Variable-misuse localization and repair**. Combinations of functions where one use of a variable may have been replaced with another variable defined in the same context, along with information that can be used to localize and repair the bug, as well as the location of the bug if such a bug exists. The JSON fields are:
     * `function`: a list of strings, the source code of a function, tokenized with the CuBERT subword vocabulary released as part of the original CuBERT distribution. Note that, unlike the other task datasets, this dataset gives a tokenized function rather than the code as a single string (see the decoding sketch after this list).
     * `target_mask`: a list of integers (0 or 1). If the integer at some position is 1, then the token at the corresponding position of the function token list is a correct repair for the introduced bug. If a variable has been split into multiple tokens, only the first subtoken is marked in this mask. If the example is bug-free, all integers are 0.
     * `error_location_mask`: a list of integers (0 or 1). If the integer at some position is 1, then there is a variable-misuse bug at the corresponding location of the tokenized function. In a bug-free example, the first integer is 1. There is exactly one integer set to 1 for all examples. If a variable has been split into multiple tokens, only the first subtoken is marked in this mask.
     * `candidate_mask`: a list of integers (0 or 1). If the integer at some position is 1, then the variable starting at that position in the tokenized function is a candidate to consider when repairing a bug. Candidates are all variables defined in the function parameters or via variable declarations in the function. If a variable has been split into multiple tokens, only the first subtoken is marked in this mask, for each candidate.
     * `provenance`: string, an unformatted description of how the example was constructed, including the source dataset (always “ETHPy150Open”), the repository and filepath, the function, and whether the example is bugfree (marked “original”) or the buggy/repair token positions and variables (e.g., “16/18 `kwargs` → `self`”, where 16 is the position of the introduced error and 18 is the position of the repair).
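
To make the parallel mask lists of the localization-and-repair benchmark concrete, here is a minimal sketch of decoding one example from the `variable_misuse_repair_datasets` configuration. `REPO_ID` is again a hypothetical placeholder for this repository's Hub identifier; the field semantics are exactly those described above.

```python
from datasets import load_dataset

REPO_ID = "owner/cubert-eth-py150-benchmarks"  # hypothetical placeholder, as above

repair = load_dataset(REPO_ID, "variable_misuse_repair_datasets", split="dev")
ex = repair[0]

tokens = ex["function"]  # already tokenized: a list of subtokens

# Bug-free examples have an all-zero target_mask.
is_buggy = any(ex["target_mask"])

# Exactly one entry of error_location_mask is 1: the first subtoken of the
# misused variable for buggy examples, or position 0 for bug-free ones.
error_pos = ex["error_location_mask"].index(1)

# First subtokens of all repair candidates (parameters and locally defined variables).
candidates = [tokens[i] for i, flag in enumerate(ex["candidate_mask"]) if flag]

# First subtokens of every correct repair for the injected bug (empty if bug-free).
repairs = [tokens[i] for i, flag in enumerate(ex["target_mask"]) if flag]

print("buggy:", is_buggy)
print("error token:", tokens[error_pos])
print("candidates:", candidates)
print("correct repairs:", repairs)
print("provenance:", ex["provenance"])
```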
  
## Citation
```bibtex
@inproceedings{cubert,
author    = {Aditya Kanade and
             Petros Maniatis and
             Gogul Balakrishnan and
             Kensen Shi},
title     = {Learning and evaluating contextual embedding of source code},
booktitle = {Proceedings of the 37th International Conference on Machine Learning,
               {ICML} 2020, 13-18 July 2020},
series    = {Proceedings of Machine Learning Research},
publisher = {{PMLR}},
year      = {2020},
}
```