parquet-converter committed on
Commit
6c83f87
1 Parent(s): 3dcf6eb

Update parquet files

README.md DELETED
@@ -1,241 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - code
- license:
- - c-uda
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-generation
- - fill-mask
- task_ids:
- - language-modeling
- - masked-language-modeling
- pretty_name: CodeXGlueCcCodeCompletionToken
- dataset_info:
- - config_name: java
-   features:
-   - name: id
-     dtype: int32
-   - name: code
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 128312061
-     num_examples: 12934
-   - name: validation
-     num_bytes: 30259174
-     num_examples: 7189
-   - name: test
-     num_bytes: 43027956
-     num_examples: 8268
-   download_size: 126856519
-   dataset_size: 201599191
- - config_name: python
-   features:
-   - name: id
-     dtype: int32
-   - name: path
-     dtype: string
-   - name: code
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 684319575
-     num_examples: 100000
-   - name: test
-     num_bytes: 333978088
-     num_examples: 50000
-   download_size: 199067128
-   dataset_size: 1018297663
- ---
- # Dataset Card for "code_x_glue_cc_code_completion_token"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token
-
- ### Dataset Summary
-
- CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token
-
- Predict the next code token given a context of previous tokens. Models are evaluated by token-level accuracy.
- Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token-level code completion. The token-level task is analogous to language modeling: models should be able to predict the next token of arbitrary type.
-
- ### Supported Tasks and Leaderboards
-
- - `language-modeling`: The dataset can be used to train a model for completing single code tokens, as sketched below.
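
A minimal loading sketch (not part of the original card; it assumes the `datasets` library is installed and uses only the config and field names documented on this page):

```python
from datasets import load_dataset

# Load the Java configuration of the token-level code completion dataset.
ds = load_dataset("code_x_glue_cc_code_completion_token", "java")

sample = ds["train"][0]
print(sample["id"])         # index of the sample
print(sample["code"][:10])  # first few code tokens, starting with "<s>"
```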
-
- ### Languages
-
- - Java **programming** language
- - Python **programming** language
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### java
-
- An example from the 'test' split looks as follows.
- ```
- {
- "code": ["<s>", "package", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "demo", ";", "import", "java", ".", "io", ".", "BufferedReader", ";", "import", "java", ".", "io", ".", "ByteArrayInputStream", ";", "import", "java", ".", "io", ".", "IOException", ";", "import", "java", ".", "io", ".", "InputStreamReader", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "Clara", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "inflater", ".", "LayoutInflaterException", ";", "import", "com", ".", "vaadin", ".", "Application", ";", "import", "com", ".", "vaadin", ".", "terminal", ".", "ThemeResource", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ".", "ClickEvent", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Component", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Embedded", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalSplitPanel", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "TextArea", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "VerticalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ".", "Notification", ";", "@", "SuppressWarnings", "(", "\"serial\"", ")", "public", "class", "DemoApplication", "extends", "Application", "{", "private", "DemoController", "controller", ";", "private", "TextArea", "xmlArea", ";", "private", "HorizontalSplitPanel", "split", "=", "new", "HorizontalSplitPanel", "(", ")", ";", "private", "Window", "mainWindow", ";", "@", "Override", "public", "void", "init", "(", ")", "{", "setTheme", "(", "\"clara\"", ")", ";", "setMainWindow", "(", "mainWindow", "=", "new", "Window", "(", ")", ")", ";", "controller", "=", "new", "DemoController", "(", "mainWindow", ")", ";", "mainWindow", ".", "setContent", "(", "split", ")", ";", "VerticalLayout", "editor", "=", "new", "VerticalLayout", "(", ")", ";", "editor", ".", "setSpacing", "(", "true", ")", ";", "editor", ".", "setMargin", "(", "false", ",", "false", ",", "false", ",", "true", ")", ";", "editor", ".", "setHeight", "(", "\"100%\"", ")", ";", "editor", ".", "addComponent", "(", "xmlArea", "=", "createXmlArea", "(", ")", ")", ";", "editor", ".", "setExpandRatio", "(", "xmlArea", ",", "1.0f", ")", ";", "editor", ".", "addComponent", "(", "createUpdateButton", "(", ")", ")", ";", "HorizontalLayout", "wrapper", "=", "new", "HorizontalLayout", "(", ")", ";", "wrapper", ".", "setMargin", "(", "true", ")", ";", "wrapper", ".", "setSizeFull", "(", ")", ";", "wrapper", ".", "addComponent", "(", "createLogo", "(", ")", ")", ";", "wrapper", ".", "addComponent", "(", "editor", ")", ";", "wrapper", ".", "setExpandRatio", "(", "editor", ",", "1.0f", ")", ";", "split", ".", "setFirstComponent", "(", "wrapper", ")", ";", "updateLayout", "(", ")", ";", "}", "private", "Component", "createLogo", "(", ")", "{", "Embedded", "logo", "=", "new", "Embedded", "(", "null", ",", "new", "ThemeResource", "(", "\"\"", ")", ")", ";", "logo", ".", "setHeight", "(", "\"90px\"", ")", ";", "logo", ".", "setWidth", "(", "\"90px\"", ")", ";", "return", "logo", ";", "}", "private", "TextArea", "createXmlArea", "(", ")", "{", "TextArea", "area", "=", "new", "TextArea", "(", ")", ";", "area", ".", "setStyleName", "(", "\"xml-area\"", ")", ";", "area", ".", "setSizeFull", "(", ")", ";", "area", ".", "setValue", "(", 
"readStartingPoint", "(", ")", ")", ";", "return", "area", ";", "}", "private", "Button", "createUpdateButton", "(", ")", "{", "return", "new", "Button", "(", "\"Update\"", ",", "new", "Button", ".", "ClickListener", "(", ")", "{", "public", "void", "buttonClick", "(", "ClickEvent", "event", ")", "{", "updateLayout", "(", ")", ";", "}", "}", ")", ";", "}", "private", "String", "readStartingPoint", "(", ")", "{", "BufferedReader", "reader", "=", "null", ";", "try", "{", "reader", "=", "new", "BufferedReader", "(", "new", "InputStreamReader", "(", "getClass", "(", ")", ".", "getClassLoader", "(", ")", ".", "getResourceAsStream", "(", "\"\"", ")", ")", ")", ";", "StringBuilder", "xml", "=", "new", "StringBuilder", "(", ")", ";", "String", "line", ";", "while", "(", "(", "line", "=", "reader", ".", "readLine", "(", ")", ")", "!=", "null", ")", "{", "xml", ".", "append", "(", "line", ")", ";", "xml", ".", "append", "(", "\"n\"", ")", ";", "}", "return", "xml", ".", "toString", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "finally", "{", "if", "(", "reader", "!=", "null", ")", "{", "try", "{", "reader", ".", "close", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "}", "}", "return", "null", ";", "}", "private", "void", "updateLayout", "(", ")", "{", "try", "{", "Component", "c", "=", "Clara", ".", "create", "(", "new", "ByteArrayInputStream", "(", "xmlArea", ".", "getValue", "(", ")", ".", "toString", "(", ")", ".", "getBytes", "(", ")", ")", ",", "controller", ")", ";", "split", ".", "replaceComponent", "(", "split", ".", "getSecondComponent", "(", ")", ",", "c", ")", ";", "}", "catch", "(", "LayoutInflaterException", "e", ")", "{", "mainWindow", ".", "showNotification", "(", "e", ".", "getMessage", "(", ")", ",", "Notification", ".", "TYPE_ERROR_MESSAGE", ")", ";", "}", "}", "}", "</s>"],
- "id": 0
- }
- ```
-
- #### python
-
- An example from the 'train' split looks as follows.
- ```
- {
- "code": ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>", "from", "fund", "import", "InstantPaymentNotificationHandler", "<EOL>", "from", "fund", "import", "ThankYouHandler", "<EOL>", "from", "view", "import", "*", "<EOL>", "mapping", "=", "[", "(", "<EOL>", "r'/'", ",", "<EOL>", "Index", "<EOL>", ")", ",", "(", "<EOL>", "r'/ipn'", ",", "<EOL>", "InstantPaymentNotificationHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/thank-you'", ",", "<EOL>", "ThankYouHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/about\\/?'", ",", "<EOL>", "About", "<EOL>", ")", ",", "(", "<EOL>", "r'/guide\\/?'", ",", "<EOL>", "Guide", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Download", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Standards", "<EOL>", ")", ",", "(", "<EOL>", "r'/community\\/?'", ",", "<EOL>", "Community", "<EOL>", ")", ",", "(", "<EOL>", "r'/news\\/?'", ",", "<EOL>", "News", "<EOL>", ")", ",", "(", "<EOL>", "r'/support\\/?'", ",", "<EOL>", "Support", "<EOL>", ")", ",", "(", "<EOL>", "r'/contact\\/?'", ",", "<EOL>", "Contact", "<EOL>", ")", ",", "(", "<EOL>", "r'/press\\/?'", ",", "<EOL>", "Press", "<EOL>", ")", ",", "(", "<EOL>", "r'/legal/terms'", ",", "<EOL>", "Terms", "<EOL>", ")", ",", "(", "<EOL>", "r'/library\\/?'", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Users", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "User", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectSuccess", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectError", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectAfterDelete", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Moderate", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Bootstrap", "<EOL>", ")", ",", "(", "<EOL>", "r'/activity'", ",", "<EOL>", "ActivityScreen", "<EOL>", ")", ",", "(", "<EOL>", "r'/txns'", ",", "<EOL>", "TxnList", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "MessageStrings", "<EOL>", ")", ",", "(", "<EOL>", "r'/.*'", ",", "<EOL>", "NotFound", "<EOL>", ")", "<EOL>", "]", "</s>"],
- "id": 0,
- "path": "00/wikihouse/urls.py\n"
- }
- ```
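
The `code` field is a whitespace-tokenized stream wrapped in `<s>`/`</s>`, and the python config additionally marks line breaks with `<EOL>`. A rough detokenization sketch (illustrative only, not part of the original card):

```python
def detokenize(tokens):
    """Approximate source text from the token stream; exact whitespace is not recoverable."""
    body = [t for t in tokens if t not in ("<s>", "</s>")]
    return " ".join(body).replace(" <EOL> ", "\n").replace("<EOL>", "\n")
```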
-
- ### Data Fields
-
- In the following, each data field is explained for each config. The data fields are the same among all splits. A short usage sketch follows the python table below.
-
- #### java
-
- |field name| type           | description        |
- |----------|----------------|--------------------|
- |id        |int32           | Index of the sample|
- |code      |Sequence[string]| Code Tokens        |
-
- #### python
-
- |field name| type           | description                 |
- |----------|----------------|-----------------------------|
- |id        |int32           | Index of the sample         |
- |path      |string          | Original path in the dataset|
- |code      |Sequence[string]| Code Tokens                 |
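
Since `code` already holds the token sequence, next-token (language-modeling) pairs can be derived directly from it. A hedged sketch (the windowing scheme and `context_size` are illustrative choices, not part of the dataset):

```python
from datasets import load_dataset

ds = load_dataset("code_x_glue_cc_code_completion_token", "python", split="train")

def next_token_pairs(tokens, context_size=8):
    """Yield (context, target) pairs for token-level completion."""
    for i in range(1, len(tokens)):
        yield tokens[max(0, i - context_size):i], tokens[i]

tokens = ds[0]["code"]                  # e.g. ["<s>", "from", "bootstrap", "import", ...]
context, target = next(next_token_pairs(tokens))
print(context, "->", target)            # ['<s>'] -> 'from'
```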
-
- ### Data Splits
-
- #### java
-
- |    |train|validation|test|
- |----|----:|---------:|---:|
- |java|12934|      7189|8268|
-
- #### python
-
- |      |train |test |
- |------|-----:|----:|
- |python|100000|50000|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- https://github.com/microsoft, https://github.com/madlag
-
- ### Licensing Information
-
- Computational Use of Data Agreement (C-UDA) License.
-
- ### Citation Information
-
- ```
- @article{raychev2016probabilistic,
-   title={Probabilistic Model for Code with Decision Trees},
-   author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
-   journal={ACM SIGPLAN Notices},
-   pages={731--747},
-   year={2016},
-   publisher={ACM New York, NY, USA}
- }
- @inproceedings{allamanis2013mining,
-   title={Mining Source Code Repositories at Massive Scale using Language Modeling},
-   author={Allamanis, Miltiadis and Sutton, Charles},
-   booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
-   pages={207--216},
-   year={2013},
-   organization={IEEE}
- }
- ```
-
- ### Contributions
-
- Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
 
code_x_glue_cc_code_completion_token.py DELETED
@@ -1,216 +0,0 @@
- import os
- import os.path
- from typing import List
-
- import datasets
-
- from .common import Child
- from .generated_definitions import DEFINITIONS
-
-
- _DESCRIPTION = """Predict the next code token given a context of previous tokens. Models are evaluated by token-level accuracy.
- Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token-level code completion. The token-level task is analogous to language modeling: models should be able to predict the next token of arbitrary type.
- """
-
- _CITATION = """@article{raychev2016probabilistic,
- title={Probabilistic Model for Code with Decision Trees},
- author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
- journal={ACM SIGPLAN Notices},
- pages={731--747},
- year={2016},
- publisher={ACM New York, NY, USA}
- }
- @inproceedings{allamanis2013mining,
- title={Mining Source Code Repositories at Massive Scale using Language Modeling},
- author={Allamanis, Miltiadis and Sutton, Charles},
- booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
- pages={207--216},
- year={2013},
- organization={IEEE}
- }"""
-
-
- class CodeXGlueCcCodeCompletionTokenImpl(Child):
-     _DESCRIPTION = _DESCRIPTION
-     _CITATION = _CITATION
-
-
- class CodeXGlueCcCodeCompletionTokenJavaImpl(CodeXGlueCcCodeCompletionTokenImpl):
-     SPLITS = {
-         "training": datasets.Split.TRAIN,
-         "validation": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "code": datasets.features.Sequence(datasets.Value("string")),  # Code Tokens
-     }
-
-     def generate_urls(self, split_name):
-         language = self.info["parameters"]["language"]
-         if language != "java":
-             raise RuntimeError(f"Unknown language {language}: should be java.")
-
-         yield "data", f"https://zenodo.org/record/3628665/files/java_{split_name}_pre"
-
-     def _generate_examples(self, split_name, file_paths):
-         with open(file_paths["data"], encoding="utf-8") as f:
-             for idx, line in enumerate(f):
-                 new_data = []
-                 for token in line.strip().split():
-                     if len(token) > 100:
-                         continue
-                     new_data.append(token)
-                 entry = dict(id=idx, code=new_data)
-                 yield idx, entry
-
-
- class CodeXGlueCcCodeCompletionTokenPythonImpl(CodeXGlueCcCodeCompletionTokenImpl):
-     SPLITS = {"train": datasets.Split.TRAIN, "test": datasets.Split.TEST}
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "path": datasets.Value("string"),  # Original path in the dataset
-         "code": datasets.features.Sequence(datasets.Value("string")),  # Code Tokens
-     }
-
-     PYTHON_FILE_MAPPING = dict(train="python100k_train.txt", test="python50k_eval.txt")
-
-     def generate_urls(self, split_name):
-         language = self.info["parameters"]["language"]
-         if language != "python":
-             raise RuntimeError(f"Unknown language {language}")
-
-         yield "data", "http://files.srl.inf.ethz.ch/data/py150_files.tar.gz"
-
-     def process_string(self, token):
-         # Copyright (c) Microsoft Corporation.
-         # Licensed under the MIT License.
-         import re
-
-         str_quote_options = ["'''", '"""', "'", '"']
-         start_quote = ""
-         end_quote = ""
-         qualifier_regex = r"^[a-z]+"
-         qualifier_match = re.search(qualifier_regex, token)
-         # string qualifiers like 'r' for regex, 'f' for formatted string, 'b' for bytes, 'u' for unicode, etc (or combination of them)
-         qualifier = "" if not qualifier_match else qualifier_match[0]
-         # token string without qualifiers
-         token_string = re.sub(qualifier_regex, "", token)
-         # string literal without quotes
-         str_lit = token_string
-         for q in str_quote_options:
-             if token_string.startswith(q):
-                 start_quote = q
-                 str_lit = str_lit[len(q) :]
-             if token_string.endswith(q):
-                 end_quote = q
-                 str_lit = str_lit[: -len(q)]
-                 break
-         if start_quote in str_quote_options[:2]:
-             return ""
-         return (
-             f"{qualifier}{start_quote}{str_lit}{end_quote}"
-             if len(str_lit) < 15
-             and "\n" not in str_lit
-             and "</s>" not in str_lit
-             and "<s>" not in str_lit
-             and "<pad>" not in str_lit
-             and "<EOL>" not in str_lit
-             else f"{qualifier}{start_quote}{end_quote}"
-         )
-
-     def py_tokenize(self, base_dir, file_name):
-         # Copyright (c) Microsoft Corporation.
-         # Licensed under the MIT License.
-         from io import BytesIO
-         from tokenize import COMMENT, ENCODING, ENDMARKER, INDENT, NEWLINE, NL, NUMBER, STRING, tokenize
-
-         file_paths = open(os.path.join(base_dir, file_name), encoding="utf-8").readlines()
-         for ct, path in enumerate(file_paths):
-             try:
-                 code = open(os.path.join(base_dir, path.strip()), encoding="utf-8").read()
-                 token_gen = tokenize(BytesIO(bytes(code, "utf8")).readline)
-                 out_tokens = []
-                 prev_eol = False
-                 for toknum, tokval, _, _, _ in token_gen:
-                     tokval = " ".join(tokval.split())
-                     if len(tokval) > 100:
-                         continue
-                     if toknum == STRING:
-                         add_token = self.process_string(tokval)
-                         if len(add_token) > 0:
-                             out_tokens.append(add_token)
-                             prev_eol = False
-                     elif toknum == NUMBER:
-                         if len(tokval) < 50:
-                             out_tokens.append(tokval)
-                             prev_eol = False
-                     elif toknum in [NEWLINE, NL]:
-                         if not prev_eol:
-                             out_tokens.append("<EOL>")
-                             prev_eol = True
-                     elif toknum in [COMMENT, INDENT, ENCODING, ENDMARKER] or len(tokval) == 0:
-                         continue
-                     else:
-                         out_tokens.append(tokval)
-                         prev_eol = False
-                 if out_tokens[0] == "<EOL>":
-                     out_tokens = out_tokens[1:]
-                 if out_tokens[-1] == "<EOL>":
-                     out_tokens = out_tokens[:-1]
-             except Exception:
-                 out_tokens = []
-             out_tokens = ["<s>"] + out_tokens + ["</s>"]
-             yield path, out_tokens
-
-     def _generate_examples(self, split_name, file_paths):
-         base_dir = file_paths["data"]
-         filename = self.PYTHON_FILE_MAPPING[split_name]
-
-         data_dir = os.path.join(base_dir, "data")
-         if not os.path.exists(data_dir):
-             import gzip
-             import tarfile
-
-             gzip_filename = os.path.join(base_dir, "data.tar.gz")
-             with gzip.open(gzip_filename, "rb") as gzip_file:
-                 t = tarfile.TarFile(fileobj=gzip_file)
-                 t.extractall(path=base_dir)
-
-         idx = 0
-         for entry in self.py_tokenize(base_dir=base_dir, file_name=filename):
-             path, out_tokens = entry
-             path = path[len("data/") :]
-             yield idx, dict(id=idx, path=path, code=out_tokens)
-             idx += 1
-
-
- CLASS_MAPPING = {
-     "CodeXGlueCcCodeCompletionTokenJava": CodeXGlueCcCodeCompletionTokenJavaImpl,
-     "CodeXGlueCcCodeCompletionTokenPython": CodeXGlueCcCodeCompletionTokenPythonImpl,
- }
-
-
- class CodeXGlueCcCodeCompletionToken(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-     ]
-
-     def _info(self):
-         name = self.config.name
-         info = DEFINITIONS[name]
-         if info["class_name"] in CLASS_MAPPING:
-             self.child = CLASS_MAPPING[info["class_name"]](info)
-         else:
-             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-         ret = self.child._info()
-         return ret
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         return self.child._split_generators(dl_manager=dl_manager)
-
-     def _generate_examples(self, split_name, file_paths):
-         return self.child._generate_examples(split_name, file_paths)
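
Before this commit, loading the dataset executed the script above (together with `common.py` and `generated_definitions.py` below); with recent `datasets` releases that path requires opting in to remote code. A hedged sketch of the legacy invocation, assuming a recent `datasets` version:

```python
from datasets import load_dataset

# Legacy path: build the splits by running the (now deleted) loader script.
ds = load_dataset("code_x_glue_cc_code_completion_token", "python", trust_remote_code=True)
print(ds)
```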
 
common.py DELETED
@@ -1,75 +0,0 @@
- from typing import List
-
- import datasets
-
-
- # Citation, taken from https://github.com/microsoft/CodeXGLUE
- _DEFAULT_CITATION = """@article{CodeXGLUE,
- title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
- year={2020},}"""
-
-
- class Child:
-     _DESCRIPTION = None
-     _FEATURES = None
-     _CITATION = None
-     SPLITS = {"train": datasets.Split.TRAIN}
-     _SUPERVISED_KEYS = None
-
-     def __init__(self, info):
-         self.info = info
-
-     def homepage(self):
-         return self.info["project_url"]
-
-     def _info(self):
-         # This is the description that will appear on the datasets page.
-         return datasets.DatasetInfo(
-             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-             features=datasets.Features(self._FEATURES),
-             homepage=self.homepage(),
-             citation=self._CITATION or _DEFAULT_CITATION,
-             supervised_keys=self._SUPERVISED_KEYS,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         SPLITS = self.SPLITS
-         _URL = self.info["raw_url"]
-         urls_to_download = {}
-         for split in SPLITS:
-             if split not in urls_to_download:
-                 urls_to_download[split] = {}
-
-             for key, url in self.generate_urls(split):
-                 if not url.startswith("http"):
-                     url = _URL + "/" + url
-                 urls_to_download[split][key] = url
-
-         downloaded_files = {}
-         for k, v in urls_to_download.items():
-             downloaded_files[k] = dl_manager.download_and_extract(v)
-
-         return [
-             datasets.SplitGenerator(
-                 name=SPLITS[k],
-                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-             )
-             for k in SPLITS
-         ]
-
-     def check_empty(self, entries):
-         all_empty = all([v == "" for v in entries.values()])
-         all_non_empty = all([v != "" for v in entries.values()])
-
-         if not all_non_empty and not all_empty:
-             raise RuntimeError("Parallel data files should have the same number of lines.")
-
-         return all_empty
-
-
- class TrainValidTestChild(Child):
-     SPLITS = {
-         "train": datasets.Split.TRAIN,
-         "valid": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"java": {"description": "CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token\n\nPredict next code token given context of previous tokens. Models are evaluated by token level accuracy.\nCode completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types.\n", "citation": "@article{raychev2016probabilistic,\n title={Probabilistic Model for Code with Decision Trees},\n author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\n journal={ACM SIGPLAN Notices},\n pages={731--747},\n year={2016},\n publisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\n title={Mining Source Code Repositories at Massive Scale using Language Modeling},\n author={Allamanis, Miltiadis and Sutton, Charles},\n booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\n pages={207--216},\n year={2013},\n organization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "code": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_token", "config_name": "java", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 128312061, "num_examples": 12934, "dataset_name": "code_x_glue_cc_code_completion_token"}, "validation": {"name": "validation", "num_bytes": 30259174, "num_examples": 7189, "dataset_name": "code_x_glue_cc_code_completion_token"}, "test": {"name": "test", "num_bytes": 43027956, "num_examples": 8268, "dataset_name": "code_x_glue_cc_code_completion_token"}}, "download_checksums": {"https://zenodo.org/record/3628665/files/java_training_pre": {"num_bytes": 81051708, "checksum": "676295d2756adcac22e213fbc3ea0f50669a0d152e9497e23a2929e2e2124905"}, "https://zenodo.org/record/3628665/files/java_validation_pre": {"num_bytes": 18835141, "checksum": "0c58a97d96aa7435396581ee5efbb93c0e74a545ba5f795f878098a6e59ab8b3"}, "https://zenodo.org/record/3628665/files/java_test_pre": {"num_bytes": 26969670, "checksum": "a88cd5c91c2ed23a928528bef3535f4fc8db1359975447211f2b13926cc38d9d"}}, "download_size": 126856519, "post_processing_size": null, "dataset_size": 201599191, "size_in_bytes": 328455710}, "python": {"description": "CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token\n\nPredict next code token given context of previous tokens. Models are evaluated by token level accuracy.\nCode completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. 
Models should have be able to predict the next token in arbitary types.\n", "citation": "@article{raychev2016probabilistic,\n title={Probabilistic Model for Code with Decision Trees},\n author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\n journal={ACM SIGPLAN Notices},\n pages={731--747},\n year={2016},\n publisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\n title={Mining Source Code Repositories at Massive Scale using Language Modeling},\n author={Allamanis, Miltiadis and Sutton, Charles},\n booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\n pages={207--216},\n year={2013},\n organization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_token", "config_name": "python", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 684319575, "num_examples": 100000, "dataset_name": "code_x_glue_cc_code_completion_token"}, "test": {"name": "test", "num_bytes": 333978088, "num_examples": 50000, "dataset_name": "code_x_glue_cc_code_completion_token"}}, "download_checksums": {"http://files.srl.inf.ethz.ch/data/py150_files.tar.gz": {"num_bytes": 199067128, "checksum": "73be7f7a78e549845cf80cf779a3bcc3a9cf351ff7017e07b89e8d1c82b8d389"}}, "download_size": 199067128, "post_processing_size": null, "dataset_size": 1018297663, "size_in_bytes": 1217364791}}
 
generated_definitions.py DELETED
@@ -1,24 +0,0 @@
- DEFINITIONS = {
-     "java": {
-         "class_name": "CodeXGlueCcCodeCompletionTokenJava",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token",
-         "dir_name": "CodeCompletion-token",
-         "name": "java",
-         "parameters": {"language": "java", "original_language_name": "javaCorpus"},
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-token/dataset/javaCorpus",
-         "sizes": {"test": 8268, "train": 12934, "validation": 7189},
-     },
-     "python": {
-         "class_name": "CodeXGlueCcCodeCompletionTokenPython",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token",
-         "dir_name": "CodeCompletion-token",
-         "name": "python",
-         "parameters": {"language": "python", "original_language_name": "py150"},
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-token/dataset/py150",
-         "sizes": {"test": 50000, "train": 100000},
-     },
- }
 
java/code_x_glue_cc_code_completion_token-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47ce850847e3dbe0ab9984950d48a7d0cee8b437cb16e0d6d07807309a384900
+ size 6543102
java/code_x_glue_cc_code_completion_token-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10b2a03b75a24d51721cfd4414fd69de027791595ffb5fc3dae86f57b3b5b83f
+ size 19617198
java/code_x_glue_cc_code_completion_token-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:239ecd0a7db218b352aa03724cc51065437a493cf8ced6939702370ae058683c
+ size 5160036
python/code_x_glue_cc_code_completion_token-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cda88487f319a193ad9eb0078d65a0fcd905c361f05ddf89a75eb4222bfa5c6
+ size 69199820
python/code_x_glue_cc_code_completion_token-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cb29c3005af01a3f599796494a82aed3aaeaa295577f99c4b755448bb62d956
+ size 103864714
python/code_x_glue_cc_code_completion_token-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5e8f43f8e9fb9688fbacf94ffc37d7b46c2dc650699a0065c89077080134794
+ size 37078998
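
With the conversion above, each split can also be read straight from its Parquet file. A sketch assuming `pandas` plus `huggingface_hub` (for the `hf://` filesystem); `<namespace>/<dataset_repo>` is a placeholder for this repository's id on the Hub:

```python
import pandas as pd

# Placeholder repository id; substitute the actual owner/name of this dataset repo.
REPO = "<namespace>/<dataset_repo>"

df = pd.read_parquet(f"hf://datasets/{REPO}/java/code_x_glue_cc_code_completion_token-test.parquet")
print(df.columns.tolist())  # expected: ['id', 'code']
print(len(df))              # expected: 8268 rows, matching the java test split above
```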