parquet-converter committed on
Commit 6100e3c
1 parent: fab4a96

Update parquet files

README.md DELETED
@@ -1,256 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - code
- - en
- license:
- - c-uda
- multilinguality:
- - other-programming-languages
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - text-retrieval
- task_ids:
- - document-retrieval
- pretty_name: CodeXGlueTcNlCodeSearchAdv
- dataset_info:
-   features:
-   - name: id
-     dtype: int32
-   - name: repo
-     dtype: string
-   - name: path
-     dtype: string
-   - name: func_name
-     dtype: string
-   - name: original_string
-     dtype: string
-   - name: language
-     dtype: string
-   - name: code
-     dtype: string
-   - name: code_tokens
-     sequence: string
-   - name: docstring
-     dtype: string
-   - name: docstring_tokens
-     sequence: string
-   - name: sha
-     dtype: string
-   - name: url
-     dtype: string
-   - name: docstring_summary
-     dtype: string
-   - name: parameters
-     dtype: string
-   - name: return_statement
-     dtype: string
-   - name: argument_list
-     dtype: string
-   - name: identifier
-     dtype: string
-   - name: nwo
-     dtype: string
-   - name: score
-     dtype: float32
-   splits:
-   - name: train
-     num_bytes: 820716084
-     num_examples: 251820
-   - name: validation
-     num_bytes: 23468834
-     num_examples: 9604
-   - name: test
-     num_bytes: 47433760
-     num_examples: 19210
-   download_size: 966025624
-   dataset_size: 891618678
- ---
- # Dataset Card for "code_x_glue_tc_nl_code_search_adv"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits-sample-size)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv
-
- ### Dataset Summary
-
- CodeXGLUE NL-code-search-Adv dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv
-
- The dataset we use comes from CodeSearchNet; we filter it as follows:
- - Remove examples whose code cannot be parsed into an abstract syntax tree.
- - Remove examples whose documents have fewer than 3 or more than 256 tokens.
- - Remove examples whose documents contain special tokens (e.g. <img ...> or https:...).
- - Remove examples whose documents are not in English.
-
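A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub under the identifier in the card title (the split names follow the `dataset_info` block above):

```
from datasets import load_dataset

# Assumption: the dataset is reachable under this Hub identifier.
ds = load_dataset("code_x_glue_tc_nl_code_search_adv")

print(ds)  # DatasetDict with train / validation / test splits
print(ds["validation"][0]["identifier"])
```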
- ### Supported Tasks and Leaderboards
-
- - `document-retrieval`: The dataset can be used to train a model that retrieves the top-k code snippets for a given **English** natural-language query.
-
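As a toy illustration of this retrieval setup, one could rank candidates by token overlap between the query and each example's `docstring_tokens`. This is only a sketch, not the CodeXGLUE baseline (which fine-tunes a neural encoder):

```
# Toy ranking by bag-of-words overlap; purely illustrative.
def top_k(query, examples, k=5):
    q = set(query.lower().split())

    def overlap(ex):
        return len(q & {t.lower() for t in ex["docstring_tokens"]})

    return sorted(examples, key=overlap, reverse=True)[:k]

# e.g. top_k("download video by url", list(ds["validation"]), k=3)
```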
- ### Languages
-
- - Python **programming** language
- - English **natural** language
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example of 'validation' looks as follows.
- ```
- {
-     "argument_list": "",
-     "code": "def Func(arg_0, arg_1='.', arg_2=True, arg_3=False, **arg_4):\n \"\"\"Downloads Dailymotion videos by URL.\n \"\"\"\n\n arg_5 = get_content(rebuilt_url(arg_0))\n arg_6 = json.loads(match1(arg_5, r'qualities\":({.+?}),\"'))\n arg_7 = match1(arg_5, r'\"video_title\"\\s*:\\s*\"([^\"]+)\"') or \\\n match1(arg_5, r'\"title\"\\s*:\\s*\"([^\"]+)\"')\n arg_7 = unicodize(arg_7)\n\n for arg_8 in ['1080','720','480','380','240','144','auto']:\n try:\n arg_9 = arg_6[arg_8][1][\"url\"]\n if arg_9:\n break\n except KeyError:\n pass\n\n arg_10, arg_11, arg_12 = url_info(arg_9)\n\n print_info(site_info, arg_7, arg_10, arg_12)\n if not arg_3:\n download_urls([arg_9], arg_7, arg_11, arg_12, arg_1=arg_1, arg_2=arg_2)",
-     "code_tokens": ["def", "Func", "(", "arg_0", ",", "arg_1", "=", "'.'", ",", "arg_2", "=", "True", ",", "arg_3", "=", "False", ",", "**", "arg_4", ")", ":", "arg_5", "=", "get_content", "(", "rebuilt_url", "(", "arg_0", ")", ")", "arg_6", "=", "json", ".", "loads", "(", "match1", "(", "arg_5", ",", "r'qualities\":({.+?}),\"'", ")", ")", "arg_7", "=", "match1", "(", "arg_5", ",", "r'\"video_title\"\\s*:\\s*\"([^\"]+)\"'", ")", "or", "match1", "(", "arg_5", ",", "r'\"title\"\\s*:\\s*\"([^\"]+)\"'", ")", "arg_7", "=", "unicodize", "(", "arg_7", ")", "for", "arg_8", "in", "[", "'1080'", ",", "'720'", ",", "'480'", ",", "'380'", ",", "'240'", ",", "'144'", ",", "'auto'", "]", ":", "try", ":", "arg_9", "=", "arg_6", "[", "arg_8", "]", "[", "1", "]", "[", "\"url\"", "]", "if", "arg_9", ":", "break", "except", "KeyError", ":", "pass", "arg_10", ",", "arg_11", ",", "arg_12", "=", "url_info", "(", "arg_9", ")", "print_info", "(", "site_info", ",", "arg_7", ",", "arg_10", ",", "arg_12", ")", "if", "not", "arg_3", ":", "download_urls", "(", "[", "arg_9", "]", ",", "arg_7", ",", "arg_11", ",", "arg_12", ",", "arg_1", "=", "arg_1", ",", "arg_2", "=", "arg_2", ")"],
-     "docstring": "Downloads Dailymotion videos by URL.",
-     "docstring_summary": "Downloads Dailymotion videos by URL.",
-     "docstring_tokens": ["Downloads", "Dailymotion", "videos", "by", "URL", "."],
-     "func_name": "",
-     "id": 0,
-     "identifier": "dailymotion_download",
-     "language": "python",
-     "nwo": "soimort/you-get",
-     "original_string": "",
-     "parameters": "(url, output_dir='.', merge=True, info_only=False, **kwargs)",
-     "path": "src/you_get/extractors/dailymotion.py",
-     "repo": "",
-     "return_statement": "",
-     "score": 0.9997601509094238,
-     "sha": "b746ac01c9f39de94cac2d56f665285b0523b974",
-     "url": "https://github.com/soimort/you-get/blob/b746ac01c9f39de94cac2d56f665285b0523b974/src/you_get/extractors/dailymotion.py#L13-L35"
- }
- ```
-
- ### Data Fields
-
- In the following, each data field is explained for each config. The data fields are the same among all splits.
-
- #### default
-
- | field name | type | description |
- |-----------------|-----------------|------------------------------------------------------------------------------------|
- |id |int32 | Index of the sample |
- |repo |string | repo: the owner/repo |
- |path |string | path: the full path to the original file |
- |func_name |string | func_name: the function or method name |
- |original_string |string | original_string: the raw string before tokenization or parsing |
- |language |string | language: the programming language |
- |code |string | code/function: the part of the original_string that is code |
- |code_tokens |Sequence[string] | code_tokens/function_tokens: tokenized version of code |
- |docstring |string | docstring: the top-level comment or docstring, if it exists in the original string |
- |docstring_tokens |Sequence[string] | docstring_tokens: tokenized version of docstring |
- |sha |string | sha of the file |
- |url |string | url of the file |
- |docstring_summary|string | summary of the docstring |
- |parameters |string | parameters of the function |
- |return_statement |string | return statement |
- |argument_list |string | list of arguments of the function |
- |identifier |string | identifier of the function |
- |nwo |string | nwo: name with owner, i.e. "owner/repo" |
- |score |float32 | score for this search |
-
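Fields absent from the raw CodeSearchNet records are filled with defaults by the loading script (empty string, or -1 for `score`), so empty values such as those in the instance above are expected. A small sketch for spotting them, reusing `ds` from the loading sketch earlier:

```
# Assumption: `ds` is the DatasetDict loaded in the earlier sketch.
ex = ds["validation"][0]
defaults = [k for k, v in ex.items() if v == "" or v == -1]
print("fields filled with defaults:", defaults)
```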
- ### Data Splits
-
- | name |train |validation|test |
- |-------|-----:|---------:|----:|
- |default|251820| 9604|19210|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Data from the CodeSearchNet Challenge dataset.
- [More Information Needed]
-
- #### Who are the source language producers?
-
- Software developers.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- https://github.com/microsoft, https://github.com/madlag
-
- ### Licensing Information
-
- Computational Use of Data Agreement (C-UDA) License.
-
- ### Citation Information
-
- ```
- @article{husain2019codesearchnet,
- title={CodeSearchNet challenge: Evaluating the state of semantic code search},
- author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
- journal={arXiv preprint arXiv:1909.09436},
- year={2019}
- }
- ```
-
- ### Contributions
-
- Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
code_x_glue_tc_nl_code_search_adv.py DELETED
@@ -1,206 +0,0 @@
- import json
- import os
- import os.path
- from typing import List
-
- import datasets
-
- from .common import TrainValidTestChild
- from .generated_definitions import DEFINITIONS
-
-
- _DESCRIPTION = """The dataset we use comes from CodeSearchNet; we filter it as follows:
- - Remove examples whose code cannot be parsed into an abstract syntax tree.
- - Remove examples whose documents have fewer than 3 or more than 256 tokens.
- - Remove examples whose documents contain special tokens (e.g. <img ...> or https:...).
- - Remove examples whose documents are not in English.
- """
- _CITATION = """@article{husain2019codesearchnet,
- title={CodeSearchNet challenge: Evaluating the state of semantic code search},
- author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
- journal={arXiv preprint arXiv:1909.09436},
- year={2019}
- }"""
-
-
- class CodeXGlueCtCodeToTextBaseImpl(TrainValidTestChild):
-     _DESCRIPTION = _DESCRIPTION
-     _CITATION = _CITATION
-
-     # For each file, each line in the uncompressed file represents one function.
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "repo": datasets.Value("string"),  # repo: the owner/repo
-         "path": datasets.Value("string"),  # path: the full path to the original file
-         "func_name": datasets.Value("string"),  # func_name: the function or method name
-         "original_string": datasets.Value("string"),  # original_string: the raw string before tokenization or parsing
-         "language": datasets.Value("string"),  # language: the programming language name
-         "code": datasets.Value("string"),  # code/function: the part of the original_string that is code
-         "code_tokens": datasets.features.Sequence(
-             datasets.Value("string")
-         ),  # code_tokens/function_tokens: tokenized version of code
-         "docstring": datasets.Value(
-             "string"
-         ),  # docstring: the top-level comment or docstring, if it exists in the original string
-         "docstring_tokens": datasets.features.Sequence(
-             datasets.Value("string")
-         ),  # docstring_tokens: tokenized version of docstring
-         "sha": datasets.Value("string"),  # sha of the file
-         "url": datasets.Value("string"),  # url of the file
-     }
-
-     _SUPERVISED_KEYS = ["docstring", "docstring_tokens"]
-
-     def generate_urls(self, split_name, language):
-         yield "language", f"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/{language}.zip"
-         yield "dataset", "dataset.zip"
-
-     def get_data_files(self, split_name, file_paths, language):
-         language_specific_path = file_paths["language"]
-         final_path = os.path.join(language_specific_path, language, "final")
-         # Make some cleanup to save space
-         for path in os.listdir(final_path):
-             if path.endswith(".pkl"):
-                 # os.listdir returns bare file names, so join with final_path before unlinking.
-                 os.unlink(os.path.join(final_path, path))
-
-         data_files = []
-         for root, dirs, files in os.walk(final_path):
-             for file in files:
-                 temp = os.path.join(root, file)
-                 if ".jsonl" in temp:
-                     if split_name in temp:
-                         data_files.append(temp)
-         return data_files
-
-     def post_process(self, split_name, language, js):
-         return js
-
-     def _generate_examples(self, split_name, file_paths, language):
-         import gzip
-
-         data_set_path = file_paths["dataset"]
-
-         data_files = self.get_data_files(split_name, file_paths, language)
-
-         # URLs of the examples belonging to this split, read from {split_name}.txt.
-         urls = {}
-         f1_path_parts = [data_set_path, "dataset", language, f"{split_name}.txt"]
-         if self.SINGLE_LANGUAGE:
-             del f1_path_parts[2]
-
-         f1_path = os.path.join(*f1_path_parts)
-         with open(f1_path, encoding="utf-8") as f1:
-             for line in f1:
-                 line = line.strip()
-                 urls[line] = True
-
-         idx = 0
-         for file in data_files:
-             if ".gz" in file:
-                 f = gzip.open(file)
-             else:
-                 f = open(file, encoding="utf-8")
-
-             for line in f:
-                 line = line.strip()
-                 js = json.loads(line)
-                 if js["url"] in urls:
-                     js["id"] = idx
-                     js = self.post_process(split_name, language, js)
-                     if "partition" in js:
-                         del js["partition"]
-                     yield idx, js
-                     idx += 1
-             f.close()
-
-
- class CodeXGlueTcNLCodeSearchAdvImpl(CodeXGlueCtCodeToTextBaseImpl):
-     LANGUAGE = "python"
-     SINGLE_LANGUAGE = True
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "repo": datasets.Value("string"),  # repo: the owner/repo
-         "path": datasets.Value("string"),  # path: the full path to the original file
-         "func_name": datasets.Value("string"),  # func_name: the function or method name
-         "original_string": datasets.Value("string"),  # original_string: the raw string before tokenization or parsing
-         "language": datasets.Value("string"),  # language: the programming language
-         "code": datasets.Value("string"),  # code/function: the part of the original_string that is code
-         "code_tokens": datasets.features.Sequence(
-             datasets.Value("string")
-         ),  # code_tokens/function_tokens: tokenized version of code
-         "docstring": datasets.Value(
-             "string"
-         ),  # docstring: the top-level comment or docstring, if it exists in the original string
-         "docstring_tokens": datasets.features.Sequence(
-             datasets.Value("string")
-         ),  # docstring_tokens: tokenized version of docstring
-         "sha": datasets.Value("string"),  # sha of the file
-         "url": datasets.Value("string"),  # url of the file
-         "docstring_summary": datasets.Value("string"),  # Summary of the docstring
-         "parameters": datasets.Value("string"),  # parameters of the function
-         "return_statement": datasets.Value("string"),  # return statement
-         "argument_list": datasets.Value("string"),  # list of arguments of the function
-         "identifier": datasets.Value("string"),  # identifier
-         "nwo": datasets.Value("string"),  # nwo: name with owner, i.e. "owner/repo"
-         "score": datasets.Value("float"),  # score for this search
-     }
-
-     def post_process(self, split_name, language, js):
-         # Raw CodeSearchNet records use "function"/"function_tokens"; rename to "code"/"code_tokens".
-         for suffix in "_tokens", "":
-             key = "function" + suffix
-             if key in js:
-                 js["code" + suffix] = js[key]
-                 del js[key]
-
-         # Fill any missing feature with a default value ("" for strings, -1 for score).
-         for key in self._FEATURES:
-             if key not in js:
-                 if key == "score":
-                     js[key] = -1
-                 else:
-                     js[key] = ""
-
-         return js
-
-     def generate_urls(self, split_name):
-         for e in super().generate_urls(split_name, self.LANGUAGE):
-             yield e
-
-     def get_data_files(self, split_name, file_paths, language):
-         if split_name == "train":
-             return super().get_data_files(split_name, file_paths, language)
-         else:
-             # Validation and test examples both come from the curated test_code.jsonl file.
-             data_set_path = file_paths["dataset"]
-             data_file = os.path.join(data_set_path, "dataset", "test_code.jsonl")
-             return [data_file]
-
-     def _generate_examples(self, split_name, file_paths):
-         for e in super()._generate_examples(split_name, file_paths, self.LANGUAGE):
-             yield e
-
-
- CLASS_MAPPING = {
-     "CodeXGlueTcNLCodeSearchAdv": CodeXGlueTcNLCodeSearchAdvImpl,
- }
-
-
- class CodeXGlueTcNlCodeSearchAdv(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-     ]
-
-     def _info(self):
-         name = self.config.name
-         info = DEFINITIONS[name]
-         if info["class_name"] in CLASS_MAPPING:
-             self.child = CLASS_MAPPING[info["class_name"]](info)
-         else:
-             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-         ret = self.child._info()
-         return ret
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         return self.child._split_generators(dl_manager=dl_manager)
-
-     def _generate_examples(self, split_name, file_paths):
-         return self.child._generate_examples(split_name, file_paths)
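Before the parquet conversion, this script was the dataset's entry point. A sketch of that workflow, assuming a `datasets` version that still executes loading scripts and that `common.py` and `generated_definitions.py` sit next to the script:

```
from datasets import load_dataset

# Assumption: pre-conversion workflow with script-based loading.
ds = load_dataset("./code_x_glue_tc_nl_code_search_adv.py", "default")
```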
common.py DELETED
@@ -1,75 +0,0 @@
- from typing import List
-
- import datasets
-
-
- # Citation, taken from https://github.com/microsoft/CodeXGLUE
- _DEFAULT_CITATION = """@article{CodeXGLUE,
- title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
- year={2020},}"""
-
-
- class Child:
-     _DESCRIPTION = None
-     _FEATURES = None
-     _CITATION = None
-     SPLITS = {"train": datasets.Split.TRAIN}
-     _SUPERVISED_KEYS = None
-
-     def __init__(self, info):
-         self.info = info
-
-     def homepage(self):
-         return self.info["project_url"]
-
-     def _info(self):
-         # This is the description that will appear on the datasets page.
-         return datasets.DatasetInfo(
-             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-             features=datasets.Features(self._FEATURES),
-             homepage=self.homepage(),
-             citation=self._CITATION or _DEFAULT_CITATION,
-             supervised_keys=self._SUPERVISED_KEYS,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         SPLITS = self.SPLITS
-         _URL = self.info["raw_url"]
-         urls_to_download = {}
-         for split in SPLITS:
-             if split not in urls_to_download:
-                 urls_to_download[split] = {}
-
-             for key, url in self.generate_urls(split):
-                 if not url.startswith("http"):
-                     url = _URL + "/" + url
-                 urls_to_download[split][key] = url
-
-         downloaded_files = {}
-         for k, v in urls_to_download.items():
-             downloaded_files[k] = dl_manager.download_and_extract(v)
-
-         return [
-             datasets.SplitGenerator(
-                 name=SPLITS[k],
-                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-             )
-             for k in SPLITS
-         ]
-
-     def check_empty(self, entries):
-         all_empty = all(v == "" for v in entries.values())
-         all_non_empty = all(v != "" for v in entries.values())
-
-         if not all_non_empty and not all_empty:
-             raise RuntimeError("Parallel data files should have the same number of lines.")
-
-         return all_empty
-
-
- class TrainValidTestChild(Child):
-     SPLITS = {
-         "train": datasets.Split.TRAIN,
-         "valid": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
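A hypothetical minimal subclass (illustrative names only) showing the contract these helpers expect: `generate_urls` yields `(key, url)` pairs, relative URLs are resolved against `info["raw_url"]` in `_split_generators`, and `_generate_examples` yields `(id, example)` pairs.

```
import datasets
from common import TrainValidTestChild

class ToyChild(TrainValidTestChild):  # hypothetical, for illustration only
    _DESCRIPTION = "Toy example dataset"
    _FEATURES = {"text": datasets.Value("string")}

    def generate_urls(self, split_name):
        # Relative URL: _split_generators prefixes it with info["raw_url"].
        yield "data", f"{split_name}.txt"

    def _generate_examples(self, split_name, file_paths):
        with open(file_paths["data"], encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```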
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "CodeXGLUE NL-code-search-Adv dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_summary": {"dtype": "string", "id": null, "_type": "Value"}, "parameters": {"dtype": "string", "id": null, "_type": "Value"}, "return_statement": {"dtype": "string", "id": null, "_type": "Value"}, "argument_list": {"dtype": "string", "id": null, "_type": "Value"}, "identifier": {"dtype": "string", "id": null, "_type": "Value"}, "nwo": {"dtype": "string", "id": null, "_type": "Value"}, "score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_tc_nl_code_search_adv", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 820716084, "num_examples": 251820, "dataset_name": "code_x_glue_tc_nl_code_search_adv"}, "validation": {"name": "validation", "num_bytes": 23468834, "num_examples": 9604, "dataset_name": "code_x_glue_tc_nl_code_search_adv"}, "test": {"name": "test", "num_bytes": 47433760, "num_examples": 19210, "dataset_name": "code_x_glue_tc_nl_code_search_adv"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip": {"num_bytes": 940909997, "checksum": "7223c6460bebfa85697b586da91e47bc5d64790a4d60bba5917106458ab6b40e"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Code/NL-code-search-Adv/dataset.zip": {"num_bytes": 25115627, "checksum": "b4d5157699ca3bda7a33674f17d7b24294b4c8f36f650cea01d3d0dbcefdc656"}}, "download_size": 966025624, "post_processing_size": null, "dataset_size": 891618678, "size_in_bytes": 1857644302}}
default/code_x_glue_tc_nl_code_search_adv-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55052d00f2fe117b2ba4f74078b9ef14f22907aeb9f0a2743fc37dbcf9dd153e
+ size 16341073
default/code_x_glue_tc_nl_code_search_adv-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:597b53327a34b656f47fb2720a2914a3234ff7eae73668b7d9f6478a31f0e831
+ size 178069937
default/code_x_glue_tc_nl_code_search_adv-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3770b0de05664dc80d951bfbb2491cfe70150ffdf78881df1b77c3617b638a3
+ size 113210590
default/code_x_glue_tc_nl_code_search_adv-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b723992dbf671117e7b285c97690680ba5c1a4d9f776be03f29e483ac662cdf
+ size 8589049
generated_definitions.py DELETED
@@ -1,12 +0,0 @@
- DEFINITIONS = {
-     "default": {
-         "class_name": "CodeXGlueTcNLCodeSearchAdv",
-         "dataset_type": "Text-Code",
-         "description": "CodeXGLUE NL-code-search-Adv dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv",
-         "dir_name": "NL-code-search-Adv",
-         "name": "default",
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Code/NL-code-search-Adv",
-         "sizes": {"test": 19210, "train": 251820, "validation": 9604},
-     }
- }