system HF staff committed on
Commit
1063605
0 Parent(s):

Update files from the datasets library (from 1.8.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.8.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,201 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - code
+ licenses:
+ - other-C-UDA
+ multilinguality:
+ - monolingual
+ size_categories:
+   go:
+   - n<1K
+   java:
+   - n<1K
+   javascript:
+   - n<1K
+   php:
+   - n<1K
+   python:
+   - 1K<n<10K
+   ruby:
+   - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - slot-filling
+ ---
+ # Dataset Card for "code_x_glue_cc_code_completion_line"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
+
+ ### Dataset Summary
+
+ CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
+
+ Complete the unfinished line given the previous context. Models are evaluated by exact match and edit similarity.
+ We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems perform well at token-level completion but fail to complete an unfinished line such as a method call with specific parameters, a function signature, a loop condition, or a variable definition. Once a software developer has typed one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
+ The line-level code completion task shares its train/dev data with token-level completion. After training a model on CodeCompletion-token, you can use it directly to test on line-level completion.
+
+ ### Supported Tasks and Leaderboards
+
+ - `slot-filling`: The dataset can be used to train a model for completing entire code lines.
+
+ ### Languages
+
+ - Java **programming** language
+ - Python **programming** language
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### java
+
+ An example of 'train' looks as follows.
+ ```
+ {
+ "gt": "",
+ "id": 0,
+ "input": "<s> package org . rubypeople . rdt . internal . ui . rubyeditor ; import java . util . Iterator ; import org . eclipse . core . resources . IMarker ; import org . eclipse . ui . texteditor . MarkerAnnotation ; import org . eclipse . ui . texteditor . MarkerUtilities ; import org . rubypeople . rdt . core . IRubyElement ; import org . rubypeople . rdt . core . IRubyModelMarker ; import org . rubypeople . rdt . core . IRubyScript ; import org . rubypeople . rdt . core . RubyCore ; public class RubyMarkerAnnotation extends MarkerAnnotation implements IRubyAnnotation { public static final String RUBY_MARKER_TYPE_PREFIX = \"\" ; public static final String ERROR_ANNOTATION_TYPE = \"\" ; public static final String WARNING_ANNOTATION_TYPE = \"\" ; public static final String INFO_ANNOTATION_TYPE = \"\" ; public static final String TASK_ANNOTATION_TYPE = \"\" ; private IRubyAnnotation fOverlay ; public RubyMarkerAnnotation ( IMarker marker ) { super ( marker ) ; } public String [ ] getArguments ( ) { return null ; } public int getId ( ) { IMarker marker = getMarker ( ) ; if ( marker == null || ! marker . exists ( ) ) return - 1 ; if ( isProblem ( ) ) return marker . getAttribute ( IRubyModelMarker . ID , - 1 ) ; return - 1 ; } public boolean isProblem ( ) { String type = getType ( ) ; return WARNING_ANNOTATION_TYPE . equals ( type ) || ERROR_ANNOTATION_TYPE . equals"
+ }
+ ```
+
+ #### python
+
+ An example of 'train' looks as follows.
+ ```
+ {
+ "gt": "",
+ "id": 0,
+ "input": "<s> from __future__ import absolute_import <EOL> import weakref <EOL> import operator <EOL> from . compat import threading , itertools_filterfalse <EOL> from . import py2k <EOL> import types <EOL> EMPTY_SET = frozenset ( ) <EOL> class KeyedTuple ( tuple ) : <EOL> def __new__ ( cls , vals , labels = None ) : <EOL> t = tuple . __new__ ( cls , vals ) <EOL> t . _labels = [ ] <EOL> if labels : <EOL> t . __dict__ . update ( zip ( labels , vals ) ) <EOL> t . _labels = labels <EOL> return t <EOL> def keys ( self ) : <EOL> return [ l for l in self . _labels if l is not None ] <EOL> @ property <EOL> def _fields ( self ) : <EOL> return tuple ( self . keys ( ) ) <EOL> def _asdict ( self ) : <EOL> return dict ( ( key , self . __dict__ [ key ] ) for key in self . keys ( ) ) <EOL> class ImmutableContainer ( object ) : <EOL> def _immutable ( self , * arg , ** kw ) : <EOL> raise TypeError ( \"\" % self . __class__ . __name__ ) <EOL> __delitem__ = __setitem__ = __setattr__ = _immutable <EOL> class immutabledict ( ImmutableContainer , dict ) : <EOL> clear = pop = popitem = setdefault = update = ImmutableContainer . _immutable <EOL> def __new__ ( cls , * args ) : <EOL> new = dict . __new__ ( cls ) <EOL> dict . __init__ ( new , * args ) <EOL> return new <EOL> def __init__ ( self , * args ) : <EOL> pass <EOL> def __reduce__ ( self ) : <EOL> return immutabledict , ( dict ( self ) , ) <EOL> def union ( self , d ) : <EOL> if not self : <EOL> return immutabledict ( d ) <EOL> else : <EOL> d2 = immutabledict ( self ) <EOL> dict . update ( d2 , d ) <EOL> return d2 <EOL> def __repr__ ( self ) : <EOL> return \"\" % dict . __repr__ ( self ) <EOL> class Properties ( object ) : <EOL> def __init__ ( self , data ) : <EOL> self . __dict__ [ '_data' ] = data <EOL> def __len__ ( self ) : <EOL> return len ( self . _data ) <EOL> def __iter__ ( self ) : <EOL> return iter ( list ( self . _data . 
values ( ) ) ) <EOL> def __add__ ( self , other ) : <EOL> return list ( self ) + list ( other ) <EOL> def __setitem__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getitem__ ( self , key ) : <EOL> return self . _data [ key ] <EOL> def __delitem__ ( self , key ) : <EOL> del self . _data [ key ] <EOL> def __setattr__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getstate__ ( self ) : <EOL> return { '_data' : self . __dict__ [ '_data' ] } <EOL> def __setstate__ ( self , state ) : <EOL> self . __dict__ [ '_data' ] = state [ '_data' ] <EOL> def __getattr__ ( self , key ) : <EOL> try : <EOL> return self . _data [ key ] <EOL> except KeyError : <EOL> raise AttributeError ( key ) <EOL> def __contains__ ( self , key ) : <EOL> return key in self . _data <EOL> def as_immutable ( self ) : <EOL> return ImmutableProperties ( self . _data ) <EOL> def update ( self , value ) : <EOL> self . _data . update ( value ) <EOL> def get ( self , key , default = None ) : <EOL> if key in self : <EOL> return self [ key ] <EOL> else : <EOL> return default <EOL> def keys ( self ) : <EOL> return list ( self . _data ) <EOL> def values ( self ) : <EOL> return list ( self . _data . values ( ) ) <EOL> def items ( self ) : <EOL> return list ( self . _data . items ( ) ) <EOL> def has_key ( self , key ) : <EOL> return key in self . _data <EOL> def clear ( self ) : <EOL> self . _data . clear ( ) <EOL> class OrderedProperties ( Properties ) : <EOL> def __init__ ( self ) : <EOL> Properties . __init__ ( self , OrderedDict ( ) ) <EOL> class ImmutableProperties ( ImmutableContainer , Properties ) : <EOL> class OrderedDict ( dict ) : <EOL> def __init__ ( self , ____sequence = None , ** kwargs ) : <EOL> self . _list = [ ] <EOL> if ____sequence is None : <EOL> if kwargs : <EOL> self . update ( ** kwargs ) <EOL> else : <EOL> self . update ( ____sequence , ** kwargs ) <EOL> def clear ( self ) : <EOL> self . _list = [ ] <EOL> dict . 
clear ( self ) <EOL> def copy ( self ) : <EOL> return self . __copy__ ( ) <EOL> def __copy__ ( self ) : <EOL> return OrderedDict ( self ) <EOL> def sort ( self , * arg , ** kw ) : <EOL> self . _list . sort ( * arg , ** kw ) <EOL> def update ( self , ____sequence = None , ** kwargs ) : <EOL> if ____sequence is not None : <EOL> if hasattr ( ____sequence , 'keys' ) : <EOL> for key in ____sequence . keys ( ) : <EOL> self . __setitem__ ( key , ____sequence [ key ] ) <EOL> else : <EOL> for key , value in ____sequence : <EOL> self [ key ] = value <EOL> if kwargs : <EOL> self . update ( kwargs ) <EOL> def setdefault ( self , key , value ) : <EOL> if key not in self : <EOL> self . __setitem__ ( key , value ) <EOL> return value <EOL> else : <EOL> return self . __getitem__ ( key ) <EOL> def __iter__ ( self ) : <EOL> return iter ( self . _list ) <EOL> def keys ( self ) : <EOL> return list ( self ) <EOL> def values ( self ) : <EOL> return [ self [ key ] for key in self . _list ] <EOL> def items ( self ) : <EOL> return [ ( key , self [ key ] ) for key in self . _list ] <EOL> if py2k : <EOL> def itervalues ( self ) : <EOL> return iter ( self . values ( ) ) <EOL> def iterkeys ( self ) : <EOL> return iter ( self ) <EOL> def iteritems ( self ) : <EOL> return iter ( self . items ( ) ) <EOL> def __setitem__ ( self , key , object ) : <EOL> if key not in self : <EOL> try : <EOL> self . _list . append ( key ) <EOL> except AttributeError : <EOL> self . _list = [ key ] <EOL> dict . __setitem__ ( self , key , object ) <EOL> def __delitem__ ( self , key ) : <EOL> dict . __delitem__ ( self , key ) <EOL> self . _list . remove ( key ) <EOL> def pop ( self , key , * default ) : <EOL> present = key in self <EOL> value = dict . pop ( self , key , * default ) <EOL> if present : <EOL> self . _list . remove ( key ) <EOL> return value <EOL> def popitem ( self ) : <EOL> item = dict . popitem ( self ) <EOL> self . _list . 
remove ( item [ 0 ] ) <EOL> return item <EOL> class OrderedSet ( set ) : <EOL> def __init__ ( self , d = None ) : <EOL> set . __init__ ( self ) <EOL> self . _list = [ ] <EOL> if d is not None : <EOL>"
+ }
+ ```
+
+ ### Data Fields
+
+ Each data field is described below for each configuration. The data fields are the same across all splits.
+
+ #### java, python
+
+ |field name| type | description |
+ |----------|------|----------------------------|
+ |id |int32 | Index of the sample |
+ |input |string| Input code string |
+ |gt |string| Code string to be predicted|
+
+ ### Data Splits
+
+ | name |train|
+ |------|----:|
+ |java | 3000|
+ |python|10000|
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ https://github.com/microsoft, https://github.com/madlag
+
+ ### Licensing Information
+
+ Computational Use of Data Agreement (C-UDA) License.
+
+ ### Citation Information
+
+ ```
+ @article{raychev2016probabilistic,
+ title={Probabilistic Model for Code with Decision Trees},
+ author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
+ journal={ACM SIGPLAN Notices},
+ pages={731--747},
+ year={2016},
+ publisher={ACM New York, NY, USA}
+ }
+ @inproceedings{allamanis2013mining,
+ title={Mining Source Code Repositories at Massive Scale using Language Modeling},
+ author={Allamanis, Miltiadis and Sutton, Charles},
+ booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
+ pages={207--216},
+ year={2013},
+ organization={IEEE}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
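
The card states that models are evaluated by exact match and edit similarity. A minimal offline sketch of those two metrics, using `difflib.SequenceMatcher` as a stand-in for the benchmark's edit-similarity scorer (an assumption for illustration; the official CodeXGLUE evaluation script may compute similarity differently):

```python
import difflib

def exact_match(pred: str, gt: str) -> float:
    """1.0 if the predicted line equals the ground-truth line, else 0.0."""
    return float(pred.strip() == gt.strip())

def edit_similarity(pred: str, gt: str) -> float:
    """Character-level similarity ratio in [0, 1]; 1.0 means identical strings."""
    return difflib.SequenceMatcher(None, pred.strip(), gt.strip()).ratio()

ground_truth = "return self . _data [ key ]"
print(exact_match("return self . _data [ key ]", ground_truth))  # 1.0
print(edit_similarity("return self", ground_truth) < 1.0)        # True
```

A partial prediction therefore scores 0 on exact match but can still receive partial credit on edit similarity.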
code_x_glue_cc_code_completion_line.py ADDED
@@ -0,0 +1,80 @@
+ import json
+ from typing import List
+
+ import datasets
+
+ from .common import Child
+ from .generated_definitions import DEFINITIONS
+
+
+ _DESCRIPTION = """Complete the unfinished line given the previous context. Models are evaluated by exact match and edit similarity.
+ We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems perform well at token-level completion but fail to complete an unfinished line such as a method call with specific parameters, a function signature, a loop condition, or a variable definition. Once a software developer has typed one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
+ The line-level code completion task shares its train/dev data with token-level completion. After training a model on CodeCompletion-token, you can use it directly to test on line-level completion."""
+
+ _CITATION = """@article{raychev2016probabilistic,
+ title={Probabilistic Model for Code with Decision Trees},
+ author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
+ journal={ACM SIGPLAN Notices},
+ pages={731--747},
+ year={2016},
+ publisher={ACM New York, NY, USA}
+ }
+ @inproceedings{allamanis2013mining,
+ title={Mining Source Code Repositories at Massive Scale using Language Modeling},
+ author={Allamanis, Miltiadis and Sutton, Charles},
+ booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
+ pages={207--216},
+ year={2013},
+ organization={IEEE}
+ }"""
+
+
+ class CodeXGlueCcCodeCompletionLineImpl(Child):
+     _DESCRIPTION = _DESCRIPTION
+     _CITATION = _CITATION
+
+     _FEATURES = {
+         "id": datasets.Value("int32"),  # Index of the sample
+         "input": datasets.Value("string"),  # Input code string
+         "gt": datasets.Value("string"),  # Code string to be predicted
+     }
+
+     _SUPERVISED_KEYS = ["gt"]
+
+     def generate_urls(self, split_name):
+         yield "data", "test.json"
+
+     def _generate_examples(self, split_name, file_paths):
+         with open(file_paths["data"], encoding="utf-8") as f:
+             for idx, line in enumerate(f):
+                 entry = json.loads(line)
+                 entry["id"] = idx
+                 yield idx, entry
+
+
+ CLASS_MAPPING = {
+     "CodeXGlueCcCodeCompletionLine": CodeXGlueCcCodeCompletionLineImpl,
+ }
+
+
+ class CodeXGlueCcCodeCompletionLine(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
+     ]
+
+     def _info(self):
+         name = self.config.name
+         info = DEFINITIONS[name]
+         if info["class_name"] in CLASS_MAPPING:
+             self.child = CLASS_MAPPING[info["class_name"]](info)
+         else:
+             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
+         ret = self.child._info()
+         return ret
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
+         return self.child._split_generators(dl_manager=dl_manager)
+
+     def _generate_examples(self, split_name, file_paths):
+         return self.child._generate_examples(split_name, file_paths)
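
The `_generate_examples` method in the loading script treats `test.json` as JSON Lines, one object per line, and overwrites each entry's `id` field with its line index. A self-contained sketch of that parsing step (the sample lines below are hypothetical, not taken from the real file):

```python
import json

def generate_examples(lines):
    """Mirror the loader: parse each JSON line and set id to the line index."""
    for idx, line in enumerate(lines):
        entry = json.loads(line)
        entry["id"] = idx
        yield idx, entry

jsonl = [
    '{"input": "<s> def add ( a , b ) : <EOL> return", "gt": "a + b", "id": 999}',
    '{"input": "<s> for i in range (", "gt": "10 ) :", "id": 999}',
]
examples = dict(generate_examples(jsonl))
print(examples[0]["gt"])   # a + b
print(examples[1]["id"])   # 1  (any id present in the file is overwritten)
```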
common.py ADDED
@@ -0,0 +1,75 @@
+ from typing import List
+
+ import datasets
+
+
+ # Citation, taken from https://github.com/microsoft/CodeXGLUE
+ _DEFAULT_CITATION = """@article{CodeXGLUE,
+ title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
+ year={2020},}"""
+
+
+ class Child:
+     _DESCRIPTION = None
+     _FEATURES = None
+     _CITATION = None
+     SPLITS = {"train": datasets.Split.TRAIN}
+     _SUPERVISED_KEYS = None
+
+     def __init__(self, info):
+         self.info = info
+
+     def homepage(self):
+         return self.info["project_url"]
+
+     def _info(self):
+         # This is the description that will appear on the datasets page.
+         return datasets.DatasetInfo(
+             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
+             features=datasets.Features(self._FEATURES),
+             homepage=self.homepage(),
+             citation=self._CITATION or _DEFAULT_CITATION,
+             supervised_keys=self._SUPERVISED_KEYS,
+         )
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
+         SPLITS = self.SPLITS
+         _URL = self.info["raw_url"]
+         urls_to_download = {}
+         for split in SPLITS:
+             if split not in urls_to_download:
+                 urls_to_download[split] = {}
+
+             for key, url in self.generate_urls(split):
+                 if not url.startswith("http"):
+                     url = _URL + "/" + url
+                 urls_to_download[split][key] = url
+
+         downloaded_files = {}
+         for k, v in urls_to_download.items():
+             downloaded_files[k] = dl_manager.download_and_extract(v)
+
+         return [
+             datasets.SplitGenerator(
+                 name=SPLITS[k],
+                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
+             )
+             for k in SPLITS
+         ]
+
+     def check_empty(self, entries):
+         all_empty = all(v == "" for v in entries.values())
+         all_non_empty = all(v != "" for v in entries.values())
+
+         if not all_non_empty and not all_empty:
+             raise RuntimeError("Parallel data files should have the same number of lines.")
+
+         return all_empty
+
+
+ class TrainValidTestChild(Child):
+     SPLITS = {
+         "train": datasets.Split.TRAIN,
+         "valid": datasets.Split.VALIDATION,
+         "test": datasets.Split.TEST,
+     }
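
In `_split_generators` above, any URL yielded by `generate_urls` that does not start with `http` is resolved against the definition's `raw_url` base. A tiny sketch of that resolution rule in isolation:

```python
def resolve_url(url: str, raw_url: str) -> str:
    """Relative paths are joined onto the raw_url base; absolute URLs pass through."""
    if not url.startswith("http"):
        return raw_url + "/" + url
    return url

base = "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion"
print(resolve_url("test.json", base))
print(resolve_url("https://example.com/data.json", base))
```

This is why `generate_urls` in the loading script can yield just `"test.json"`: the base URL comes from `generated_definitions.py`.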
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"java": {"description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "citation": "@article{raychev2016probabilistic,\ntitle={Probabilistic Model for Code with Decision Trees},\nauthor={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\njournal={ACM SIGPLAN Notices},\npages={731--747},\nyear={2016},\npublisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\ntitle={Mining Source Code Repositories at Massive Scale using Language Modeling},\nauthor={Allamanis, Miltiadis and Sutton, Charles},\nbooktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\npages={207--216},\nyear={2013},\norganization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "input": {"dtype": "string", "id": null, "_type": "Value"}, "gt": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "gt", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_line", 
"config_name": "java", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5454783, "num_examples": 3000, "dataset_name": "code_x_glue_cc_code_completion_line"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion/test.json": {"num_bytes": 5523586, "checksum": "188e4ae5a8751871adb50fe48e8f1d50c6e2dca778fe53ff03c13b5a63f132af"}}, "download_size": 5523586, "post_processing_size": null, "dataset_size": 5454783, "size_in_bytes": 10978369}, "python": {"description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. 
After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "citation": "@article{raychev2016probabilistic,\ntitle={Probabilistic Model for Code with Decision Trees},\nauthor={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\njournal={ACM SIGPLAN Notices},\npages={731--747},\nyear={2016},\npublisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\ntitle={Mining Source Code Repositories at Massive Scale using Language Modeling},\nauthor={Allamanis, Miltiadis and Sutton, Charles},\nbooktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\npages={207--216},\nyear={2013},\norganization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "input": {"dtype": "string", "id": null, "_type": "Value"}, "gt": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "gt", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_line", "config_name": "python", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 24021562, "num_examples": 10000, "dataset_name": "code_x_glue_cc_code_completion_line"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/py150/line_completion/test.json": {"num_bytes": 24266715, "checksum": "39cb31c2263b25506d94384e9ace954cf3ec8d1fd7a4b7f62beb0c3846e5555c"}}, "download_size": 24266715, "post_processing_size": null, "dataset_size": 24021562, "size_in_bytes": 48288277}}
dummy/java/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ce45e39220c9a25f646cde5fa472460a0d35e32cac57cdc4bba8ce217894112
+ size 1272
dummy/python/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4868dd02a8c879b3c7e18e1b8dac99536119a488d3bfec7d45aa22e41f85f179
+ size 4191
generated_definitions.py ADDED
@@ -0,0 +1,24 @@
+ DEFINITIONS = {
+     "java": {
+         "class_name": "CodeXGlueCcCodeCompletionLine",
+         "dataset_type": "Code-Code",
+         "description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
+         "dir_name": "CodeCompletion-line",
+         "name": "java",
+         "parameters": {"language": "java", "original_language_name": "javaCorpus"},
+         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
+         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion",
+         "sizes": {"train": 3000},
+     },
+     "python": {
+         "class_name": "CodeXGlueCcCodeCompletionLine",
+         "dataset_type": "Code-Code",
+         "description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
+         "dir_name": "CodeCompletion-line",
+         "name": "python",
+         "parameters": {"language": "python", "original_language_name": "py150"},
+         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
+         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/py150/line_completion",
+         "sizes": {"train": 10000},
+     },
+ }