Datasets: parquet-converter committed
Commit • 885331c
1 Parent(s): 6b21575
Update parquet files
Browse files
- .gitattributes +0 -27
- README.md +0 -229
- code_x_glue_cc_code_completion_line.py +0 -80
- common.py +0 -75
- dataset_infos.json +0 -1
- generated_definitions.py +0 -24
- java/code_x_glue_cc_code_completion_line-train.parquet +3 -0
- python/code_x_glue_cc_code_completion_line-train.parquet +3 -0
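This commit removes the Python loading script and serves the two configs directly as Parquet. A minimal sketch of loading the converted data with the `datasets` library — the repository ID below is an assumption, and the actual Hub namespace may differ:

```python
# Sketch: load the converted Parquet configs via the datasets library.
# "code_x_glue_cc_code_completion_line" is an assumed repository ID; adjust as needed.
from datasets import load_dataset

java_train = load_dataset("code_x_glue_cc_code_completion_line", "java", split="train")
python_train = load_dataset("code_x_glue_cc_code_completion_line", "python", split="train")

print(java_train)                      # 3,000 rows with id / input / gt columns
print(python_train[0]["input"][:80])   # start of the context string for the first sample
```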
.gitattributes
DELETED
@@ -1,27 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,229 +0,0 @@
----
-annotations_creators:
-- found
-language_creators:
-- found
-language:
-- code
-license:
-- c-uda
-multilinguality:
-- monolingual
-size_categories:
-- 1K<n<10K
-- n<1K
-source_datasets:
-- original
-task_categories:
-- text-generation
-- fill-mask
-task_ids:
-- slot-filling
-pretty_name: CodeXGlueCcCodeCompletionLine
-configs:
-- go
-- java
-- javascript
-- php
-- python
-- ruby
-dataset_info:
-- config_name: java
-  features:
-  - name: id
-    dtype: int32
-  - name: input
-    dtype: string
-  - name: gt
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 5454783
-    num_examples: 3000
-  download_size: 5523586
-  dataset_size: 5454783
-- config_name: python
-  features:
-  - name: id
-    dtype: int32
-  - name: input
-    dtype: string
-  - name: gt
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 24021562
-    num_examples: 10000
-  download_size: 24266715
-  dataset_size: 24021562
----
-# Dataset Card for "code_x_glue_cc_code_completion_line"
-
-## Table of Contents
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits-sample-size)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
-
-### Dataset Summary
-
-CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
-
-Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.
-We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems behave well in token-level completion, but fail to complete an unfinished line such as a method call with specific parameters, a function signature, a loop condition, or a variable definition. When a software developer has finished one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
-The line-level code completion task shares its train/dev data with token-level completion. After training a model on CodeCompletion-token, you can use it directly for testing on line-level completion.
-
-### Supported Tasks and Leaderboards
-
-- `slot-filling`: The dataset can be used to train a model for completing entire code lines.
-
-### Languages
-
-- Java **programming** language
-- Python **programming** language
-
-## Dataset Structure
-
-### Data Instances
-
-#### java
-
-An example of 'train' looks as follows.
-```
-{
-    "gt": "",
-    "id": 0,
-    "input": "<s> package org . rubypeople . rdt . internal . ui . rubyeditor ; import java . util . Iterator ; import org . eclipse . core . resources . IMarker ; import org . eclipse . ui . texteditor . MarkerAnnotation ; import org . eclipse . ui . texteditor . MarkerUtilities ; import org . rubypeople . rdt . core . IRubyElement ; import org . rubypeople . rdt . core . IRubyModelMarker ; import org . rubypeople . rdt . core . IRubyScript ; import org . rubypeople . rdt . core . RubyCore ; public class RubyMarkerAnnotation extends MarkerAnnotation implements IRubyAnnotation { public static final String RUBY_MARKER_TYPE_PREFIX = \"\" ; public static final String ERROR_ANNOTATION_TYPE = \"\" ; public static final String WARNING_ANNOTATION_TYPE = \"\" ; public static final String INFO_ANNOTATION_TYPE = \"\" ; public static final String TASK_ANNOTATION_TYPE = \"\" ; private IRubyAnnotation fOverlay ; public RubyMarkerAnnotation ( IMarker marker ) { super ( marker ) ; } public String [ ] getArguments ( ) { return null ; } public int getId ( ) { IMarker marker = getMarker ( ) ; if ( marker == null || ! marker . exists ( ) ) return - 1 ; if ( isProblem ( ) ) return marker . getAttribute ( IRubyModelMarker . ID , - 1 ) ; return - 1 ; } public boolean isProblem ( ) { String type = getType ( ) ; return WARNING_ANNOTATION_TYPE . equals ( type ) || ERROR_ANNOTATION_TYPE . equals"
-}
-```
-
-#### python
-
-An example of 'train' looks as follows.
-```
-{
-    "gt": "",
-    "id": 0,
-    "input": "<s> from __future__ import absolute_import <EOL> import weakref <EOL> import operator <EOL> from . compat import threading , itertools_filterfalse <EOL> from . import py2k <EOL> import types <EOL> EMPTY_SET = frozenset ( ) <EOL> class KeyedTuple ( tuple ) : <EOL> def __new__ ( cls , vals , labels = None ) : <EOL> t = tuple . __new__ ( cls , vals ) <EOL> t . _labels = [ ] <EOL> if labels : <EOL> t . __dict__ . update ( zip ( labels , vals ) ) <EOL> t . _labels = labels <EOL> return t <EOL> def keys ( self ) : <EOL> return [ l for l in self . _labels if l is not None ] <EOL> @ property <EOL> def _fields ( self ) : <EOL> return tuple ( self . keys ( ) ) <EOL> def _asdict ( self ) : <EOL> return dict ( ( key , self . __dict__ [ key ] ) for key in self . keys ( ) ) <EOL> class ImmutableContainer ( object ) : <EOL> def _immutable ( self , * arg , ** kw ) : <EOL> raise TypeError ( \"\" % self . __class__ . __name__ ) <EOL> __delitem__ = __setitem__ = __setattr__ = _immutable <EOL> class immutabledict ( ImmutableContainer , dict ) : <EOL> clear = pop = popitem = setdefault = update = ImmutableContainer . _immutable <EOL> def __new__ ( cls , * args ) : <EOL> new = dict . __new__ ( cls ) <EOL> dict . __init__ ( new , * args ) <EOL> return new <EOL> def __init__ ( self , * args ) : <EOL> pass <EOL> def __reduce__ ( self ) : <EOL> return immutabledict , ( dict ( self ) , ) <EOL> def union ( self , d ) : <EOL> if not self : <EOL> return immutabledict ( d ) <EOL> else : <EOL> d2 = immutabledict ( self ) <EOL> dict . update ( d2 , d ) <EOL> return d2 <EOL> def __repr__ ( self ) : <EOL> return \"\" % dict . __repr__ ( self ) <EOL> class Properties ( object ) : <EOL> def __init__ ( self , data ) : <EOL> self . __dict__ [ '_data' ] = data <EOL> def __len__ ( self ) : <EOL> return len ( self . _data ) <EOL> def __iter__ ( self ) : <EOL> return iter ( list ( self . _data . values ( ) ) ) <EOL> def __add__ ( self , other ) : <EOL> return list ( self ) + list ( other ) <EOL> def __setitem__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getitem__ ( self , key ) : <EOL> return self . _data [ key ] <EOL> def __delitem__ ( self , key ) : <EOL> del self . _data [ key ] <EOL> def __setattr__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getstate__ ( self ) : <EOL> return { '_data' : self . __dict__ [ '_data' ] } <EOL> def __setstate__ ( self , state ) : <EOL> self . __dict__ [ '_data' ] = state [ '_data' ] <EOL> def __getattr__ ( self , key ) : <EOL> try : <EOL> return self . _data [ key ] <EOL> except KeyError : <EOL> raise AttributeError ( key ) <EOL> def __contains__ ( self , key ) : <EOL> return key in self . _data <EOL> def as_immutable ( self ) : <EOL> return ImmutableProperties ( self . _data ) <EOL> def update ( self , value ) : <EOL> self . _data . update ( value ) <EOL> def get ( self , key , default = None ) : <EOL> if key in self : <EOL> return self [ key ] <EOL> else : <EOL> return default <EOL> def keys ( self ) : <EOL> return list ( self . _data ) <EOL> def values ( self ) : <EOL> return list ( self . _data . values ( ) ) <EOL> def items ( self ) : <EOL> return list ( self . _data . items ( ) ) <EOL> def has_key ( self , key ) : <EOL> return key in self . _data <EOL> def clear ( self ) : <EOL> self . _data . clear ( ) <EOL> class OrderedProperties ( Properties ) : <EOL> def __init__ ( self ) : <EOL> Properties . __init__ ( self , OrderedDict ( ) ) <EOL> class ImmutableProperties ( ImmutableContainer , Properties ) : <EOL> class OrderedDict ( dict ) : <EOL> def __init__ ( self , ____sequence = None , ** kwargs ) : <EOL> self . _list = [ ] <EOL> if ____sequence is None : <EOL> if kwargs : <EOL> self . update ( ** kwargs ) <EOL> else : <EOL> self . update ( ____sequence , ** kwargs ) <EOL> def clear ( self ) : <EOL> self . _list = [ ] <EOL> dict . clear ( self ) <EOL> def copy ( self ) : <EOL> return self . __copy__ ( ) <EOL> def __copy__ ( self ) : <EOL> return OrderedDict ( self ) <EOL> def sort ( self , * arg , ** kw ) : <EOL> self . _list . sort ( * arg , ** kw ) <EOL> def update ( self , ____sequence = None , ** kwargs ) : <EOL> if ____sequence is not None : <EOL> if hasattr ( ____sequence , 'keys' ) : <EOL> for key in ____sequence . keys ( ) : <EOL> self . __setitem__ ( key , ____sequence [ key ] ) <EOL> else : <EOL> for key , value in ____sequence : <EOL> self [ key ] = value <EOL> if kwargs : <EOL> self . update ( kwargs ) <EOL> def setdefault ( self , key , value ) : <EOL> if key not in self : <EOL> self . __setitem__ ( key , value ) <EOL> return value <EOL> else : <EOL> return self . __getitem__ ( key ) <EOL> def __iter__ ( self ) : <EOL> return iter ( self . _list ) <EOL> def keys ( self ) : <EOL> return list ( self ) <EOL> def values ( self ) : <EOL> return [ self [ key ] for key in self . _list ] <EOL> def items ( self ) : <EOL> return [ ( key , self [ key ] ) for key in self . _list ] <EOL> if py2k : <EOL> def itervalues ( self ) : <EOL> return iter ( self . values ( ) ) <EOL> def iterkeys ( self ) : <EOL> return iter ( self ) <EOL> def iteritems ( self ) : <EOL> return iter ( self . items ( ) ) <EOL> def __setitem__ ( self , key , object ) : <EOL> if key not in self : <EOL> try : <EOL> self . _list . append ( key ) <EOL> except AttributeError : <EOL> self . _list = [ key ] <EOL> dict . __setitem__ ( self , key , object ) <EOL> def __delitem__ ( self , key ) : <EOL> dict . __delitem__ ( self , key ) <EOL> self . _list . remove ( key ) <EOL> def pop ( self , key , * default ) : <EOL> present = key in self <EOL> value = dict . pop ( self , key , * default ) <EOL> if present : <EOL> self . _list . remove ( key ) <EOL> return value <EOL> def popitem ( self ) : <EOL> item = dict . popitem ( self ) <EOL> self . _list . remove ( item [ 0 ] ) <EOL> return item <EOL> class OrderedSet ( set ) : <EOL> def __init__ ( self , d = None ) : <EOL> set . __init__ ( self ) <EOL> self . _list = [ ] <EOL> if d is not None : <EOL>"
-}
-```
-
-### Data Fields
-
-In the following, each data field is explained for each config. The data fields are the same among all splits.
-
-#### java, python
-
-|field name| type | description |
-|----------|------|----------------------------|
-|id |int32 | Index of the sample |
-|input |string| Input code string |
-|gt |string| Code string to be predicted|
-
-### Data Splits
-
-| name |train|
-|------|----:|
-|java | 3000|
-|python|10000|
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-https://github.com/microsoft, https://github.com/madlag
-
-### Licensing Information
-
-Computational Use of Data Agreement (C-UDA) License.
-
-### Citation Information
-
-```
-@article{raychev2016probabilistic,
-title={Probabilistic Model for Code with Decision Trees},
-author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
-journal={ACM SIGPLAN Notices},
-pages={731--747},
-year={2016},
-publisher={ACM New York, NY, USA}
-}
-@inproceedings{allamanis2013mining,
-title={Mining Source Code Repositories at Massive Scale using Language Modeling},
-author={Allamanis, Miltiadis and Sutton, Charles},
-booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
-pages={207--216},
-year={2013},
-organization={IEEE}
-}
-```
-
-### Contributions
-
-Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
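The deleted card states that models on this task are scored by exact match and edit similarity against the `gt` field. As a rough illustration only — the official CodeXGLUE evaluator defines its own edit-similarity metric, and `difflib` is used here purely as a stand-in — scoring a single prediction could look like:

```python
# Illustrative scoring helpers for the two metrics named in the card.
# difflib.SequenceMatcher is a stand-in, not the official CodeXGLUE implementation.
import difflib

def exact_match(prediction: str, ground_truth: str) -> bool:
    return prediction.strip() == ground_truth.strip()

def edit_similarity(prediction: str, ground_truth: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings are identical.
    return difflib.SequenceMatcher(None, prediction.strip(), ground_truth.strip()).ratio()

print(exact_match("return x ;", "return x ;"))                   # True
print(round(edit_similarity("return x ;", "return x + 1 ;"), 2))
```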
code_x_glue_cc_code_completion_line.py
DELETED
@@ -1,80 +0,0 @@
-import json
-from typing import List
-
-import datasets
-
-from .common import Child
-from .generated_definitions import DEFINITIONS
-
-
-_DESCRIPTION = """Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.
-We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems behave well in token-level completion, but fail to complete an unfinished line such as a method call with specific parameters, a function signature, a loop condition, or a variable definition. When a software developer has finished one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
-The line-level code completion task shares its train/dev data with token-level completion. After training a model on CodeCompletion-token, you can use it directly for testing on line-level completion."""
-
-_CITATION = """@article{raychev2016probabilistic,
-title={Probabilistic Model for Code with Decision Trees},
-author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
-journal={ACM SIGPLAN Notices},
-pages={731--747},
-year={2016},
-publisher={ACM New York, NY, USA}
-}
-@inproceedings{allamanis2013mining,
-title={Mining Source Code Repositories at Massive Scale using Language Modeling},
-author={Allamanis, Miltiadis and Sutton, Charles},
-booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
-pages={207--216},
-year={2013},
-organization={IEEE}
-}"""
-
-
-class CodeXGlueCcCodeCompletionLineImpl(Child):
-    _DESCRIPTION = _DESCRIPTION
-    _CITATION = _CITATION
-
-    _FEATURES = {
-        "id": datasets.Value("int32"),  # Index of the sample
-        "input": datasets.Value("string"),  # Input code string
-        "gt": datasets.Value("string"),  # Code string to be predicted
-    }
-
-    _SUPERVISED_KEYS = ["gt"]
-
-    def generate_urls(self, split_name):
-        yield "data", "test.json"
-
-    def _generate_examples(self, split_name, file_paths):
-        with open(file_paths["data"], encoding="utf-8") as f:
-            for idx, line in enumerate(f):
-                entry = json.loads(line)
-                entry["id"] = idx
-                yield idx, entry
-
-
-CLASS_MAPPING = {
-    "CodeXGlueCcCodeCompletionLine": CodeXGlueCcCodeCompletionLineImpl,
-}
-
-
-class CodeXGlueCcCodeCompletionLine(datasets.GeneratorBasedBuilder):
-    BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-    BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-    ]
-
-    def _info(self):
-        name = self.config.name
-        info = DEFINITIONS[name]
-        if info["class_name"] in CLASS_MAPPING:
-            self.child = CLASS_MAPPING[info["class_name"]](info)
-        else:
-            raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-        ret = self.child._info()
-        return ret
-
-    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-        return self.child._split_generators(dl_manager=dl_manager)
-
-    def _generate_examples(self, split_name, file_paths):
-        return self.child._generate_examples(split_name, file_paths)
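With the builder script above removed, the data can also be read without the `datasets` library at all. A minimal sketch, assuming a local checkout of this repository so the Parquet paths added in this commit exist, and that `pyarrow` (or `fastparquet`) is installed for `pandas.read_parquet`:

```python
# Sketch: read the converted Parquet splits directly, bypassing the removed builder.
# Paths assume a local clone of this dataset repository with LFS files pulled.
import pandas as pd

java_df = pd.read_parquet("java/code_x_glue_cc_code_completion_line-train.parquet")
python_df = pd.read_parquet("python/code_x_glue_cc_code_completion_line-train.parquet")

print(java_df.columns.tolist())       # expected: ['id', 'input', 'gt']
print(len(java_df), len(python_df))   # expected: 3000 10000
```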
common.py
DELETED
@@ -1,75 +0,0 @@
-from typing import List
-
-import datasets
-
-
-# Citation, taken from https://github.com/microsoft/CodeXGLUE
-_DEFAULT_CITATION = """@article{CodeXGLUE,
-title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
-year={2020},}"""
-
-
-class Child:
-    _DESCRIPTION = None
-    _FEATURES = None
-    _CITATION = None
-    SPLITS = {"train": datasets.Split.TRAIN}
-    _SUPERVISED_KEYS = None
-
-    def __init__(self, info):
-        self.info = info
-
-    def homepage(self):
-        return self.info["project_url"]
-
-    def _info(self):
-        # This is the description that will appear on the datasets page.
-        return datasets.DatasetInfo(
-            description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-            features=datasets.Features(self._FEATURES),
-            homepage=self.homepage(),
-            citation=self._CITATION or _DEFAULT_CITATION,
-            supervised_keys=self._SUPERVISED_KEYS,
-        )
-
-    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-        SPLITS = self.SPLITS
-        _URL = self.info["raw_url"]
-        urls_to_download = {}
-        for split in SPLITS:
-            if split not in urls_to_download:
-                urls_to_download[split] = {}
-
-            for key, url in self.generate_urls(split):
-                if not url.startswith("http"):
-                    url = _URL + "/" + url
-                urls_to_download[split][key] = url
-
-        downloaded_files = {}
-        for k, v in urls_to_download.items():
-            downloaded_files[k] = dl_manager.download_and_extract(v)
-
-        return [
-            datasets.SplitGenerator(
-                name=SPLITS[k],
-                gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-            )
-            for k in SPLITS
-        ]
-
-    def check_empty(self, entries):
-        all_empty = all([v == "" for v in entries.values()])
-        all_non_empty = all([v != "" for v in entries.values()])
-
-        if not all_non_empty and not all_empty:
-            raise RuntimeError("Parallel data files should have the same number of lines.")
-
-        return all_empty
-
-
-class TrainValidTestChild(Child):
-    SPLITS = {
-        "train": datasets.Split.TRAIN,
-        "valid": datasets.Split.VALIDATION,
-        "test": datasets.Split.TEST,
-    }
dataset_infos.json
DELETED
@@ -1 +0,0 @@
-{"java": {"description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "citation": "@article{raychev2016probabilistic,\ntitle={Probabilistic Model for Code with Decision Trees},\nauthor={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\njournal={ACM SIGPLAN Notices},\npages={731--747},\nyear={2016},\npublisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\ntitle={Mining Source Code Repositories at Massive Scale using Language Modeling},\nauthor={Allamanis, Miltiadis and Sutton, Charles},\nbooktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\npages={207--216},\nyear={2013},\norganization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "input": {"dtype": "string", "id": null, "_type": "Value"}, "gt": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "gt", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_line", "config_name": "java", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5454783, "num_examples": 3000, "dataset_name": "code_x_glue_cc_code_completion_line"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion/test.json": {"num_bytes": 5523586, "checksum": "188e4ae5a8751871adb50fe48e8f1d50c6e2dca778fe53ff03c13b5a63f132af"}}, "download_size": 5523586, "post_processing_size": null, "dataset_size": 5454783, "size_in_bytes": 10978369}, "python": {"description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "citation": "@article{raychev2016probabilistic,\ntitle={Probabilistic Model for Code with Decision Trees},\nauthor={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\njournal={ACM SIGPLAN Notices},\npages={731--747},\nyear={2016},\npublisher={ACM New York, NY, USA}\n}\n@inproceedings{allamanis2013mining,\ntitle={Mining Source Code Repositories at Massive Scale using Language Modeling},\nauthor={Allamanis, Miltiadis and Sutton, Charles},\nbooktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\npages={207--216},\nyear={2013},\norganization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "input": {"dtype": "string", "id": null, "_type": "Value"}, "gt": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "gt", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_completion_line", "config_name": "python", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 24021562, "num_examples": 10000, "dataset_name": "code_x_glue_cc_code_completion_line"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/py150/line_completion/test.json": {"num_bytes": 24266715, "checksum": "39cb31c2263b25506d94384e9ace954cf3ec8d1fd7a4b7f62beb0c3846e5555c"}}, "download_size": 24266715, "post_processing_size": null, "dataset_size": 24021562, "size_in_bytes": 48288277}}
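The deleted `dataset_infos.json` records the source URL and SHA-256 checksum that the old script used for each download. A small sketch of checking one of those files by hand, with the URL and checksum copied from the Java entry above:

```python
# Sketch: verify the Java source file against the checksum recorded in dataset_infos.json.
import hashlib
import urllib.request

URL = (
    "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/"
    "Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion/test.json"
)
EXPECTED_SHA256 = "188e4ae5a8751871adb50fe48e8f1d50c6e2dca778fe53ff03c13b5a63f132af"

data = urllib.request.urlopen(URL).read()
assert hashlib.sha256(data).hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print(f"OK: {len(data)} bytes")
```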
generated_definitions.py
DELETED
@@ -1,24 +0,0 @@
-DEFINITIONS = {
-    "java": {
-        "class_name": "CodeXGlueCcCodeCompletionLine",
-        "dataset_type": "Code-Code",
-        "description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
-        "dir_name": "CodeCompletion-line",
-        "name": "java",
-        "parameters": {"language": "java", "original_language_name": "javaCorpus"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/javaCorpus/line_completion",
-        "sizes": {"train": 3000},
-    },
-    "python": {
-        "class_name": "CodeXGlueCcCodeCompletionLine",
-        "dataset_type": "Code-Code",
-        "description": "CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
-        "dir_name": "CodeCompletion-line",
-        "name": "python",
-        "parameters": {"language": "python", "original_language_name": "py150"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/CodeCompletion-line/dataset/py150/line_completion",
-        "sizes": {"train": 10000},
-    },
-}
java/code_x_glue_cc_code_completion_line-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ddf258b914ada27cf327b8bdd3e440c0f641aaa8316975282afd9d6149d5d65
+size 1696678
python/code_x_glue_cc_code_completion_line-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5565acfa660afc8f78f799b15b4345988c6caeebe241e155f01e573ff65f3d6f
+size 8140669
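Both added files are Git LFS pointers; the Parquet blobs themselves live in LFS storage. A minimal sketch of fetching one programmatically with `huggingface_hub` — the repository ID is again an assumption and may need an organization prefix:

```python
# Sketch: download one of the added Parquet files through huggingface_hub.
# REPO_ID is an assumption; replace it with the dataset's actual Hub ID.
from huggingface_hub import hf_hub_download

REPO_ID = "code_x_glue_cc_code_completion_line"
local_path = hf_hub_download(
    repo_id=REPO_ID,
    repo_type="dataset",
    filename="java/code_x_glue_cc_code_completion_line-train.parquet",
)
print(local_path)  # local cached copy of the 1,696,678-byte Parquet file
```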