Languages: English
Multilinguality: monolingual
Size Categories: 1M<n<10M
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original
Commit 73a0727 (0 parents)
system (HF staff) committed: Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +142 -0
  3. dataset_infos.json +0 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. great_code.py +162 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
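These patterns route large binary artifacts (archives, model weights, Arrow/Parquet files) through Git LFS instead of storing them in the repository directly. As a rough illustration, here is a hedged Python sketch of how such glob patterns classify a path; `fnmatch` approximates but does not exactly reproduce Git's `.gitattributes` matching (e.g. for `saved_model/**/*`):

```python
import fnmatch

# An illustrative subset of the patterns from the .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.parquet", "*.zip", "*tfevents*"]

def routed_to_lfs(path: str) -> bool:
    """Return True if the file would match one of the LFS glob patterns."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(routed_to_lfs("dummy/1.0.0/dummy_data.zip"))  # True
print(routed_to_lfs("great_code.py"))               # False
```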
README.md ADDED
@@ -0,0 +1,142 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1M<n<5M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - table-to-text
+ ---
+
+ # Dataset Card for great_code
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** None
+ - **Repository:** https://github.com/google-research-datasets/great
+ - **Paper:** https://openreview.net/forum?id=B1lnbRNtwr
+ - **Leaderboard:** [More Information Needed]
+ - **Point of Contact:** [More Information Needed]
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance describes a single tokenized Python function, whether it contains a synthetically introduced variable-misuse bug, and the program-graph edges over its tokens. A hypothetical instance is sketched below.
+
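+ The values in this example are made up to match the feature schema declared in `great_code.py`; they are not drawn from the released data:
+
+ ```python
+ example = {
+     "id": 0,
+     "source_tokens": ["def", "f", "(", "a", ",", "b", ")", ":", "return", "b"],
+     "has_bug": True,
+     "error_location": 9,              # token index of the misused variable
+     "repair_candidates": ["3", "5"],  # candidate token indices, stored as strings
+     "bug_kind": 1,
+     "bug_kind_name": "VARIABLE_MISUSE",
+     "repair_targets": [3],
+     "edges": [
+         [{"before_index": 3, "after_index": 9, "edge_type": 5, "edge_type_name": "LAST_READ"}]
+     ],
+     "provenances": [
+         {"datasetProvenance": {"datasetName": "ETHPy150Open", "filepath": "some/file.py", "license": "apache-2.0", "note": "illustrative"}}
+     ],
+ }
+ ```
+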
+ ### Data Fields
+
+ As declared in `great_code.py`:
+
+ - `id`: running index of the example within its split
+ - `source_tokens`: the sequence of source-code tokens for the function
+ - `has_bug`: whether the function contains a variable-misuse bug
+ - `error_location`: token index of the error, if any
+ - `repair_candidates`: candidate token indices for a repair, stored as strings
+ - `bug_kind` / `bug_kind_name`: numeric id and name of the bug kind (e.g. `VARIABLE_MISUSE`)
+ - `repair_targets`: token indices of the correct repair(s)
+ - `edges`: program-graph edges, each with `before_index`, `after_index`, `edge_type`, and `edge_type_name`
+ - `provenances`: provenance of the original source file (`datasetName`, `filepath`, `license`, `note`)
+
+ ### Data Splits
+
+ The dataset ships with `train`, `validation`, and `test` splits, each built from 300 shard files in the upstream repository. Exact instance counts: [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
dataset_infos.json ADDED
The diff for this file is too large to render.
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:455dee21b3220e69db465c09bca16900da304dab2673f20dc4dd4ca942a3e4f4
+ size 3650
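The three lines above are a Git LFS pointer, not the zip archive itself; LFS fetches the real bytes at checkout time. A minimal, hedged sketch of parsing such a pointer in Python (the `pointer_text` literal simply mirrors the lines above):

```python
# Parse the key/value lines of a Git LFS pointer file.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:455dee21b3220e69db465c09bca16900da304dab2673f20dc4dd4ca942a3e4f4
size 3650
"""

fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
sha256 = fields["oid"].split(":", 1)[1]  # hex digest of the real file
size_bytes = int(fields["size"])         # size of the real file in bytes
print(sha256, size_bytes)
```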
great_code.py ADDED
@@ -0,0 +1,162 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ import json
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ The dataset for the variable-misuse task, described in the ICLR 2020 paper 'Global Relational Models of Source Code' [https://openreview.net/forum?id=B1lnbRNtwr].
+
+ This is the public version of the dataset used in that paper. The original, used to produce the graphs in the paper, could not be open-sourced due to licensing issues. See the associated public code repository [https://github.com/VHellendoorn/ICLR20-Great] for results produced from this dataset.
+
+ This dataset was generated synthetically from the corpus of Python code in the ETH Py150 Open dataset [https://github.com/google-research-datasets/eth_py150_open].
+ """
+ _HOMEPAGE_URL = ""
+ _CITATION = """\
+ @inproceedings{DBLP:conf/iclr/HellendoornSSMB20,
+   author    = {Vincent J. Hellendoorn and
+                Charles Sutton and
+                Rishabh Singh and
+                Petros Maniatis and
+                David Bieber},
+   title     = {Global Relational Models of Source Code},
+   booktitle = {8th International Conference on Learning Representations, {ICLR} 2020,
+                Addis Ababa, Ethiopia, April 26-30, 2020},
+   publisher = {OpenReview.net},
+   year      = {2020},
+   url       = {https://openreview.net/forum?id=B1lnbRNtwr},
+   timestamp = {Thu, 07 May 2020 17:11:47 +0200},
+   biburl    = {https://dblp.org/rec/conf/iclr/HellendoornSSMB20.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ """
+
+ # Each split is hosted as 300 sharded text files, one JSON object per line.
+ _TRAIN_URLS = [
+     f"https://raw.githubusercontent.com/google-research-datasets/great/master/train/train__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
+     for x in range(300)
+ ]
+ _TEST_URLS = [
+     f"https://raw.githubusercontent.com/google-research-datasets/great/master/eval/eval__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
+     for x in range(300)
+ ]
+ _VALID_URLS = [
+     f"https://raw.githubusercontent.com/google-research-datasets/great/master/dev/dev__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
+     for x in range(300)
+ ]
+
+
+ class GreatCode(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("int32"),
+                     "source_tokens": datasets.Sequence(datasets.Value("string")),
+                     "has_bug": datasets.Value("bool"),
+                     "error_location": datasets.Value("int32"),
+                     "repair_candidates": datasets.Sequence(datasets.Value("string")),
+                     "bug_kind": datasets.Value("int32"),
+                     "bug_kind_name": datasets.Value("string"),
+                     "repair_targets": datasets.Sequence(datasets.Value("int32")),
+                     # Raw edges arrive as 4-tuples [before, after, type, type_name];
+                     # they are exposed as singleton lists of dicts.
+                     "edges": [
+                         [
+                             {
+                                 "before_index": datasets.Value("int32"),
+                                 "after_index": datasets.Value("int32"),
+                                 "edge_type": datasets.Value("int32"),
+                                 "edge_type_name": datasets.Value("string"),
+                             }
+                         ]
+                     ],
+                     "provenances": [
+                         {
+                             "datasetProvenance": {
+                                 "datasetName": datasets.Value("string"),
+                                 "filepath": datasets.Value("string"),
+                                 "license": datasets.Value("string"),
+                                 "note": datasets.Value("string"),
+                             }
+                         }
+                     ],
+                 },
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # download_and_extract on a list of URLs returns a list of local paths.
+         train_path = dl_manager.download_and_extract(_TRAIN_URLS)
+         valid_path = dl_manager.download_and_extract(_VALID_URLS)
+         test_path = dl_manager.download_and_extract(_TEST_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "datapath": train_path,
+                     "datatype": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "datapath": valid_path,
+                     "datatype": "valid",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "datapath": test_path,
+                     "datatype": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, datapath, datatype):
+         # Keep one counter running across all shards so that example keys
+         # (and ids) stay unique within the split, rather than restarting
+         # the enumeration for every shard file.
+         example_counter = 0
+         for dp in datapath:
+             with open(dp, "r", encoding="utf-8") as json_file:
+                 for json_str in json_file:
+                     result = json.loads(json_str)
+                     response = {
+                         "id": example_counter,
+                         "source_tokens": result["source_tokens"],
+                         "has_bug": result["has_bug"],
+                         "error_location": result["error_location"],
+                         "repair_candidates": [str(x) for x in result["repair_candidates"]],
+                         "bug_kind": result["bug_kind"],
+                         "bug_kind_name": result["bug_kind_name"],
+                         "repair_targets": result["repair_targets"],
+                         "edges": [
+                             [
+                                 {
+                                     "before_index": edge[0],
+                                     "after_index": edge[1],
+                                     "edge_type": edge[2],
+                                     "edge_type_name": edge[3],
+                                 }
+                             ]
+                             for edge in result["edges"]
+                         ],
+                         "provenances": result["provenances"],
+                     }
+                     yield example_counter, response
+                     example_counter += 1
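Once published on the Hub, this builder is used through the standard `datasets` API. A minimal usage sketch, assuming the dataset is available under the name `great_code` (note that building each split downloads 300 shard files, so the first call takes a while):

```python
from datasets import load_dataset

# Downloads all shards and materializes the train/validation/test splits.
dataset = load_dataset("great_code")
print(dataset)

sample = dataset["train"][0]
print(sample["bug_kind_name"], sample["has_bug"])
print(len(sample["source_tokens"]), "tokens,", len(sample["edges"]), "edges")
```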