parquet-converter committed
Commit: 104079b
1 Parent(s): 67caa22

Update parquet files
README.md DELETED
@@ -1,200 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-sa-3.0
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - table-to-text
- task_ids: []
- paperswithcode_id: null
- pretty_name: GREAT
- dataset_info:
-   features:
-   - name: id
-     dtype: int32
-   - name: source_tokens
-     sequence: string
-   - name: has_bug
-     dtype: bool
-   - name: error_location
-     dtype: int32
-   - name: repair_candidates
-     sequence: string
-   - name: bug_kind
-     dtype: int32
-   - name: bug_kind_name
-     dtype: string
-   - name: repair_targets
-     sequence: int32
-   - name: edges
-     list:
-       list:
-       - name: before_index
-         dtype: int32
-       - name: after_index
-         dtype: int32
-       - name: edge_type
-         dtype: int32
-       - name: edge_type_name
-         dtype: string
-   - name: provenances
-     list:
-     - name: datasetProvenance
-       struct:
-       - name: datasetName
-         dtype: string
-       - name: filepath
-         dtype: string
-       - name: license
-         dtype: string
-       - name: note
-         dtype: string
-   splits:
-   - name: train
-     num_bytes: 14705534822
-     num_examples: 1798742
-   - name: validation
-     num_bytes: 1502956919
-     num_examples: 185656
-   - name: test
-     num_bytes: 7880762248
-     num_examples: 968592
-   download_size: 23310374002
-   dataset_size: 24089253989
- ---
-
- # Dataset Card for GREAT
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** None
- - **Repository:** https://github.com/google-research-datasets/great
- - **Paper:** https://openreview.net/forum?id=B1lnbRNtwr
- - **Leaderboard:** [More Information Needed]
- - **Point of Contact:** [More Information Needed]
-
- ### Dataset Summary
-
- [More Information Needed]
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- Here are some examples of data instances:
-
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- [More Information Needed]
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
- ### Contributions
-
- Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
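
The card's Data Instances section above was left empty. For orientation, a single record under the `dataset_info` schema in the front matter would look roughly like the sketch below; the field names and types come from the card, while every concrete value (tokens, indices, the edge-type label, the provenance strings) is invented for illustration:

```python
# Illustrative GREAT record -- schema from the card's dataset_info,
# all concrete values invented for this sketch.
example = {
    "id": 0,
    "source_tokens": ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "a"],
    "has_bug": True,
    "error_location": 11,             # token index of the misused variable
    "repair_candidates": ["3", "5"],  # candidate token positions, serialized as strings
    "bug_kind": 1,
    "bug_kind_name": "VARIABLE_MISUSE",
    "repair_targets": [5],            # token indices whose variable is the correct repair
    "edges": [                        # program-graph edges between token positions
        [
            {
                "before_index": 9,
                "after_index": 11,
                "edge_type": 0,
                "edge_type_name": "enum_CFG_NEXT",  # hypothetical edge-type label
            }
        ]
    ],
    "provenances": [
        {
            "datasetProvenance": {
                "datasetName": "ETHPy150Open",  # source corpus named in great_code.py below
                "filepath": "some/project/module.py",
                "license": "apache-2.0",
                "note": "",
            }
        }
    ],
}
```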
dataset_infos.json DELETED
The diff for this file is too large to render. See raw diff
 
default/partial-test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ada7f44b63a03119c8cb68fe779cc4692cb82dbe3eaa80557f1eda427fdfe60d
+ size 61899707
default/partial-test/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdd8c7ff57b4912d46b7b12e72bc5265874ac228303ee82fe639296ca03a5334
+ size 60777719
default/partial-test/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4345427e025e93e480f020cda02775337e71fab35419bb5796eab3fa042f7a40
+ size 61122705
default/partial-test/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b47ae66505c1a723571ae2279925162bbde6df79c2b71193bf1b1be12f552fa
+ size 60849466
default/partial-test/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9525a81c5085c2b32a230221f45d8c0bf20a1d1d65499fe3a4dc1be5441bfa8
+ size 61580996
default/partial-test/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca19d2ae882f1e485b588543d6997660b9d2a1754045f9b05828f1d83bd2d5eb
+ size 61381992
default/partial-test/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90ee719c850bacd0a957057c08154711b4490b71e99800f83e0323b93a364ac1
+ size 61437065
default/partial-test/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c120b7b23903ec6c2b5deb17cdf5513b0b2a9fe8fcd470e8b4f299b76087efe0
+ size 60981311
default/partial-test/0008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc431cd41dddda16a0696f35706834fd2a903321946b0727bb3a3f10c2a969c7
+ size 61107868
default/partial-test/0009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f62bea835e6fa5513f5fbf1f11a890d91ec8eb11fe90460a30e5a0f533f035bb
+ size 55741533
default/partial-train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75e28b58a31f3c747d5d85ded8b10b680b98f8f5aea080e87fe4a797995fe673
+ size 61126173
default/partial-train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81799feae213ceaafb0d0de69342b71f1133ef7a4280fd4e474628e204290e39
+ size 61171162
default/partial-train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6018de6c07622e6b467b96ef3c3dd3e39f403d69cba1712924f1c9efe3b67759
+ size 61458497
default/partial-train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ecc9652fc1215c902d7c1d4eacf3359d202343ee8f7c740f4cf74a86d3c14eb
+ size 61578780
default/partial-train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c016e77af27fdb97b61d44ec2076c514d09f88694cc08b06164d51cd42e02d19
+ size 61630869
default/partial-train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bee6a1364dbe4d06402de9ef2824a99ca5e47c90c2c8a939feca2c70527fecd1
+ size 61785080
default/partial-train/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:458ceacb21368b10593c1843b637270b735f967bf00e5865fe66775cf076ebde
+ size 61782934
default/partial-train/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98156fd0a32a25d6b4a9912d9cdae86e2c6e97f696ae6f29d75436b29bf57b0d
+ size 61305979
default/partial-train/0008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:764a22afb2d1edff530efc13e482b9903bc0b8ce5aeb2d6ea14e9cd634adfa0c
+ size 61594384
default/partial-train/0009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:526c67af19039f569651772a3e221452dcf3189320dd0f201bf284c5a4cff3e1
+ size 56688452
default/partial-validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd7482fb02e4f35fa3bef468798ece7defd81c1f78140fd3606f1fb205c85d70
+ size 60534637
default/partial-validation/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:692e50d5b7174b57957aec0ec7f7a5fd1f6a7c387ea9a164116ffd4fa8c494ee
+ size 60753856
default/partial-validation/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5b1a451d5eca8385443b58d0e09ffb5c73f051decb22a78e2db979f0577860e
+ size 59792193
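
Each `ADDED` file above is a Git LFS pointer rather than the Parquet data itself: `oid` is the sha256 of the stored object and `size` is its length in bytes. Once the objects are pulled (e.g. with `git lfs pull` in a local checkout), a shard can be inspected directly. A minimal sketch using pyarrow, which is one reasonable reader rather than anything this commit prescribes:

```python
import pyarrow.parquet as pq

# Assumes a local checkout where `git lfs pull` has replaced the pointer
# files with the actual Parquet shards.
table = pq.read_table("default/partial-test/0000.parquet")
print(table.num_rows)  # rows in this shard only, not the whole test split
print(table.schema)    # columns should mirror the features in the deleted card
```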
great_code.py DELETED
@@ -1,160 +0,0 @@
- # coding=utf-8
- # Copyright 2020 HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- import json
-
- import datasets
-
-
- _DESCRIPTION = """\
- The dataset for the variable-misuse task, described in the ICLR 2020 paper 'Global Relational Models of Source Code' [https://openreview.net/forum?id=B1lnbRNtwr]
-
- This is the public version of the dataset used in that paper. The original, used to produce the graphs in the paper, could not be open-sourced due to licensing issues. See the public associated code repository [https://github.com/VHellendoorn/ICLR20-Great] for results produced from this dataset.
-
- This dataset was generated synthetically from the corpus of Python code in the ETH Py150 Open dataset [https://github.com/google-research-datasets/eth_py150_open].
- """
- _HOMEPAGE_URL = ""
- _CITATION = """\
- @inproceedings{DBLP:conf/iclr/HellendoornSSMB20,
-   author    = {Vincent J. Hellendoorn and
-                Charles Sutton and
-                Rishabh Singh and
-                Petros Maniatis and
-                David Bieber},
-   title     = {Global Relational Models of Source Code},
-   booktitle = {8th International Conference on Learning Representations, {ICLR} 2020,
-                Addis Ababa, Ethiopia, April 26-30, 2020},
-   publisher = {OpenReview.net},
-   year      = {2020},
-   url       = {https://openreview.net/forum?id=B1lnbRNtwr},
-   timestamp = {Thu, 07 May 2020 17:11:47 +0200},
-   biburl    = {https://dblp.org/rec/conf/iclr/HellendoornSSMB20.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- """
- _TRAIN_URLS = [
-     f"https://raw.githubusercontent.com/google-research-datasets/great/master/train/train__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
-     for x in range(300)
- ]
- _TEST_URLS = [
-     f"https://raw.githubusercontent.com/google-research-datasets/great/master/eval/eval__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
-     for x in range(300)
- ]
- _VALID_URLS = [
-     f"https://raw.githubusercontent.com/google-research-datasets/great/master/dev/dev__VARIABLE_MISUSE__SStuB.txt-{x:05d}-of-00300"
-     for x in range(300)
- ]
-
-
- class GreatCode(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("int32"),
-                     "source_tokens": datasets.Sequence(datasets.Value("string")),
-                     "has_bug": datasets.Value("bool"),
-                     "error_location": datasets.Value("int32"),
-                     "repair_candidates": datasets.Sequence(datasets.Value("string")),
-                     "bug_kind": datasets.Value("int32"),
-                     "bug_kind_name": datasets.Value("string"),
-                     "repair_targets": datasets.Sequence(datasets.Value("int32")),
-                     "edges": [
-                         [
-                             {
-                                 "before_index": datasets.Value("int32"),
-                                 "after_index": datasets.Value("int32"),
-                                 "edge_type": datasets.Value("int32"),
-                                 "edge_type_name": datasets.Value("string"),
-                             }
-                         ]
-                     ],
-                     "provenances": [
-                         {
-                             "datasetProvenance": {
-                                 "datasetName": datasets.Value("string"),
-                                 "filepath": datasets.Value("string"),
-                                 "license": datasets.Value("string"),
-                                 "note": datasets.Value("string"),
-                             }
-                         }
-                     ],
-                 },
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE_URL,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         train_path = dl_manager.download_and_extract(_TRAIN_URLS)
-         valid_path = dl_manager.download_and_extract(_VALID_URLS)
-         test_path = dl_manager.download_and_extract(_TEST_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "datapath": train_path,
-                     "datatype": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "datapath": valid_path,
-                     "datatype": "valid",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "datapath": test_path,
-                     "datatype": "test",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, datapath, datatype):
-         for file_idx, dp in enumerate(datapath):
-             with open(dp, "r", encoding="utf-8") as json_file:
-                 for example_counter, json_str in enumerate(json_file):
-                     result = json.loads(json_str)
-                     response = {
-                         "id": example_counter,
-                         "source_tokens": result["source_tokens"],
-                         "has_bug": result["has_bug"],
-                         "error_location": result["error_location"],
-                         "repair_candidates": [str(x) for x in result["repair_candidates"]],
-                         "bug_kind": result["bug_kind"],
-                         "bug_kind_name": result["bug_kind_name"],
-                         "repair_targets": result["repair_targets"],
-                         "edges": [
-                             [
-                                 {
-                                     "before_index": result["edges"][x][0],
-                                     "after_index": result["edges"][x][1],
-                                     "edge_type": result["edges"][x][2],
-                                     "edge_type_name": result["edges"][x][3],
-                                 }
-                             ]
-                             for x in range(len(result["edges"]))
-                         ],
-                         "provenances": result["provenances"],
-                     }
-                     yield f"{file_idx}_{example_counter}", response
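
With this commit the script-based loader is gone and the Hub serves the dataset from the Parquet shards above, so user code should not need to change. A minimal loading sketch, assuming the dataset id on the Hub is `great_code` (the repository name is not visible in this diff, so treat the id as an assumption):

```python
from datasets import load_dataset

# Dataset id assumed to be "great_code"; substitute the actual repo name if it differs.
ds = load_dataset("great_code", split="validation")

print(ds)  # features and row count should match the deleted card's dataset_info
row = ds[0]
print(row["has_bug"], row["bug_kind_name"], row["error_location"])
```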