system HF staff committed on
Commit 1ac0205 (0 parents)

Update files from the datasets library (from 1.8.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.8.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,162 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- code
licenses:
- other-C-UDA
multilinguality:
- other-programming-languages
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for "code_x_glue_cc_code_to_code_trans"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans

### Dataset Summary

CodeXGLUE code-to-code-trans dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans

The dataset is collected from several public repos, including Lucene (http://lucene.apache.org/), POI (http://poi.apache.org/), JGit (https://github.com/eclipse/jgit/) and Antlr (https://github.com/antlr/).
We collect both the Java and C# versions of the code and find the parallel functions. After removing duplicates and functions with an empty body, we split the whole dataset into training, validation and test sets.

### Supported Tasks and Leaderboards

- `machine-translation`: The dataset can be used to train a model for translating code from Java to C# and vice versa (see the sketch below).

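A minimal loading sketch, assuming the `datasets` library is installed (the split and index below are arbitrary):

```python
from datasets import load_dataset

# Load all splits of the dataset from the Hugging Face Hub.
ds = load_dataset("code_x_glue_cc_code_to_code_trans")

# Inspect one parallel Java/C# pair from the validation split.
sample = ds["validation"][0]
print(sample["java"])
print(sample["cs"])
```
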
62
+ ### Languages
63
+
64
+ - Java **programming** language
65
+ - C# **programming** language
66
+
67
+ ## Dataset Structure
68
+
69
+ ### Data Instances
70
+
71
+ An example of 'validation' looks as follows.
72
+ ```
73
+ {
74
+ "cs": "public DVRecord(RecordInputStream in1){_option_flags = in1.ReadInt();_promptTitle = ReadUnicodeString(in1);_errorTitle = ReadUnicodeString(in1);_promptText = ReadUnicodeString(in1);_errorText = ReadUnicodeString(in1);int field_size_first_formula = in1.ReadUShort();_not_used_1 = in1.ReadShort();_formula1 = NPOI.SS.Formula.Formula.Read(field_size_first_formula, in1);int field_size_sec_formula = in1.ReadUShort();_not_used_2 = in1.ReadShort();_formula2 = NPOI.SS.Formula.Formula.Read(field_size_sec_formula, in1);_regions = new CellRangeAddressList(in1);}\n",
75
+ "id": 0,
76
+ "java": "public DVRecord(RecordInputStream in) {_option_flags = in.readInt();_promptTitle = readUnicodeString(in);_errorTitle = readUnicodeString(in);_promptText = readUnicodeString(in);_errorText = readUnicodeString(in);int field_size_first_formula = in.readUShort();_not_used_1 = in.readShort();_formula1 = Formula.read(field_size_first_formula, in);int field_size_sec_formula = in.readUShort();_not_used_2 = in.readShort();_formula2 = Formula.read(field_size_sec_formula, in);_regions = new CellRangeAddressList(in);}\n"
77
+ }
78
+ ```
79
+
80
+ ### Data Fields
81
+
82
+ In the following each data field in go is explained for each config. The data fields are the same among all splits.
83
+
84
+ #### default
85
+
86
+ |field name| type | description |
87
+ |----------|------|-----------------------------|
88
+ |id |int32 | Index of the sample |
89
+ |java |string| The java version of the code|
90
+ |cs |string| The C# version of the code |
91
+
92
+ ### Data Splits
93
+
94
+ | name |train|validation|test|
95
+ |-------|----:|---------:|---:|
96
+ |default|10300| 500|1000|
97
+
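For the `machine-translation` task listed above, a common first step is to flatten each example into a source/target pair. A minimal sketch, assuming a Java-to-C# direction and hypothetical `source`/`target` column names:

```python
from datasets import load_dataset

ds = load_dataset("code_x_glue_cc_code_to_code_trans", split="train")

def to_pair(example):
    # Treat Java as the source language and C# as the target language.
    return {"source": example["java"], "target": example["cs"]}

# Keep only the flattened source/target columns.
pairs = ds.map(to_pair, remove_columns=["id", "java", "cs"])
print(pairs[0]["source"][:80])
print(pairs[0]["target"][:80])
```
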
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@article{CodeXGLUE,
  title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
  year={2020},
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
code_x_glue_cc_code_to_code_trans.py ADDED
@@ -0,0 +1,101 @@
from typing import List

import datasets

from .common import TrainValidTestChild
from .generated_definitions import DEFINITIONS


_DESCRIPTION = """The dataset is collected from several public repos, including Lucene(http://lucene.apache.org/), POI(http://poi.apache.org/), JGit(https://github.com/eclipse/jgit/) and Antlr(https://github.com/antlr/).
We collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets."""
_CITATION = """@article{DBLP:journals/corr/abs-2102-04664,
    author = {Shuai Lu and
              Daya Guo and
              Shuo Ren and
              Junjie Huang and
              Alexey Svyatkovskiy and
              Ambrosio Blanco and
              Colin B. Clement and
              Dawn Drain and
              Daxin Jiang and
              Duyu Tang and
              Ge Li and
              Lidong Zhou and
              Linjun Shou and
              Long Zhou and
              Michele Tufano and
              Ming Gong and
              Ming Zhou and
              Nan Duan and
              Neel Sundaresan and
              Shao Kun Deng and
              Shengyu Fu and
              Shujie Liu},
    title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
             and Generation},
    journal = {CoRR},
    volume = {abs/2102.04664},
    year = {2021}
}"""


class CodeXGlueCcCodeToCodeTransImpl(TrainValidTestChild):
    _DESCRIPTION = _DESCRIPTION
    _CITATION = _CITATION

    _FEATURES = {
        "id": datasets.Value("int32"),  # Index of the sample
        "java": datasets.Value("string"),  # The java version of the code
        "cs": datasets.Value("string"),  # The C# version of the code
    }

    def generate_urls(self, split_name):
        for key in "cs", "java":
            yield key, f"{split_name}.java-cs.txt.{key}"

    def _generate_examples(self, split_name, file_paths):
        """This function returns the examples in the raw (text) form."""
        # Open each file (one for java, and one for c#)
        files = {k: open(file_paths[k], encoding="utf-8") for k in file_paths}

        id_ = 0
        while True:
            # Read a single line from each file
            entries = {k: files[k].readline() for k in file_paths}

            empty = self.check_empty(entries)
            if empty:
                # We are done: end of files
                return

            entries["id"] = id_
            yield id_, entries
            id_ += 1


CLASS_MAPPING = {
    "CodeXGlueCcCodeToCodeTrans": CodeXGlueCcCodeToCodeTransImpl,
}


class CodeXGlueCcCodeToCodeTrans(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = datasets.BuilderConfig
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
    ]

    def _info(self):
        name = self.config.name
        info = DEFINITIONS[name]
        if info["class_name"] in CLASS_MAPPING:
            self.child = CLASS_MAPPING[info["class_name"]](info)
        else:
            raise RuntimeError(f"Unknown python class for dataset configuration {name}")
        ret = self.child._info()
        return ret

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        return self.child._split_generators(dl_manager=dl_manager)

    def _generate_examples(self, split_name, file_paths):
        return self.child._generate_examples(split_name, file_paths)
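The generator above reads one line at a time from the Java file and the C# file and yields them as one parallel pair. A minimal standalone sketch of the same pairing logic, using `zip()` instead of the `readline` loop; the file names are placeholders and not part of this repository:

```python
# Standalone sketch of the pairing done in _generate_examples (assumption:
# two local text files with one function per line, in the same order).
def iter_pairs(java_path, cs_path):
    with open(java_path, encoding="utf-8") as f_java, open(cs_path, encoding="utf-8") as f_cs:
        for id_, (java_line, cs_line) in enumerate(zip(f_java, f_cs)):
            yield id_, {"id": id_, "java": java_line, "cs": cs_line}


for id_, example in iter_pairs("train.java-cs.txt.java", "train.java-cs.txt.cs"):
    print(example["java"].strip())
    break
```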
common.py ADDED
@@ -0,0 +1,75 @@
from typing import List

import datasets


# Citation, taken from https://github.com/microsoft/CodeXGLUE
_DEFAULT_CITATION = """@article{CodeXGLUE,
title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
year={2020},}"""


class Child:
    _DESCRIPTION = None
    _FEATURES = None
    _CITATION = None
    SPLITS = {"train": datasets.Split.TRAIN}
    _SUPERVISED_KEYS = None

    def __init__(self, info):
        self.info = info

    def homepage(self):
        return self.info["project_url"]

    def _info(self):
        # This is the description that will appear on the datasets page.
        return datasets.DatasetInfo(
            description=self.info["description"] + "\n\n" + self._DESCRIPTION,
            features=datasets.Features(self._FEATURES),
            homepage=self.homepage(),
            citation=self._CITATION or _DEFAULT_CITATION,
            supervised_keys=self._SUPERVISED_KEYS,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        SPLITS = self.SPLITS
        _URL = self.info["raw_url"]
        urls_to_download = {}
        for split in SPLITS:
            if split not in urls_to_download:
                urls_to_download[split] = {}

            for key, url in self.generate_urls(split):
                if not url.startswith("http"):
                    url = _URL + "/" + url
                urls_to_download[split][key] = url

        downloaded_files = {}
        for k, v in urls_to_download.items():
            downloaded_files[k] = dl_manager.download_and_extract(v)

        return [
            datasets.SplitGenerator(
                name=SPLITS[k],
                gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
            )
            for k in SPLITS
        ]

    def check_empty(self, entries):
        all_empty = all([v == "" for v in entries.values()])
        all_non_empty = all([v != "" for v in entries.values()])

        if not all_non_empty and not all_empty:
            raise RuntimeError("Parallel data files should have the same number of lines.")

        return all_empty


class TrainValidTestChild(Child):
    SPLITS = {
        "train": datasets.Split.TRAIN,
        "valid": datasets.Split.VALIDATION,
        "test": datasets.Split.TEST,
    }
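`Child._split_generators` expands each relative name yielded by `generate_urls` into a full download URL by prefixing the config's `raw_url`. A small illustration using the split names above and the `raw_url` defined for this dataset in `generated_definitions.py` (further down in this commit):

```python
# Illustration of the URL expansion performed in Child._split_generators.
raw_url = "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data"

for split in ("train", "valid", "test"):
    for key in ("cs", "java"):
        relative = f"{split}.java-cs.txt.{key}"  # what generate_urls yields
        print(key, raw_url + "/" + relative)
```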
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "CodeXGLUE code-to-code-trans dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans\n\nThe dataset is collected from several public repos, including Lucene(http://lucene.apache.org/), POI(http://poi.apache.org/), JGit(https://github.com/eclipse/jgit/) and Antlr(https://github.com/antlr/).\n We collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets.", "citation": "@article{DBLP:journals/corr/abs-2102-04664,\n author = {Shuai Lu and\n Daya Guo and\n Shuo Ren and\n Junjie Huang and\n Alexey Svyatkovskiy and\n Ambrosio Blanco and\n Colin B. Clement and\n Dawn Drain and\n Daxin Jiang and\n Duyu Tang and\n Ge Li and\n Lidong Zhou and\n Linjun Shou and\n Long Zhou and\n Michele Tufano and\n Ming Gong and\n Ming Zhou and\n Nan Duan and\n Neel Sundaresan and\n Shao Kun Deng and\n Shengyu Fu and\n Shujie Liu},\n title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding\n and Generation},\n journal = {CoRR},\n volume = {abs/2102.04664},\n year = {2021}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-to-code-trans", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "java": {"dtype": "string", "id": null, "_type": "Value"}, "cs": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "code_x_glue_cc_code_to_code_trans", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4372657, "num_examples": 10300, "dataset_name": "code_x_glue_cc_code_to_code_trans"}, "validation": {"name": "validation", "num_bytes": 226415, "num_examples": 500, "dataset_name": "code_x_glue_cc_code_to_code_trans"}, "test": {"name": "test", "num_bytes": 418595, "num_examples": 1000, "dataset_name": "code_x_glue_cc_code_to_code_trans"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/train.java-cs.txt.cs": {"num_bytes": 2387613, "checksum": "8f9e154e38b17cf19840a44c50a00b6fa16397336c302e3cf514b29ddfafa0e9"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/train.java-cs.txt.java": {"num_bytes": 1861428, "checksum": "3d2ba1a8f5de30688663ce76bf9b061574d330fc54eb08c4b7eccda74f42be67"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/valid.java-cs.txt.cs": {"num_bytes": 124022, "checksum": "687c61db799e9e3369a0822184ba67bb5b007c48025f25d44084cc6f525ce4ea"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/valid.java-cs.txt.java": {"num_bytes": 96385, "checksum": "aed88f2a31af5b6367100bfbca6d9c4888fa63685502b21db817d8b0f0ad5272"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/test.java-cs.txt.cs": {"num_bytes": 229147, "checksum": "4137527f96c898372e368c75deb3ec8c17c1187ac5a1ae641da1df65e143cd2d"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data/test.java-cs.txt.java": {"num_bytes": 177440, "checksum": "cad0fb08ae59443baeeb1f58de3af83786358dac8ce3a81fd026708ca1b9b2ee"}}, "download_size": 4876035, "post_processing_size": null, "dataset_size": 5017667, "size_in_bytes": 9893702}}
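The split metadata recorded above can be read back with the standard library; a minimal sketch, assuming a local copy of this `dataset_infos.json`:

```python
import json

# Print the number of examples recorded for each split of the "default" config.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for split, meta in infos["default"]["splits"].items():
    print(split, meta["num_examples"])
```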
dummy/default/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e127b28cef124f2df8aa8253b050a3fe70799a47283593c59c18f145dae5f33
size 2596
generated_definitions.py ADDED
@@ -0,0 +1,12 @@
DEFINITIONS = {
    "default": {
        "class_name": "CodeXGlueCcCodeToCodeTrans",
        "dataset_type": "Code-Code",
        "description": "CodeXGLUE code-to-code-trans dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans",
        "dir_name": "code-to-code-trans",
        "name": "default",
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-to-code-trans",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-to-code-trans/data",
        "sizes": {"test": 1000, "train": 10300, "validation": 500},
    }
}
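For reference, a minimal sketch (assuming it is run from a local checkout of this repository) of how the loading script looks up this configuration, mirroring `CodeXGlueCcCodeToCodeTrans._info` above:

```python
# Run from a local checkout of this dataset repository (assumption), so that
# generated_definitions.py is importable as a plain module.
from generated_definitions import DEFINITIONS

info = DEFINITIONS["default"]
print(info["class_name"])  # resolved via CLASS_MAPPING in the loading script
print(info["raw_url"])     # base URL used by Child._split_generators
print(info["sizes"])       # expected number of examples per split
```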