Multilinguality: multilingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: original
Commit ae1a41e by albertvillanova (parent: 5c28537)

Convert dataset to Parquet (#4)

- Convert dataset to Parquet (712090f03553f75a54d5386ef14d6d7adf4942b0)
- Add lv_en data files (7ab93ab0181ba2dae6373c41e6ab34fda21c8c79)
- Add no_en data files (ef78499927f74772356ebd3ee0b458ea98df57e3)
- Add zh_en data files (987b1d2e23b1ebded170b0552bbfdd378d3e9c9c)
- Delete loading script (05101d34bce800d4a1bd3108b9c4538830e42b3f)
- Delete loading script auxiliary file (7952df52903e0f786eff786b2c50924272eaef0e)
- Delete loading script auxiliary file (9133a08fb4fcdd34b13fabad6ac617c1738f767c)

README.md CHANGED
@@ -34,16 +34,16 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 8163215
+    num_bytes: 8163175
     num_examples: 42701
   - name: validation
-    num_bytes: 190340
+    num_bytes: 190332
     num_examples: 1000
   - name: test
-    num_bytes: 190780
+    num_bytes: 190772
     num_examples: 1000
-  download_size: 8007867
-  dataset_size: 8544335
+  download_size: 4322666
+  dataset_size: 8544279
 - config_name: lv_en
   features:
   - name: id
@@ -54,16 +54,16 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 3644127
+    num_bytes: 3644111
     num_examples: 18749
   - name: validation
-    num_bytes: 192519
+    num_bytes: 192511
     num_examples: 1000
   - name: test
-    num_bytes: 190875
+    num_bytes: 190867
     num_examples: 1000
-  download_size: 3778501
-  dataset_size: 4027521
+  download_size: 1997959
+  dataset_size: 4027489
 - config_name: no_en
   features:
   - name: id
@@ -74,16 +74,16 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 8761795
+    num_bytes: 8761755
     num_examples: 44322
   - name: validation
-    num_bytes: 203823
+    num_bytes: 203815
     num_examples: 1000
   - name: test
-    num_bytes: 197135
+    num_bytes: 197127
     num_examples: 1000
-  download_size: 8606833
-  dataset_size: 9162753
+  download_size: 4661188
+  dataset_size: 9162697
 - config_name: zh_en
   features:
   - name: id
@@ -94,16 +94,49 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 9592196
+    num_bytes: 9592148
     num_examples: 50154
   - name: validation
-    num_bytes: 192155
+    num_bytes: 192147
     num_examples: 1000
   - name: test
-    num_bytes: 195245
+    num_bytes: 195237
     num_examples: 1000
-  download_size: 9353684
-  dataset_size: 9979596
+  download_size: 4733144
+  dataset_size: 9979532
+configs:
+- config_name: da_en
+  data_files:
+  - split: train
+    path: da_en/train-*
+  - split: validation
+    path: da_en/validation-*
+  - split: test
+    path: da_en/test-*
+- config_name: lv_en
+  data_files:
+  - split: train
+    path: lv_en/train-*
+  - split: validation
+    path: lv_en/validation-*
+  - split: test
+    path: lv_en/test-*
+- config_name: no_en
+  data_files:
+  - split: train
+    path: no_en/train-*
+  - split: validation
+    path: no_en/validation-*
+  - split: test
+    path: no_en/test-*
+- config_name: zh_en
+  data_files:
+  - split: train
+    path: zh_en/train-*
+  - split: validation
+    path: zh_en/validation-*
+  - split: test
+    path: zh_en/test-*
 ---
 # Dataset Card for "code_x_glue_tt_text_to_text"
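The updated metadata stays internally consistent: for each config, the new per-split `num_bytes` values sum exactly to the new `dataset_size`. A quick check using only the numbers from this commit's README diff:

```python
# Cross-check each config's new dataset_size against the sum of its
# new split num_bytes (train, validation, test), as listed in the diff.
new_sizes = {
    "da_en": ([8163175, 190332, 190772], 8544279),
    "lv_en": ([3644111, 192511, 190867], 4027489),
    "no_en": ([8761755, 203815, 197127], 9162697),
    "zh_en": ([9592148, 192147, 195237], 9979532),
}

for config, (split_bytes, dataset_size) in new_sizes.items():
    assert sum(split_bytes) == dataset_size
    print(config, "OK")
```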
code_x_glue_tt_text_to_text.py DELETED
@@ -1,106 +0,0 @@
-from typing import List
-
-import datasets
-
-from .common import Child
-from .generated_definitions import DEFINITIONS
-
-
-_DESCRIPTION = """The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at https://github.com/MicrosoftDocs/."""
-_CITATION = """@article{DBLP:journals/corr/abs-2102-04664,
-  author    = {Shuai Lu and
-               Daya Guo and
-               Shuo Ren and
-               Junjie Huang and
-               Alexey Svyatkovskiy and
-               Ambrosio Blanco and
-               Colin B. Clement and
-               Dawn Drain and
-               Daxin Jiang and
-               Duyu Tang and
-               Ge Li and
-               Lidong Zhou and
-               Linjun Shou and
-               Long Zhou and
-               Michele Tufano and
-               Ming Gong and
-               Ming Zhou and
-               Nan Duan and
-               Neel Sundaresan and
-               Shao Kun Deng and
-               Shengyu Fu and
-               Shujie Liu},
-  title     = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
-               and Generation},
-  journal   = {CoRR},
-  volume    = {abs/2102.04664},
-  year      = {2021}
-}"""
-
-
-class CodeXGlueTtTextToTextImpl(Child):
-    _DESCRIPTION = _DESCRIPTION
-    _CITATION = _CITATION
-
-    _FEATURES = {
-        "id": datasets.Value("int32"),  # The index of the sample
-        "source": datasets.Value("string"),  # The source language version of the text
-        "target": datasets.Value("string"),  # The target language version of the text
-    }
-
-    _SUPERVISED_KEYS = ["target"]
-
-    KEYS = ["source", "target"]
-
-    SPLITS = {"train": datasets.Split.TRAIN, "dev": datasets.Split.VALIDATION, "test": datasets.Split.TEST}
-
-    def generate_urls(self, split_name):
-        lang_pair = self.info["parameters"]["natural_language_pair"]
-        for i, lang in enumerate(lang_pair.split("-")):
-            yield self.KEYS[i], f"{split_name}/{lang_pair}.{split_name}.{lang}"
-
-    def _generate_examples(self, split_name, file_paths):
-        # Open each file (one for source language and the other for target language)
-        files = {k: open(file_paths[k], encoding="utf-8") for k in file_paths}
-
-        id_ = 0
-        while True:
-            # Read a single line from each file
-            entries = {k: files[k].readline() for k in file_paths}
-
-            empty = self.check_empty(entries)
-            if empty:
-                # We are done: end of files
-                return
-
-            entries["id"] = id_
-            yield id_, entries
-            id_ += 1
-
-
-CLASS_MAPPING = {
-    "CodeXGlueTtTextToText": CodeXGlueTtTextToTextImpl,
-}
-
-
-class CodeXGlueTtTextToText(datasets.GeneratorBasedBuilder):
-    BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-    BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-    ]
-
-    def _info(self):
-        name = self.config.name
-        info = DEFINITIONS[name]
-        if info["class_name"] in CLASS_MAPPING:
-            self.child = CLASS_MAPPING[info["class_name"]](info)
-        else:
-            raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-        ret = self.child._info()
-        return ret
-
-    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-        return self.child._split_generators(dl_manager=dl_manager)
-
-    def _generate_examples(self, split_name, file_paths):
-        return self.child._generate_examples(split_name, file_paths)
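The deleted loading script built each example by reading the source-language and target-language files in lockstep, one line from each per example. A self-contained sketch of that pairing logic, run on two small temporary files rather than the real downloads:

```python
# Pair two parallel text files line by line, as the deleted
# _generate_examples did: stop when both hit EOF together, and
# fail loudly if one file runs out before the other.
import os
import tempfile

def iter_parallel(paths):
    files = {key: open(path, encoding="utf-8") for key, path in paths.items()}
    try:
        id_ = 0
        while True:
            entries = {key: f.readline() for key, f in files.items()}
            empties = [value == "" for value in entries.values()]
            if all(empties):
                return  # both files exhausted together
            if any(empties):
                raise RuntimeError("Parallel data files should have the same number of lines.")
            yield {"id": id_, **{k: v.rstrip("\n") for k, v in entries.items()}}
            id_ += 1
    finally:
        for f in files.values():
            f.close()

# Hypothetical miniature da/en pair for demonstration.
tmp = tempfile.mkdtemp()
paths = {"source": os.path.join(tmp, "da"), "target": os.path.join(tmp, "en")}
with open(paths["source"], "w", encoding="utf-8") as f:
    f.write("hej\nverden\n")
with open(paths["target"], "w", encoding="utf-8") as f:
    f.write("hello\nworld\n")

rows = list(iter_parallel(paths))
print(rows[0])  # {'id': 0, 'source': 'hej', 'target': 'hello'}
```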
common.py DELETED
@@ -1,75 +0,0 @@
-from typing import List
-
-import datasets
-
-
-# Citation, taken from https://github.com/microsoft/CodeXGLUE
-_DEFAULT_CITATION = """@article{CodeXGLUE,
-  title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
-  year={2020},}"""
-
-
-class Child:
-    _DESCRIPTION = None
-    _FEATURES = None
-    _CITATION = None
-    SPLITS = {"train": datasets.Split.TRAIN}
-    _SUPERVISED_KEYS = None
-
-    def __init__(self, info):
-        self.info = info
-
-    def homepage(self):
-        return self.info["project_url"]
-
-    def _info(self):
-        # This is the description that will appear on the datasets page.
-        return datasets.DatasetInfo(
-            description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-            features=datasets.Features(self._FEATURES),
-            homepage=self.homepage(),
-            citation=self._CITATION or _DEFAULT_CITATION,
-            supervised_keys=self._SUPERVISED_KEYS,
-        )
-
-    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-        SPLITS = self.SPLITS
-        _URL = self.info["raw_url"]
-        urls_to_download = {}
-        for split in SPLITS:
-            if split not in urls_to_download:
-                urls_to_download[split] = {}
-
-            for key, url in self.generate_urls(split):
-                if not url.startswith("http"):
-                    url = _URL + "/" + url
-                urls_to_download[split][key] = url
-
-        downloaded_files = {}
-        for k, v in urls_to_download.items():
-            downloaded_files[k] = dl_manager.download(v)
-
-        return [
-            datasets.SplitGenerator(
-                name=SPLITS[k],
-                gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-            )
-            for k in SPLITS
-        ]
-
-    def check_empty(self, entries):
-        all_empty = all([v == "" for v in entries.values()])
-        all_non_empty = all([v != "" for v in entries.values()])
-
-        if not all_non_empty and not all_empty:
-            raise RuntimeError("Parallel data files should have the same number of lines.")
-
-        return all_empty
-
-
-class TrainValidTestChild(Child):
-    SPLITS = {
-        "train": datasets.Split.TRAIN,
-        "valid": datasets.Split.VALIDATION,
-        "test": datasets.Split.TEST,
-    }
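The `check_empty` helper in the deleted common.py enforced the parallel-file invariant: after each read, either every file returned a line, or every file hit EOF; anything in between means the files have different lengths. A standalone copy of that check, exercised on all three cases:

```python
# Standalone version of check_empty from the deleted common.py:
# returns True at joint EOF, False while data remains, and raises
# if the parallel files fall out of sync.
def check_empty(entries):
    all_empty = all(v == "" for v in entries.values())
    all_non_empty = all(v != "" for v in entries.values())
    if not all_non_empty and not all_empty:
        raise RuntimeError("Parallel data files should have the same number of lines.")
    return all_empty

print(check_empty({"source": "", "target": ""}))    # True: both at EOF
print(check_empty({"source": "a", "target": "b"}))  # False: both have data
try:
    check_empty({"source": "a", "target": ""})      # mismatch raises
except RuntimeError as exc:
    print("error:", exc)
```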
da_en/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb6c2ae99827b62791401491ea867e1487697fa4e1d95820995c4df5298accfc
+size 112308

da_en/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc54a8d428e06a6fb7503fefa733a76ae5e484dcd79a70426921e10e9bf2be62
+size 4098744

da_en/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aab0d3599e0f5dc6c13a884294d96dce62bf8d6d98a88967b60463a9cabc0a57
+size 111614
generated_definitions.py DELETED
@@ -1,46 +0,0 @@
-DEFINITIONS = {
-    "da_en": {
-        "class_name": "CodeXGlueTtTextToText",
-        "dataset_type": "Text-Text",
-        "description": "CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "dir_name": "text-to-text",
-        "name": "da_en",
-        "parameters": {"natural_language_pair": "da-en"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Text/text-to-text/data",
-        "sizes": {"test": 1000, "train": 42701, "validation": 1000},
-    },
-    "lv_en": {
-        "class_name": "CodeXGlueTtTextToText",
-        "dataset_type": "Text-Text",
-        "description": "CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "dir_name": "text-to-text",
-        "name": "lv_en",
-        "parameters": {"natural_language_pair": "lv-en"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Text/text-to-text/data",
-        "sizes": {"test": 1000, "train": 18749, "validation": 1000},
-    },
-    "no_en": {
-        "class_name": "CodeXGlueTtTextToText",
-        "dataset_type": "Text-Text",
-        "description": "CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "dir_name": "text-to-text",
-        "name": "no_en",
-        "parameters": {"natural_language_pair": "no-en"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Text/text-to-text/data",
-        "sizes": {"test": 1000, "train": 44322, "validation": 1000},
-    },
-    "zh_en": {
-        "class_name": "CodeXGlueTtTextToText",
-        "dataset_type": "Text-Text",
-        "description": "CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "dir_name": "text-to-text",
-        "name": "zh_en",
-        "parameters": {"natural_language_pair": "zh-en"},
-        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Text-Text/text-to-text",
-        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Text-Text/text-to-text/data",
-        "sizes": {"test": 1000, "train": 50154, "validation": 1000},
-    },
-}
lv_en/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:432a4352f06cec80fa5ecf0161748fc7c4b095d68b4024abf3585c8908d3551a
+size 108334

lv_en/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e50c43fc92f9c9b0dd2edb264c4d6e56587ad251da5da8d8809b1d691aa3a75e
+size 1778029

lv_en/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df6eed9eb31e22fd9bbec2b6b6b50c3d6f0cf2ab516a1d92ddc167e13430a42d
+size 111596

no_en/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:053c50738f9de205755ab11272184333da4629c2e50d964c51eca9fac7e07626
+size 115930

no_en/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2dcb2927167473597845f7f6590a87bfeea239cd86fcbbce5431831ff36522a9
+size 4426109

no_en/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb6f6753deddb36004668fb6fef1b41df7ba787871e0b1a8f41cc64bdc15963b
+size 119149

zh_en/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d7a0c52b0f3f1d5eb9597fc54228395b916f1e88fbac11d6fb0d027815a1eb9
+size 112113

zh_en/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ca272aa3b03ca4d831a4054c358f88b3ca439ac68bc45902a919d144a955788
+size 4510884

zh_en/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ab9e7895ffd0670f871ea78e984bd140b86478bf922c8e89c695a19faf7173e
+size 110147
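The `.parquet` entries added here are Git LFS pointer files, not the data itself: three `key value` lines giving the spec version, the SHA-256 object id, and the byte size of the real file. A small parser applied to the zh_en validation pointer from this commit:

```python
# Parse a Git LFS pointer file (key-value lines) into a dict, using
# the zh_en/validation pointer added by this commit as input.
def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9ab9e7895ffd0670f871ea78e984bd140b86478bf922c8e89c695a19faf7173e
size 110147"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 110147
```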