system HF staff committed on
Commit 864f062 · 0 parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +157 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. kelm.py +78 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-2-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ task_ids:
+ - other-other-data-to-text-generation
+ ---
+
+ # Dataset Card for Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/google-research-datasets/KELM-corpus
+ - **Repository:** https://github.com/google-research-datasets/KELM-corpus
+ - **Paper:** https://arxiv.org/abs/2010.12688
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Data-to-text generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into
+ natural language sentences. This dataset consists of English KG data converted into paired natural language text.
+ The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations.
+
+ ### Supported Tasks and Leaderboards
+
+ The intended task is data-to-text generation: taking in a knowledge graph tuple and generating a natural language
+ representation of it. Specifically, the data is in the format the authors used to train a seq2seq language model,
+ with the tuples concatenated into a single sequence.
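+
+ For concreteness, here is a minimal usage sketch (assuming the standard Hugging Face `datasets` API; the field
+ names come from the loading script included below), treating each `triple` as the source sequence and its
+ `sentence` as the target:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the three pre-defined splits (train/validation/test).
+ kelm = load_dataset("kelm")
+
+ # Each example pairs a linearized KG triple with a natural language sentence,
+ # i.e. a ready-made (source, target) pair for seq2seq training.
+ example = kelm["train"][0]
+ source, target = example["triple"], example["sentence"]
+ ```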
+
+ ### Languages
+
+ The dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance consists of one KG triple paired with its corresponding natural language sentence.
+
+ ### Data Fields
+
+ - `triple`: Wikipedia triples of the form `<subject> <relation> <object>` where some subjects have multiple
+   relations, e.g. `<subject> <relation1> <object1> <relation2> <object2> <relation3> <object3>`. For more details on
+   how these relations are grouped, please refer to the paper.
+ - `sentence`: The corresponding Wikipedia sentence.
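+
+ For illustration only, an instance has roughly the following shape (the values below are invented, not drawn
+ from the corpus):
+
+ ```json
+ {
+   "triple": "<Jane Example> <occupation> <astronomer> <date of birth> <1950>",
+   "sentence": "Jane Example, born in 1950, is an astronomer."
+ }
+ ```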
+
+ ### Data Splits
+
+ The dataset includes a pre-determined train, validation, and test split. Per the accompanying
+ `dataset_infos.json`, these contain 6,371,131, 796,471, and 796,493 examples respectively.
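+
+ As a quick sanity check (again assuming the Hub id `kelm`), the split sizes can be confirmed programmatically:
+
+ ```python
+ from datasets import load_dataset
+
+ # Builds the DatasetDict from the three TSV files.
+ kelm = load_dataset("kelm")
+ for split_name, split in kelm.items():
+     print(split_name, len(split))  # expected: train 6371131, validation 796471, test 796493
+ ```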
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The goal of the dataset's curation, and of the associated modeling work discussed in the paper, is to generate
+ natural language text from a knowledge graph.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ The data is sourced from English Wikipedia and its associated knowledge graph.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ From the paper:
+
+ > Wikipedia has documented ideological, gender, and racial biases in its text. While the KELM corpus may still
+ > contain some of these biases, certain types of biases may be reduced.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This dataset has been released under the [CC BY-SA 2.0 license](https://creativecommons.org/licenses/by-sa/2.0/).
+
+ ### Citation Information
+
+ ```
+ @misc{agarwal2020large,
+       title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training},
+       author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou},
+       year={2020},
+       eprint={2010.12688},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into\na natural language sentence(s). This dataset consists of English KG data converted into paired natural language text.\nThe generated corpus consists of \u223c18M sentences spanning \u223c45M triples with \u223c1500 distinct relations.\n", "citation": "@misc{agarwal2020large,\n title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training},\n author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou},\n year={2020},\n eprint={2010.12688},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/google-research-datasets/KELM-corpus", "license": "", "features": {"triple": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kelm", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1343187306, "num_examples": 6371131, "dataset_name": "kelm"}, "validation": {"name": "validation", "num_bytes": 167790917, "num_examples": 796471, "dataset_name": "kelm"}, "test": {"name": "test", "num_bytes": 167921750, "num_examples": 796493, "dataset_name": "kelm"}}, "download_checksums": {"https://storage.googleapis.com/gresearch/kelm-corpus/quadruples-train.tsv": {"num_bytes": 1305075939, "checksum": "55c1d4e1beaccda979fc7193e192bc48af05bf3357bd7c14b93ba750fca91c55"}, "https://storage.googleapis.com/gresearch/kelm-corpus/quadruples-validation.tsv": {"num_bytes": 163026560, "checksum": "802c26a7856b16f09e5380e54f115bee66d83539cc2f41bb39fbf651b99a31ed"}, "https://storage.googleapis.com/gresearch/kelm-corpus/quadruples-test.tsv": {"num_bytes": 163157370, "checksum": "d41be1cb6feed48d938136b8783ba67a9bfc262fc0614df6208937381de11e36"}}, "download_size": 1631259869, "post_processing_size": null, "dataset_size": 1678899973, "size_in_bytes": 3310159842}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2068588539a4f366e8aa40fe80791e20e8fa75270f9fc91ed12f59c3018bd719
+ size 2349
kelm.py ADDED
@@ -0,0 +1,78 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into
+ a natural language sentence(s). This dataset consists of English KG data converted into paired natural language text.
+ The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations.
+ """
+
+ _CITATION = """\
+ @misc{agarwal2020large,
+       title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training},
+       author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou},
+       year={2020},
+       eprint={2010.12688},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ """
+
+ _DOWNLOAD_URL = "https://storage.googleapis.com/gresearch/kelm-corpus/quadruples-{}.tsv"
+ _WEBPAGE = "https://github.com/google-research-datasets/KELM-corpus"
+
+
+ class KELM(datasets.GeneratorBasedBuilder):
+     """Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)"""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "triple": datasets.Value("string"),
+                     "sentence": datasets.Value("string"),
+                 }
+             ),
+             homepage=_WEBPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Each split ships as a separate TSV file on GCS.
+         train_path = dl_manager.download_and_extract(_DOWNLOAD_URL.format("train"))
+         validation_path = dl_manager.download_and_extract(_DOWNLOAD_URL.format("validation"))
+         test_path = dl_manager.download_and_extract(_DOWNLOAD_URL.format("test"))
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": validation_path}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         # fieldnames is passed explicitly, so every TSV row is read as a
+         # (triple, sentence) data pair rather than treating the first row as a header.
+         with open(filepath, "r", encoding="utf-8") as csv_file:
+             csv_reader = csv.DictReader(csv_file, delimiter="\t", fieldnames=["triple", "sentence"])
+             for irow, row in enumerate(csv_reader):
+                 yield irow, row