parquet-converter committed on
Commit: eb6e401
Parent: 325b54c

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,242 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - en
- language_bcp47:
- - en-US
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- - text-generation
- - fill-mask
- task_ids:
- - open-domain-qa
- - dialogue-modeling
- pretty_name: ConvQuestions
- dataset_info:
-   features:
-   - name: domain
-     dtype: string
-   - name: seed_entity
-     dtype: string
-   - name: seed_entity_text
-     dtype: string
-   - name: questions
-     sequence: string
-   - name: answers
-     sequence:
-       sequence: string
-   - name: answer_texts
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 3589880
-     num_examples: 6720
-   - name: validation
-     num_bytes: 1241778
-     num_examples: 2240
-   - name: test
-     num_bytes: 1175656
-     num_examples: 2240
-   download_size: 3276017
-   dataset_size: 6007314
- ---
-
- # Dataset Card for ConvQuestions
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de)
- - **Repository:** [GitHub](https://github.com/PhilippChr/CONVEX)
- - **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262)
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
-
- ### Dataset Summary
-
- ConvQuestions is the first realistic benchmark for conversational question answering over
- knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.
- They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,
- with conversations from five domains: Books, Movies, Soccer, Music, and TV Series.
- The questions feature a variety of complex question phenomena like comparisons, aggregations,
- compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
- fair comparison across diverse methods. The data gathering setup was kept as natural as
- possible, with the annotators selecting entities of their choice from each of the five domains,
- and formulating the entire conversation in one session. All questions in a conversation are
- from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
- graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
- guidelines were set. A notable property of ConvQuestions is that several questions are not
- answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
- be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper
- (https://dl.acm.org/citation.cfm?id=3358016).
-
- ### Supported Tasks and Leaderboards
-
- [Needs More Information]
-
- ### Languages
-
- English (`en-US`)
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example of 'train' looks as follows.
- ```
- {
-     'domain': 'music',
-     'seed_entity': 'https://www.wikidata.org/wiki/Q223495',
-     'seed_entity_text': 'The Carpenters',
-     'questions': [
-         'When did The Carpenters sign with A&M Records?',
-         'What song was their first hit?',
-         'When did Karen die?',
-         'Karen had what eating problem?',
-         'and how did she die?'
-     ],
-     'answers': [
-         ['1969'],
-         ['https://www.wikidata.org/wiki/Q928282'],
-         ['1983'],
-         ['https://www.wikidata.org/wiki/Q131749'],
-         ['https://www.wikidata.org/wiki/Q181754']
-     ],
-     'answer_texts': [
-         '1969',
-         '(They Long to Be) Close to You',
-         '1983',
-         'anorexia nervosa',
-         'heart failure'
-     ]
- }
- ```
-
- ### Data Fields
-
- - `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series']
- - `seed_entity`: a `string` feature. Wikidata ID of the topic entity.
- - `seed_entity_text`: a `string` feature. Surface form of the topic entity.
- - `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions).
- - `answers`: a `list` of `lists` of `string` features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names).
- - `answer_texts`: a `list` of `string` features. List of surface forms of the answers.
-
- ### Data Splits
-
- |train|validation|test|
- |----:|---------:|---:|
- | 6720|      2240|2240|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [Needs More Information]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [Needs More Information]
-
- #### Who are the source language producers?
-
- [Needs More Information]
-
- ### Annotations
-
- #### Annotation process
-
- With insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of their choice, as this matches the intuitive mental model that humans may have when satisfying real information needs via their search assistants.
-
- #### Who are the annotators?
-
- Local students (Saarland Informatics Campus) and AMT Master Workers.
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.
-
- ### Citation Information
-
- ```
- @InProceedings{christmann2019look,
-   title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
-   author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
-   booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
-   pages={729--738},
-   year={2019}
- }
- ```
-
- ### Contributions
-
- Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset.
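
The card's example instance implies a per-conversation invariant: `questions`, `answers`, and `answer_texts` are parallel lists with one entry per dialogue turn, and each turn's answers are themselves a list because a turn may have several gold answers. A minimal sketch checking that invariant on the example above (plain Python, no `datasets` dependency):

```python
# The example conversation from the card above, as a plain dict.
example = {
    "domain": "music",
    "seed_entity": "https://www.wikidata.org/wiki/Q223495",
    "seed_entity_text": "The Carpenters",
    "questions": [
        "When did The Carpenters sign with A&M Records?",
        "What song was their first hit?",
        "When did Karen die?",
        "Karen had what eating problem?",
        "and how did she die?",
    ],
    "answers": [
        ["1969"],
        ["https://www.wikidata.org/wiki/Q928282"],
        ["1983"],
        ["https://www.wikidata.org/wiki/Q131749"],
        ["https://www.wikidata.org/wiki/Q181754"],
    ],
    "answer_texts": [
        "1969",
        "(They Long to Be) Close to You",
        "1983",
        "anorexia nervosa",
        "heart failure",
    ],
}

# The three fields are parallel: one entry per turn (five turns here).
assert len(example["questions"]) == len(example["answers"]) == len(example["answer_texts"]) == 5
# Per-turn answers are lists of Wikidata IDs or literals.
assert all(isinstance(turn_answers, list) for turn_answers in example["answers"])
```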
conv_questions.py DELETED
@@ -1,154 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """
- ConvQuestions is the first realistic benchmark for conversational question answering over
- knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.
- They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,
- with conversations from five domains: Books, Movies, Soccer, Music, and TV Series.
- The questions feature a variety of complex question phenomena like comparisons, aggregations,
- compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
- fair comparison across diverse methods. The data gathering setup was kept as natural as
- possible, with the annotators selecting entities of their choice from each of the five domains,
- and formulating the entire conversation in one session. All questions in a conversation are
- from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
- graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
- guidelines were set. A notable property of ConvQuestions is that several questions are not
- answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
- be found in the open Web or in Wikipedia. For details, please refer to our CIKM 2019 full paper
- (https://dl.acm.org/citation.cfm?id=3358016).
- """
-
-
- import json
- import os
-
- import datasets
-
-
- # Find for instance the citation on arxiv or on the dataset repo/website
- _CITATION = """\
- @InProceedings{christmann2019look,
-   title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
-   author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
-   booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
-   pages={729--738},
-   year={2019}
- }
- """
-
- # You can copy an official description
- _DESCRIPTION = """\
- ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs.
- It contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex
- question phenomena like comparisons, aggregations, compositionality, and temporal reasoning."""
-
- _HOMEPAGE = "https://convex.mpi-inf.mpg.de"
-
- _LICENSE = "CC BY 4.0"
-
- # The HuggingFace datasets library doesn't host the datasets but only points to the original files
- # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method)
- _URL = "http://qa.mpi-inf.mpg.de/convex/"
- _URLs = {
-     "train": _URL + "ConvQuestions_train.zip",
-     "dev": _URL + "ConvQuestions_dev.zip",
-     "test": _URL + "ConvQuestions_test.zip",
- }
-
-
- class ConvQuestions(datasets.GeneratorBasedBuilder):
-     """ConvQuestions is a realistic benchmark for conversational question answering over knowledge graphs."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         # This method specifies the datasets.DatasetInfo object, which contains the information and typings for the dataset
-         features = datasets.Features(
-             {
-                 "domain": datasets.Value("string"),
-                 "seed_entity": datasets.Value("string"),
-                 "seed_entity_text": datasets.Value("string"),
-                 "questions": datasets.features.Sequence(datasets.Value("string")),
-                 "answers": datasets.features.Sequence(datasets.features.Sequence(datasets.Value("string"))),
-                 "answer_texts": datasets.features.Sequence(datasets.Value("string")),
-             }
-         )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
-
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
-         # It can accept any type or nested list/dict and will give back the same structure with the urls replaced with paths to local files.
-         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-         data_dir = dl_manager.download_and_extract(_URLs)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir["train"], "train_set/train_set_ALL.json"),
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir["dev"], "dev_set/dev_set_ALL.json"),
-                     "split": "dev",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir["test"], "test_set/test_set_ALL.json"), "split": "test"},
-             ),
-         ]
-
-     def _generate_examples(
-         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     ):
-         """Yields examples as (key, example) tuples."""
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is here for legacy reasons (tfds) and is not important in itself.
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for id_, instance in enumerate(data):
-                 yield id_, {
-                     "domain": instance["domain"],
-                     "seed_entity": instance["seed_entity"],
-                     "seed_entity_text": instance["seed_entity_text"],
-                     "questions": [turn["question"] for turn in instance["questions"]],
-                     "answers": [turn["answer"].split(";") for turn in instance["questions"]],
-                     "answer_texts": [turn["answer_text"] for turn in instance["questions"]],
-                 }
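
The per-conversation transformation in `_generate_examples` can be exercised in isolation. The raw record below is invented for illustration, but its turn-level field names (`question`, `answer`, `answer_text`) match what the script reads, and the `;`-split mirrors how the script expands a turn with multiple gold answers:

```python
# Invented raw record in the shape of the JSON the script consumes.
raw = {
    "domain": "music",
    "seed_entity": "https://www.wikidata.org/wiki/Q223495",
    "seed_entity_text": "The Carpenters",
    "questions": [
        {"question": "When did Karen die?", "answer": "1983", "answer_text": "1983"},
        # A ';'-separated answer string encodes multiple gold answers for one turn.
        {"question": "Example follow-up turn", "answer": "A;B", "answer_text": "A"},
    ],
}


def to_example(instance):
    """Mirror of the columnar record yielded by _generate_examples."""
    return {
        "domain": instance["domain"],
        "seed_entity": instance["seed_entity"],
        "seed_entity_text": instance["seed_entity_text"],
        "questions": [turn["question"] for turn in instance["questions"]],
        # Each turn's answer string becomes a list of answers.
        "answers": [turn["answer"].split(";") for turn in instance["questions"]],
        "answer_texts": [turn["answer_text"] for turn in instance["questions"]],
    }


example = to_example(raw)
assert example["answers"] == [["1983"], ["A", "B"]]
```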
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs.\nIt contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex\nquestion phenomena like comparisons, aggregations, compositionality, and temporal reasoning.", "citation": "@InProceedings{christmann2019look,\n title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},\n author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},\n booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},\n pages={729--738},\n year={2019}\n}\n", "homepage": "https://convex.mpi-inf.mpg.de", "license": "CC BY 4.0", "features": {"domain": {"dtype": "string", "id": null, "_type": "Value"}, "seed_entity": {"dtype": "string", "id": null, "_type": "Value"}, "seed_entity_text": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "answer_texts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "conv_questions", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3589880, "num_examples": 6720, "dataset_name": "conv_questions"}, "validation": {"name": "validation", "num_bytes": 1241778, "num_examples": 2240, "dataset_name": "conv_questions"}, "test": {"name": "test", "num_bytes": 1175656, "num_examples": 2240, "dataset_name": "conv_questions"}}, "download_checksums": {"http://qa.mpi-inf.mpg.de/convex/ConvQuestions_train.zip": {"num_bytes": 2139687, "checksum": "093b7ea4106501035e5954213fda6111d0e4747011e8efa558765f2a9705d651"}, "http://qa.mpi-inf.mpg.de/convex/ConvQuestions_dev.zip": {"num_bytes": 594329, "checksum": "91faf376a5f702734c78033e2f357c507291cc3c85d9fda39e65c366f0abc7fd"}, "http://qa.mpi-inf.mpg.de/convex/ConvQuestions_test.zip": {"num_bytes": 542001, "checksum": "698e2a1761b9a0bff6490ccc735df8a1be9b85a7bbd8ed451a1b81ff5a1df28d"}}, "download_size": 3276017, "post_processing_size": null, "dataset_size": 6007314, "size_in_bytes": 9283331}}
default/conv_questions-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74eaf7a81f5841edf75ae11287698fadec8e0eba1a0a827f21d1d88fbcb77994
+ size 113711
default/conv_questions-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f89db1f3c0640989ec1d1816e9dac1b6e8c30b3d380a3de4e38921e26e00f5a
+ size 584984
default/conv_questions-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e10eb5564f0b15a750726a6a0c4452dc801759d2b96916380e34eedbfc8a2ab
+ size 121166