---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
language_bcp47:
- en-US
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
- fill-mask
task_ids:
- open-domain-qa
- dialogue-modeling
pretty_name: ConvQuestions
dataset_info:
features:
- name: domain
dtype: string
- name: seed_entity
dtype: string
- name: seed_entity_text
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
sequence: string
- name: answer_texts
sequence: string
splits:
- name: train
num_bytes: 3589880
num_examples: 6720
- name: validation
num_bytes: 1241778
num_examples: 2240
- name: test
num_bytes: 1175656
num_examples: 2240
download_size: 3276017
dataset_size: 6007314
---
# Dataset Card for ConvQuestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de)
- **Repository:** [GitHub](https://github.com/PhilippChr/CONVEX)
- **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262)
- **Leaderboard:** [ConvQuestions leaderboard](https://convex.mpi-inf.mpg.de)
- **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
### Dataset Summary
ConvQuestions is the first realistic benchmark for conversational question answering over
knowledge graphs. It contains 11,200 conversations that can be evaluated over Wikidata,
compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk and covering
five domains: Books, Movies, Soccer, Music, and TV Series.
The questions feature a variety of complex question phenomena like comparisons, aggregations,
compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
fair comparison across diverse methods. The data gathering setup was kept as natural as
possible, with the annotators selecting entities of their choice from each of the five domains,
and formulating the entire conversation in one session. All questions in a conversation are
from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
guidelines were set. A notable property of ConvQuestions is that several questions are not
answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper
(https://dl.acm.org/citation.cfm?id=3358016).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The dataset is in English (`en`; BCP-47 tag `en-US`).
## Dataset Structure
### Data Instances
An example from the `train` split looks as follows.
```
{
  'domain': 'music',
  'seed_entity': 'https://www.wikidata.org/wiki/Q223495',
  'seed_entity_text': 'The Carpenters',
  'questions': [
    'When did The Carpenters sign with A&M Records?',
    'What song was their first hit?',
    'When did Karen die?',
    'Karen had what eating problem?',
    'and how did she die?'
  ],
  'answers': [
    ['1969'],
    ['https://www.wikidata.org/wiki/Q928282'],
    ['1983'],
    ['https://www.wikidata.org/wiki/Q131749'],
    ['https://www.wikidata.org/wiki/Q181754']
  ],
  'answer_texts': [
    '1969',
    '(They Long to Be) Close to You',
    '1983',
    'anorexia nervosa',
    'heart failure'
  ]
}
```
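Records with this structure can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the Hub dataset ID `conv_questions` is an assumption and may need to be adjusted to the actual repository name.
```
# Minimal loading sketch. The dataset ID "conv_questions" is assumed here;
# replace it with the actual Hugging Face Hub ID if it differs.
from datasets import load_dataset

dataset = load_dataset("conv_questions")

# Expected split sizes: train 6720, validation 2240, test 2240 conversations.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect one conversation with the structure shown above.
example = dataset["train"][0]
print(example["domain"], example["seed_entity_text"])
print(example["questions"][0], "->", example["answer_texts"][0])
```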
### Data Fields
- `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series']
- `seed_entity`: a `string` feature. Wikidata URL of the seed (topic) entity.
- `seed_entity_text`: a `string` feature. Surface form of the topic entity.
- `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions).
- `answers`: a `list` of `lists` of `string` features. One list of answers per question, each answer given as a Wikidata entity URL or a literal (e.g. a timestamp or a name).
- `answer_texts`: a `list` of `string` features. List of surface forms of the answers.
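Within one conversation, `questions`, `answers`, and `answer_texts` are parallel sequences of the same length (one entry per turn), so turn-level question-answer pairs can be recovered by zipping them. The following is a minimal sketch, assuming `example` is a single record with the fields above.
```
# Minimal sketch: split one ConvQuestions record into per-turn QA triples.
# `example` is assumed to be a single record with the fields described above.
def conversation_turns(example):
    """Yield (question, wikidata_answers, answer_text) per conversation turn."""
    for question, answers, answer_text in zip(
        example["questions"], example["answers"], example["answer_texts"]
    ):
        yield question, answers, answer_text

# Usage with the record shown under "Data Instances":
# for question, answers, text in conversation_turns(dataset["train"][0]):
#     print(question, "->", text, answers)
```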
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 6720|      2240| 2240|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
Drawing on insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of their choice, since this mirrors the intuitive mental model people have when satisfying real information needs through their search assistants.
#### Who are the annotators?
Local students (Saarland Informatics Campus) and AMT Master Workers.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@InProceedings{christmann2019look,
title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
pages={729--738},
year={2019}
}
```
### Contributions
Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset.