---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: qa-srl
pretty_name: QA-SRL
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: sent_id
    dtype: string
  - name: predicate_idx
    dtype: int32
  - name: predicate
    dtype: string
  - name: question
    sequence: string
  - name: answers
    sequence: string
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 1835549
    num_examples: 6414
  - name: validation
    num_bytes: 632992
    num_examples: 2183
  - name: test
    num_bytes: 637317
    num_examples: 2201
  download_size: 1087729
  dataset_size: 3105858
---

# Dataset Card for QA-SRL

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Homepage](https://dada.cs.washington.edu/qasrl/#page-top)
- **Annotation Tool:** [Annotation tool](https://github.com/luheng/qasrl_annotation)
- **Repository:** [Repository](https://dada.cs.washington.edu/qasrl/#dataset)
- **Paper:** [Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language](https://www.aclweb.org/anthology/D15-1076.pdf)
- **Point of Contact:** [Luheng He](mailto:luheng@cs.washington.edu)


### Dataset Summary

We model the predicate-argument structure of a sentence with a set of question-answer pairs. Our method allows practical large-scale annotation of training data. We focus on semantic rather than syntactic annotation, and introduce a scalable method for gathering data that supports both training and evaluation.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances


We use question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate from the sentence; the answers are phrases in the sentence. For example:

`UCD finished the 2006 championship as Dublin champions , by beating St Vincents in the final .`

| Predicate | Question | Answer |
|---|---|---|
| finished | Who finished something? | UCD |
| finished | What did someone finish? | the 2006 championship |
| finished | What did someone finish something as? | Dublin champions |
| finished | How did someone finish something? | by beating St Vincents in the final |
| beating | Who beat someone? | UCD |
| beating | When did someone beat someone? | in the final |
| beating | Who did someone beat? | St Vincents |
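
For programmatic access, here is a minimal sketch using the Hugging Face `datasets` library. The Hub identifier `qa_srl` is an assumption based on this card's name, not something the card confirms:

```python
# Minimal sketch: load the dataset and inspect one example.
# Assumes the Hub id is "qa_srl" (an assumption, not confirmed by this card).
from datasets import load_dataset

ds = load_dataset("qa_srl")           # splits: train / validation / test
example = ds["train"][0]

print(example["sentence"])            # tokenized source sentence
print(example["predicate"], "at index", example["predicate_idx"])
print(" ".join(example["question"]))  # question slots; "_" marks an empty slot
print(example["answers"])             # one or more answer phrases
```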

### Data Fields

Annotations provided are as follows:

- `sentence`: the tokenized sentence
- `sent_id`: the sentence identifier
- `predicate_idx`: the index of the predicate (its position in the sentence)
- `predicate`: the predicate token
- `question`: the question as a list of tokens; a question always consists of seven slots, as defined in the paper, with empty slots marked by "_" and a question mark at the end (see the sketch after this list)
- `answers`: the list of answer phrases for the question
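
As a small illustration, the slot encoding can be turned into a readable question by dropping the "_" placeholders. `render_question` below is a hypothetical helper, not part of the dataset, and the example slot values are illustrative:

```python
# Hypothetical helper (not part of the dataset): render the slot-encoded
# question as a plain string by skipping the "_" placeholder slots.
def render_question(slots):
    return " ".join(tok for tok in slots if tok != "_")

# Illustrative slot values in the style described above:
print(render_question(["Who", "_", "_", "finished", "something", "_", "_", "?"]))
# -> "Who finished something ?"
```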
               

### Data Splits

| Dataset | Sentences | Verbs | QAs |
|---|---|---|---|
| **newswire-train** | 744 | 2020 | 4904 |
| **newswire-dev** | 249 | 664 | 1606 |
| **newswire-test** | 248 | 652 | 1599 |
| **Wikipedia-train** | 1174 | 2647 | 6414 |
| **Wikipedia-dev** | 392 | 895 | 2183 |
| **Wikipedia-test** | 393 | 898 | 2201 |

**Please note:**
This release contains only the Wikipedia portion. Reconstructing the newswire portion requires the CoNLL-2009 English training data, which is distributed under a restrictive license, so the newswire data is not included here.
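
A quick sanity check (a sketch, under the same `qa_srl` Hub-id assumption as above) that the loaded split sizes match the Wikipedia rows of the table:

```python
# Verify split sizes against the Wikipedia QA counts in the table above.
from datasets import load_dataset

ds = load_dataset("qa_srl")  # "qa_srl" id is an assumption
print({name: split.num_rows for name, split in ds.items()})
# expected (per the Wikipedia rows): {'train': 6414, 'validation': 2183, 'test': 2201}
```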

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

We annotated over 3,000 sentences (nearly 8,000 verbs) in total across two domains: newswire (PropBank) and Wikipedia.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

Non-expert annotators were given a short tutorial and a small set of sample annotations (about 10 sentences). Annotators were hired if they showed a good understanding of English and of the task. The entire screening process usually took less than two hours.

#### Who are the annotators?

10 part-time, non-expert annotators hired through Upwork (previously oDesk).

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[Luheng He](mailto:luheng@cs.washington.edu)

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{he-etal-2015-question,
    title = {Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language},
    author = {He, Luheng and Lewis, Mike and Zettlemoyer, Luke},
    booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
    year = {2015},
    url = {https://www.aclweb.org/anthology/D15-1076},
}
```

### Contributions

Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.