asahi417 committed
Commit: 22d4f68 (1 parent: 5683f74)

README.md ADDED
@@ -0,0 +1,119 @@
---
license: cc-by-4.0
pretty_name: SubjQA for question generation
languages: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: subjqa
task_categories: question-generation
task_ids: question-generation
---

# Dataset Card for "subjqa"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)

### Dataset Summary
A modified version of [SubjQA](https://github.com/megagonlabs/SubjQA) for the question generation (QG) task.

### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
  Success on this task is typically measured by a high BLEU-4, METEOR, or ROUGE-L score (see the evaluation sketch below).
  This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/question-generation-on-squad11).

+ ### Languages
51
+ English (en)
52
+
53
+ ## Dataset Structure
54
+ ### Data Instances
55
+ #### plain_text
56
+ - **Size of downloaded dataset files:** 284.1 MB
57
+ - **Size of the generated dataset:** 269 MB
58
+ An example of 'train' looks as follows.
59
+ ```
60
+ {
61
+ "question": "What is heresy mainly at odds with?",
62
+ "paragraph": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
63
+ "answer": "established beliefs or customs",
64
+ "sentence": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs .",
65
+ "paragraph_sentence": "<hl> Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs . <hl> A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
66
+ "paragraph_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl>. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
67
+ "sentence_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl> ."
68
+ }
69
+ ```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by the special token `<hl>`.
- `paragraph_id`: a `string` feature.
- `question_subj_level`: an `int32` feature.
- `answer_subj_level`: an `int32` feature.
- `domain`: a `string` feature.

Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, but they expose different information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while `paragraph_sentence` is for sentence-aware question generation (see the sketch below).

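The highlighted variants can be reconstructed from the plain fields by wrapping the target span in `<hl>` tokens. A minimal sketch; the helper `highlight_span` is an illustrative name, not part of the dataset code:

```python
# Minimal sketch: rebuild a `paragraph_answer`-style input by wrapping the answer
# span inside the paragraph with the special highlight token `<hl>`.
# `highlight_span` is an illustrative helper name, not part of the dataset code.
def highlight_span(text: str, span: str, hl_token: str = "<hl>") -> str:
    start = text.find(span)
    if start == -1:
        raise ValueError("span not found in text")
    end = start + len(span)
    return f"{text[:start]}{hl_token} {span} {hl_token}{text[end:]}"

paragraph = (
    "Heresy is any provocative belief or theory that is strongly at variance "
    "with established beliefs or customs."
)
print(highlight_span(paragraph, "established beliefs or customs"))
# ... strongly at variance with <hl> established beliefs or customs <hl>.
```
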
### Data Splits

| name       | train | validation | test |
|------------|------:|-----------:|-----:|
| plain_text | 46306 |       8511 | 8579 |

## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
data/processed/books.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/books.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/books.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/electronics.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/electronics.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/electronics.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/grocery.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/grocery.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/grocery.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/movies.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/movies.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/movies.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/restaurants.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/restaurants.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/restaurants.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/tripadvisor.dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/tripadvisor.test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/tripadvisor.train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
process.py CHANGED
@@ -103,6 +103,7 @@ if __name__ == '__main__':
             out['question_subj_level'] = int(_df['question_subj_level'])
             out['answer_subj_level'] = int(_df['answer_subj_level'])
             out['paragraph_id'] = _df['review_id']
+            out['domain'] = _df['domain']
             output.append(out)
         with open(f'./data/processed/{i}.{s.replace(".csv", ".jsonl")}', 'w') as f:
             f.write('\n'.join([json.dumps(i) for i in output]))
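
Each processed file written by `process.py` is a JSON-Lines file with one example per line, so it can be read back as in the sketch below (the path is one of the files listed above):

```python
# Minimal sketch: read one of the processed JSON-Lines files back into memory.
import json

with open("data/processed/books.train.jsonl") as f:
    examples = [json.loads(line) for line in f]

print(len(examples))
print(examples[0].keys())  # e.g. question_subj_level, answer_subj_level, paragraph_id, domain, ...
```
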
qg_subjqa.py CHANGED
@@ -37,7 +37,8 @@ class QGSubjQA(datasets.GeneratorBasedBuilder):
                     "paragraph_sentence": datasets.Value("string"),
                     "paragraph_id": datasets.Value("string"),
                     "question_subj_level": datasets.Value("int32"),
-                    "answer_subj_level": datasets.Value("int32")
+                    "answer_subj_level": datasets.Value("int32"),
+                    "domain": datasets.Value("string"),
                 }
             ),
             supervised_keys=None,
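
The new `domain` column added here makes it possible to slice the dataset by review domain (the six domains under `data/processed/`). A minimal sketch, again assuming the dataset ID `lmqg/qg_subjqa`:

```python
# Minimal sketch: filter examples by the newly added `domain` column.
# The dataset ID "lmqg/qg_subjqa" is an assumption based on this repository's name.
from datasets import load_dataset

test = load_dataset("lmqg/qg_subjqa", split="test")
books_only = test.filter(lambda example: example["domain"] == "books")
print(len(books_only))
```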