asahi417 committed
Commit 030abe3
1 Parent(s): 22dd37c
This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +122 -0
  2. data/processed/dev00.jsonl +0 -0
  3. data/processed/dev01.jsonl +0 -0
  4. data/processed/dev02.jsonl +0 -0
  5. data/processed/dev03.jsonl +0 -0
  6. data/processed/test00.jsonl +0 -0
  7. data/processed/test01.jsonl +0 -0
  8. data/processed/test02.jsonl +0 -0
  9. data/processed/test03.jsonl +0 -0
  10. data/processed/train00.jsonl +0 -0
  11. data/processed/train01.jsonl +0 -0
  12. data/processed/train02.jsonl +0 -0
  13. data/processed/train03.jsonl +0 -0
  14. data/processed/train04.jsonl +0 -0
  15. data/processed/train05.jsonl +0 -0
  16. data/processed/train06.jsonl +0 -0
  17. data/processed/train07.jsonl +0 -0
  18. data/processed/train08.jsonl +0 -0
  19. data/processed/train09.jsonl +0 -0
  20. data/processed/train10.jsonl +0 -0
  21. data/processed/train11.jsonl +0 -0
  22. data/processed/train12.jsonl +0 -0
  23. data/processed/train13.jsonl +0 -0
  24. data/processed/train14.jsonl +0 -0
  25. data/processed/train15.jsonl +0 -0
  26. data/processed/train16.jsonl +0 -0
  27. data/processed/train17.jsonl +0 -0
  28. data/processed/train18.jsonl +0 -0
  29. data/processed/train19.jsonl +0 -0
  30. data/processed/train20.jsonl +0 -0
  31. data/processed/train21.jsonl +0 -0
  32. data/processed/train22.jsonl +0 -0
  33. data/raw/dev.jsonl +0 -0
  34. data/raw/test.jsonl +0 -0
  35. data/raw/train00.jsonl +0 -0
  36. data/raw/train01.jsonl +0 -0
  37. data/raw/train02.jsonl +0 -0
  38. data/raw/train03.jsonl +0 -0
  39. data/raw/train04.jsonl +0 -0
  40. data/raw/train05.jsonl +0 -0
  41. data/raw/train06.jsonl +0 -0
  42. data/raw/train07.jsonl +0 -0
  43. data/raw/train08.jsonl +0 -0
  44. data/raw/train09.jsonl +0 -0
  45. data/raw/train10.jsonl +0 -0
  46. process.py +102 -0
  47. qg_squad.py +64 -0
  48. reference_files/ans-dev-normalized.txt +0 -0
  49. reference_files/ans-dev.txt +0 -0
  50. reference_files/ans-test-normalized.txt +0 -0
README.md CHANGED
@@ -1,3 +1,125 @@
---
license: mit
+ pretty_name: SQuAD QG
+ languages: en
+ multilinguality: monolingual
+ size_categories: 10K<n<100K
+ source_datasets: extended|wikipedia
+ task_categories: question-generation
+ task_ids: question-generation
---
+ 
+ # Dataset Card for "qg_squad"
+ 
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
+ - **Size of downloaded dataset files:** 33.51 MB
+ - **Size of the generated dataset:** 85.75 MB
+ - **Total amount of disk used:** 119.27 MB
+ 
+ ### Dataset Summary
+ [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset for the question generation (QG) task. The
+ train/development/test split follows the ["Neural Question Generation"](https://arxiv.org/abs/1705.00106) work and is
+ compatible with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11).
+ 
+ 
+ ### Supported Tasks and Leaderboards
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Languages
+ English (en)
+ 
+ ## Dataset Structure
+ ### Data Instances
+ #### plain_text
+ - **Size of downloaded dataset files:** 33.51 MB
+ - **Size of the generated dataset:** 85.75 MB
+ - **Total amount of disk used:** 119.27 MB
+ 
+ An example of 'train' looks as follows.
+ ```
+ {
+     "question": "What is heresy mainly at odds with?",
+     "passage": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+     "answer": "established beliefs or customs",
+     "sentence": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs .",
+     "passage_sentence": "<hl> Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs . <hl> A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+     "passage_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl>. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+     "sentence_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl> ."
+ }
+ ```
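
For illustration, an instance can be inspected with the `datasets` library. A minimal sketch; the hub id `asahi417/qg_squad` is an assumption based on this repository's owner and card name, so adjust it if the dataset lives under a different namespace:

```python
from datasets import load_dataset

# Hub id assumed from this repository; not confirmed by the commit itself.
dataset = load_dataset("asahi417/qg_squad")
example = dataset["train"][0]
print(example["question"])  # e.g. "What is heresy mainly at odds with?"
print(example["answer"])    # e.g. "established beliefs or customs"
```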
+ ### Data Fields
+ The data fields are the same among all splits.
+ #### plain_text
+ - `question`: a `string` feature.
+ - `passage`: a `string` feature.
+ - `answer`: a `string` feature.
+ - `sentence`: a `string` feature.
+ - `passage_sentence`: a `string` feature.
+ - `passage_answer`: a `string` feature.
+ - `sentence_answer`: a `string` feature.
+ 
+ ### Data Splits
+ 
+ | name       | train | validation | test  |
+ |------------|------:|-----------:|------:|
+ | plain_text | 75722 |      10570 | 11877 |
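
The split sizes above can be verified programmatically. A sketch, under the same hub-id assumption as the earlier example:

```python
from datasets import load_dataset

dataset = load_dataset("asahi417/qg_squad")  # hub id assumed, see above
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))  # expected: 75722, 10570, 11877
```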
+ 
+ ## Dataset Creation
+ ### Curation Rationale
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ #### Who are the source language producers?
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Annotations
+ #### Annotation process
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ #### Who are the annotators?
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Personal and Sensitive Information
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Discussion of Biases
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Other Known Limitations
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Additional Information
+ ### Dataset Curators
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Licensing Information
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Citation Information
+ ```
+ TBA
+ ```
data/processed/dev00.jsonl – dev03.jsonl ADDED
data/processed/test00.jsonl – test03.jsonl ADDED
data/processed/train00.jsonl – train22.jsonl ADDED
data/raw/dev.jsonl, test.jsonl, train00.jsonl – train10.jsonl ADDED
The diffs for these files are too large to render. See raw diff.
process.py ADDED
@@ -0,0 +1,102 @@
+ """ Script to process the raw SQuAD files into the question generation (QG) format.
+ You need to run `python -m spacy download en_core_web_sm` first.
+ Split the output files before uploading to the dataset hub with
+ ```
+ gsplit -l 3300 -d --additional-suffix=.jsonl train.jsonl train
+ gsplit -l 3300 -d --additional-suffix=.jsonl test.jsonl test
+ gsplit -l 3300 -d --additional-suffix=.jsonl dev.jsonl dev
+ ```
+ """
+ import json
+ import os
+ import re
+ from glob import glob
+ from typing import Dict
+ 
+ import spacy
+ from tqdm import tqdm
+ 
+ SPLITTER = spacy.load('en_core_web_sm')
+ HIGHLIGHT_TOKEN = '<hl>'
+ 
+ 
+ def get_sentence(document: str):
+     """ Split a document into sentences with spacy. """
+     return [str(s) for s in SPLITTER(document).sents]
+ 
+ 
+ def jsonline_reader(filename: str):
+     with open(filename, 'r') as f_reader:
+         examples = [json.loads(i) for i in f_reader.read().split('\n') if len(i) > 0]
+     return examples
+ 
+ 
+ def process_single_data(data: Dict):
+     """ Convert a single raw SQuAD record into the QG format. """
+     example = {'question': data["question"], 'passage': data["context"], 'answer': data["answer"]}
+ 
+     # get sentence: recover the sentence that contains the answer span
+     position = example['passage'].find(example['answer'])
+     assert position != -1, 'answer not found in passage'
+     before_tmp = get_sentence(example['passage'][:position])
+     if len(before_tmp) == 0:
+         before = ''
+         before_sentence = ''
+     else:
+         if before_tmp[-1].endswith('.'):
+             before = ' '.join(before_tmp)
+             before_sentence = ''
+         else:
+             before = ' '.join(before_tmp[:-1])
+             before_sentence = before_tmp[-1]
+             before_sentence = before_sentence if before_sentence.endswith(' ') else '{} '.format(before_sentence)
+     after_tmp = get_sentence(example['passage'][position + len(example['answer']):])
+     if len(after_tmp) == 0:
+         after = ''
+         after_sentence = ''
+     else:
+         after = ' '.join(after_tmp[1:])
+         after_sentence = after_tmp[0]
+         after_sentence = after_sentence if after_sentence.startswith(' ') else ' {}'.format(after_sentence)
+     example['sentence'] = '{}{}{}'.format(before_sentence, example['answer'], after_sentence)
+ 
+     # get passage_sentence: highlight the whole sentence inside the passage
+     before = '' if before == '' else '{} '.format(before)
+     after = '' if after == '' else ' {}'.format(after)
+     source_text = '{0}{1} {2} {1}{3}'.format(before, HIGHLIGHT_TOKEN, example['sentence'], after)
+     example['passage_sentence'] = re.sub(r'\s+', ' ', source_text)
+ 
+     # get passage_answer: highlight the answer span inside the passage
+     source_text = '{0}{1} {2} {1}{3}'.format(
+         example['passage'][:position], HIGHLIGHT_TOKEN, example['answer'],
+         example['passage'][position + len(example['answer']):])
+     example['passage_answer'] = re.sub(r'\s+', ' ', source_text)
+ 
+     # get sentence_answer: highlight the answer span inside its sentence
+     before = get_sentence(example['passage'][:position])
+     if len(before) == 0 or before[-1].endswith('.'):
+         before = ''
+     else:
+         before = before[-1] if before[-1].endswith(' ') else '{} '.format(before[-1])
+     after = get_sentence(example['passage'][position + len(example['answer']):])
+     if len(after) == 0:
+         after = ''
+     else:
+         after = after[0] if after[0].startswith(' ') else ' {}'.format(after[0])
+     source_text = '{0}{1} {2} {1}{3}'.format(before, HIGHLIGHT_TOKEN, example['answer'], after)
+     example['sentence_answer'] = re.sub(r'\s+', ' ', source_text)
+ 
+     return example
+ 
+ 
+ if __name__ == '__main__':
+     output = './data/processed'
+     os.makedirs(output, exist_ok=True)
+     path = {'train': 'data/raw/train*.jsonl', 'dev': 'data/raw/dev.jsonl', 'test': 'data/raw/test.jsonl'}
+     for k, v in path.items():
+         json_data = []
+         for _file in sorted(glob(v)):
+             json_data += jsonline_reader(_file)
+         with open('{}/{}.jsonl'.format(output, k), 'w') as f:
+             for single_data in tqdm(json_data):
+                 single_data = process_single_data(single_data)
+                 f.write(json.dumps(single_data) + '\n')
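
For illustration, a minimal sketch of how `process_single_data` behaves on a toy record. The record itself is hypothetical; it only needs the `question`/`context`/`answer` keys the script reads, and `en_core_web_sm` must be installed since importing `process` loads the spacy model:

```python
# Hypothetical usage of process.py (not part of the commit).
from process import process_single_data

raw = {
    "question": "Where is the Eiffel Tower located?",
    "context": "The Eiffel Tower is in Paris. It was completed in 1889.",
    "answer": "Paris",
}
example = process_single_data(raw)
print(example["sentence_answer"])
# e.g. "The Eiffel Tower is in <hl> Paris <hl> ."
```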
qg_squad.py ADDED
@@ -0,0 +1,64 @@
+ import json
+ 
+ import datasets
+ from datasets.tasks import Summarization
+ 
+ logger = datasets.logging.get_logger(__name__)
+ 
+ _DESCRIPTION = """
+ [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset for question generation (QG) models. The
+ test/development split follows the ["Neural Question Generation"](https://arxiv.org/abs/1705.00106) work and is
+ compatible with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11).
+ """
+ # `resolve/main` is required to download raw files from a dataset repo.
+ _URL = 'https://huggingface.co/datasets/asahi417/squad_qg/resolve/main/data/processed'
+ _URLS = {
+     'train': ['{}/train{:02d}.jsonl'.format(_URL, i) for i in range(23)],
+     'test': ['{}/test{:02d}.jsonl'.format(_URL, i) for i in range(4)],
+     'validation': ['{}/dev{:02d}.jsonl'.format(_URL, i) for i in range(4)]
+ }
+ 
+ 
+ class SquadQGConfig(datasets.BuilderConfig):
+     """BuilderConfig for SquadQG."""
+ 
+     def __init__(self, **kwargs):
+         """BuilderConfig for SquadQG.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(SquadQGConfig, self).__init__(**kwargs)
+ 
+ 
+ class SquadQG(datasets.GeneratorBasedBuilder):
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "answer": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "sentence": datasets.Value("string"),
+                     "passage": datasets.Value("string"),
+                     "sentence_answer": datasets.Value("string"),
+                     "passage_answer": datasets.Value("string"),
+                     "passage_sentence": datasets.Value("string")
+                 }
+             ),
+             supervised_keys=None,
+             # `datasets` has no dedicated QG template, so Summarization is reused
+             # with passage_answer as the source text and question as the target.
+             task_templates=[
+                 Summarization(task='question generation', text_column="passage_answer", summary_column='question')
+             ],
+             homepage="https://github.com/asahi417/lm-question-generation"
+         )
+ 
+     def _split_generators(self, dl_manager):
+         downloaded_file = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_file["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepaths": downloaded_file["validation"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": downloaded_file["test"]}),
+         ]
+ 
+     def _generate_examples(self, filepaths):
+         # each split is sharded over several jsonl files, so iterate over the list
+         _id = 0
+         for filepath in filepaths:
+             logger.info("generating examples from = %s", filepath)
+             with open(filepath, encoding="utf-8") as f:
+                 for line in f.read().split('\n'):
+                     if not line:
+                         continue
+                     yield _id, json.loads(line)
+                     _id += 1
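
A sketch of exercising this loading script locally, assuming a `datasets` version that still supports Python loading scripts and that the command is run from the repository root so the relative path resolves:

```python
from datasets import load_dataset

# Load through the local script; passing the hub repo id instead would
# resolve the script remotely (assumption based on _URL above).
dataset = load_dataset("./qg_squad.py")
print(dataset)  # DatasetDict with train/validation/test splits
```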
reference_files/ans-dev-normalized.txt, ans-dev.txt, ans-test-normalized.txt ADDED
The diffs for these files are too large to render. See raw diff.