asahi417 committed on
Commit
0444c95
1 Parent(s): 2f0bafe
README.md CHANGED
@@ -1,3 +1,121 @@
 ---
-license: mit
+license: cc-by-sa-3.0
+pretty_name: JaQuAD QG
+languages: ja
+multilinguality: monolingual
+size_categories: 10K<n<100K
+source_datasets: extended|wikipedia
+task_categories: question-generation
+task_ids: question-generation
 ---
+
+# Dataset Card for "qg_jaquad"
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+## Dataset Description
+- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
+- **Size of downloaded dataset files:** 284.1 MB
+- **Size of the generated dataset:** 269 MB
+
+### Dataset Summary
+The [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset adapted for the question generation (QG) task. Since the test set of the original data is not publicly released, we randomly sampled test questions from the training set.
+
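+The processed splits can be loaded with the `datasets` library; a minimal sketch (the repository id `asahi417/qg_jaquad` is taken from the loader script added in this commit):
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("asahi417/qg_jaquad")
+print(dataset)                          # DatasetDict with train/validation/test splits
+print(dataset["train"][0]["question"])  # a question string
+```
+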
+### Supported Tasks and Leaderboards
+* `question-generation`: The dataset can be used to train a model for question generation. Success on this task is typically measured by a high BLEU-4, METEOR, or ROUGE-L score.
+
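+For illustration, BLEU-4 against a reference question can be computed as follows (a sketch using `nltk`; the character-level tokenization here is a simplification, and real evaluations typically use a proper Japanese tokenizer):
+
+```python
+from nltk.translate.bleu_score import sentence_bleu
+
+reference = list("日本の首都はどこですか。")  # gold question, as character tokens
+candidate = list("日本の首都はどこか。")      # model output, as character tokens
+score = sentence_bleu([reference], candidate, weights=(0.25, 0.25, 0.25, 0.25))
+print(f"BLEU-4: {score:.3f}")
+```
+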
+### Languages
+Japanese (ja)
+
+## Dataset Structure
+### Data Instances
+#### plain_text
+- **Size of downloaded dataset files:** 284.1 MB
+- **Size of the generated dataset:** 269 MB
+
+An example of 'train' looks as follows (the instance below was carried over from the English `qg_squad` card to illustrate the schema; actual JaQuAD instances contain Japanese text).
+```
+{
+    "question": "What is heresy mainly at odds with?",
+    "passage": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+    "answer": "established beliefs or customs",
+    "sentence": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs .",
+    "passage_sentence": "<hl> Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs . <hl> A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+    "passage_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl>. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+    "sentence_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl> ."
+}
+```
+### Data Fields
+The data fields are the same among all splits.
+#### plain_text
+- `question`: a `string` feature.
+- `passage`: a `string` feature.
+- `answer`: a `string` feature.
+- `sentence`: a `string` feature.
+- `passage_answer`: a `string` feature, the same as `passage` but with the answer highlighted by the special token `<hl>`.
+- `passage_sentence`: a `string` feature, the same as `passage` but with the sentence containing the answer highlighted by the special token `<hl>`.
+- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
+
+Each of the `passage_answer`, `passage_sentence`, and `sentence_answer` features can be used to train a question generation model, each exposing different information. The `passage_answer` and `sentence_answer` features are for answer-aware question generation, while the `passage_sentence` feature is for sentence-aware question generation.
+
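+As an illustration, an answer-aware QG input/output pair could be prepared as follows (a sketch assuming a T5-style text-to-text model; the checkpoint name and the task prefix are illustrative, not part of this repository):
+
+```python
+from datasets import load_dataset
+from transformers import AutoTokenizer
+
+dataset = load_dataset("asahi417/qg_jaquad")
+tokenizer = AutoTokenizer.from_pretrained("sonoisa/t5-base-japanese")  # illustrative checkpoint
+
+example = dataset["train"][0]
+source = "generate question: " + example["passage_answer"]  # answer span marked with <hl>
+target = example["question"]
+model_inputs = tokenizer(source, truncation=True, max_length=512)
+labels = tokenizer(target, truncation=True, max_length=64)["input_ids"]
+```
+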
+### Data Splits
+
+| name |train|validation|test|
+|----------|----:|---------:|---:|
+|plain_text|27809|      3939|3939|
+
+## Dataset Creation
+### Curation Rationale
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Source Data
+#### Initial Data Collection and Normalization
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+#### Who are the source language producers?
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Annotations
+#### Annotation process
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+#### Who are the annotators?
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Personal and Sensitive Information
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+## Considerations for Using the Data
+### Social Impact of Dataset
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Discussion of Biases
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Other Known Limitations
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+## Additional Information
+### Dataset Curators
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+### Licensing Information
+CC BY-SA 3.0, following the license of the original [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset (see the `license` field in the YAML header above).
+### Citation Information
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
data/processed/test00.jsonl … test03.jsonl ADDED
data/processed/train00.jsonl … train27.jsonl ADDED
data/processed/validation00.jsonl … validation03.jsonl ADDED
The diffs for these files are too large to render; see the raw files.
ja_sentence_split.py ADDED
@@ -0,0 +1,41 @@
+import re
+from typing import List
+
+__all__ = ['JASplitter']
+
+
+class JASplitter:
+    """ JA sentence splitter from https://github.com/himkt/konoha/blob/master/konoha/sentence_tokenizer.py """
+
+    PERIOD = "。"
+    PERIOD_SPECIAL = "__PERIOD__"
+    # Protect periods inside full-width brackets and quotes so they do not end a sentence.
+    PATTERNS = [re.compile(r"（.*?）"), re.compile(r"「.*?」")]
+
+    @staticmethod
+    def conv_period(item) -> str:
+        return item.group(0).replace(JASplitter.PERIOD, JASplitter.PERIOD_SPECIAL)
+
+    def __call__(self, document) -> List[str]:
+        # Mask periods inside bracketed/quoted spans.
+        for pattern in JASplitter.PATTERNS:
+            document = re.sub(pattern, self.conv_period, document)
+
+        result = []
+        for line in document.split("\n"):
+            line = line.rstrip()
+            line = line.replace("\n", "")
+            line = line.replace("\r", "")
+            # Break after every remaining (unmasked) period.
+            line = line.replace("。", "。\n")
+            sentences = line.split("\n")
+
+            for sentence in sentences:
+                if not sentence:
+                    continue
+
+                # Restore the masked periods.
+                period_special = JASplitter.PERIOD_SPECIAL
+                period = JASplitter.PERIOD
+                sentence = sentence.replace(period_special, period)
+                result.append(sentence)
+
+        return result
+
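
A quick illustrative check of the splitter (not part of the committed file); a period inside quotes is protected and then restored:

```python
from ja_sentence_split import JASplitter

splitter = JASplitter()
print(splitter("昨日は雨だった。「今日は晴れ。」と彼は言った。"))
# -> ['昨日は雨だった。', '「今日は晴れ。」と彼は言った。']
```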
process.py ADDED
@@ -0,0 +1,100 @@
+""" Script to process the raw JaQuAD data into the question generation format.
+After running, split each output file into 1000-line chunks:
+    gsplit -l 1000 -d --additional-suffix=.jsonl train.jsonl train
+    gsplit -l 1000 -d --additional-suffix=.jsonl test.jsonl test
+    gsplit -l 1000 -d --additional-suffix=.jsonl validation.jsonl validation
+"""
+import json
+import os
+import re
+from typing import Dict
+
+from tqdm import tqdm
+from datasets import load_dataset
+
+from ja_sentence_split import JASplitter
+
+HIGHLIGHT_TOKEN = '<hl>'
+SPLITTER = JASplitter()
+
+
+def get_sentence(document: str):
+    return [str(s) for s in SPLITTER(document)]
+
+
+def process_single_data(data: Dict):
+    """ Convert a single raw JaQuAD record into the QG format """
+    example = {'question': data["question"], 'passage': data["context"]}
+
+    # check that the answer span matches the passage
+    answer_text = data['answers']['text'][0]
+    answer_start = data['answers']['answer_start'][0]
+    answer_end = answer_start + len(answer_text)
+    assert example['passage'][answer_start: answer_end] == answer_text
+    example['answer'] = answer_text
+
+    # get the sentence containing the answer
+    position = example['passage'].find(example['answer'])
+    assert position != -1
+    before_tmp = get_sentence(example['passage'][:position])
+    if len(before_tmp) == 0:
+        before = ''
+        before_sentence = ''
+    else:
+        if before_tmp[-1].endswith('。'):
+            before = ' '.join(before_tmp)
+            before_sentence = ''
+        else:
+            before = ' '.join(before_tmp[:-1])
+            before_sentence = before_tmp[-1]
+    after_tmp = get_sentence(example['passage'][position + len(example['answer']):])
+    if len(after_tmp) == 0:
+        after = ''
+        after_sentence = ''
+    else:
+        after = ' '.join(after_tmp[1:])
+        after_sentence = after_tmp[0]
+    example['sentence'] = '{}{}{}'.format(before_sentence, example['answer'], after_sentence)
+
+    # get passage_sentence: the whole passage with the answer sentence highlighted
+    source_text = '{0}{1}{2}{1}{3}'.format(before, HIGHLIGHT_TOKEN, example['sentence'], after)
+    example['passage_sentence'] = re.sub(r'\s+', ' ', source_text)
+
+    # get passage_answer: the whole passage with the answer span highlighted
+    source_text = '{0}{1}{2}{1}{3}'.format(
+        example['passage'][:position], HIGHLIGHT_TOKEN, example['answer'],
+        example['passage'][position + len(example['answer']):])
+    example['passage_answer'] = re.sub(r'\s+', ' ', source_text)
+
+    # get sentence_answer: the answer sentence with the answer span highlighted
+    before = get_sentence(example['passage'][:position])
+    if len(before) == 0 or before[-1].endswith('。'):
+        before = ''
+    else:
+        before = before[-1]
+    after = get_sentence(example['passage'][position + len(example['answer']):])
+    if len(after) == 0:
+        after = ''
+    else:
+        after = after[0]
+    source_text = '{0}{1}{2}{1}{3}'.format(before, HIGHLIGHT_TOKEN, example['answer'], after)
+    example['sentence_answer'] = re.sub(r'\s+', ' ', source_text)
+    for _k in example.keys():
+        example[_k] = example[_k].replace('。\n\n', '。').replace('。\n', '。')
+    return example
+
+
+if __name__ == '__main__':
+    jaquad_data = load_dataset("SkelterLabsInc/JaQuAD")
+    data_dev = jaquad_data['validation']
+    data_train = jaquad_data['train'].shuffle(seed=1)
+    # The original test set is private, so carve a test set of the same size as
+    # the validation set out of the shuffled training data.
+    data_test = [data_train[i] for i in range(len(data_dev))]
+    data_train = [data_train[i] for i in range(len(data_dev), len(data_train))]
+
+    data_all = {'train': data_train, 'validation': data_dev, 'test': data_test}
+
+    output = './data/processed'
+    os.makedirs(output, exist_ok=True)
+    for k, _data in data_all.items():
+        with open('{}/{}.jsonl'.format(output, k), 'w') as f:
+            for single_data in tqdm(_data):
+                single_data = process_single_data(single_data)
+                f.write(json.dumps(single_data) + '\n')
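
For intuition, here is what `process_single_data` yields on a toy record (illustrative values, not taken from JaQuAD):

```python
from process import process_single_data

record = {'question': '日本の首都はどこか。',
          'context': '日本の首都は東京である。東京は関東地方にある。',
          'answers': {'text': ['東京'], 'answer_start': [6]}}
out = process_single_data(record)
print(out['sentence'])          # 日本の首都は東京である。
print(out['sentence_answer'])   # 日本の首都は<hl>東京<hl>である。
print(out['passage_answer'])    # 日本の首都は<hl>東京<hl>である。東京は関東地方にある。
print(out['passage_sentence'])  # <hl>日本の首都は東京である。<hl>東京は関東地方にある。
```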
qg_jaquad.py ADDED
@@ -0,0 +1,72 @@
+import json
+import datasets
+from datasets.tasks import Summarization
+
+logger = datasets.logging.get_logger(__name__)
+_DESCRIPTION = """
+[JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset for the question generation (QG) task. The test set of the original
+data is not publicly released, so we randomly sampled test questions from the training set.
+"""
+_URL = 'https://huggingface.co/datasets/asahi417/qg_jaquad/raw/main/data/processed'
+_URLS = {
+    'train': ['{}/train{:02d}.jsonl'.format(_URL, i) for i in range(28)],
+    'test': ['{}/test{:02d}.jsonl'.format(_URL, i) for i in range(4)],
+    'validation': ['{}/validation{:02d}.jsonl'.format(_URL, i) for i in range(4)]
+}
+
+
+class QGJaquadConfig(datasets.BuilderConfig):
+    """BuilderConfig for QGJaquad"""
+
+    def __init__(self, **kwargs):
+        """BuilderConfig for QGJaquad.
+        Args:
+            **kwargs: keyword arguments forwarded to super.
+        """
+        super(QGJaquadConfig, self).__init__(**kwargs)
+
+
+class QGJaquad(datasets.GeneratorBasedBuilder):
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "answer": datasets.Value("string"),
+                    "question": datasets.Value("string"),
+                    "sentence": datasets.Value("string"),
+                    "passage": datasets.Value("string"),
+                    "sentence_answer": datasets.Value("string"),
+                    "passage_answer": datasets.Value("string"),
+                    "passage_sentence": datasets.Value("string")
+                }
+            ),
+            supervised_keys=None,
+            # There is no dedicated QG task template, so the Summarization template is
+            # reused with passage_answer as the input and question as the target.
+            task_templates=[
+                Summarization(task='question generation', text_column="passage_answer", summary_column='question')
+            ],
+            homepage="https://github.com/asahi417/lm-question-generation"
+        )
+
+    def _split_generators(self, dl_manager):
+        downloaded_file = dl_manager.download_and_extract(_URLS)
+        return [
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_file["train"]}),
+            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepaths": downloaded_file["validation"]}),
+            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": downloaded_file["test"]}),
+        ]
+
+    def _generate_examples(self, filepaths):
+        _key = 0
+        for filepath in filepaths:
+            logger.info("generating examples from = %s", filepath)
+            with open(filepath, encoding="utf-8") as f:
+                _list = f.read().split('\n')
+                if _list[-1] == '':
+                    _list = _list[:-1]  # drop the trailing empty line
+                for i in _list:
+                    data = json.loads(i)
+                    yield _key, data
+                    _key += 1
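
Since the loader only streams these chunked JSONL files, a single chunk can also be inspected directly (a minimal sketch using the standard library; the URL follows the `_URL` pattern above):

```python
import json
import urllib.request

url = "https://huggingface.co/datasets/asahi417/qg_jaquad/raw/main/data/processed/validation00.jsonl"
with urllib.request.urlopen(url) as f:
    first = json.loads(f.readline())  # first record of the chunk
print(first["question"])
```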