Datasets: lmqg/
Languages: Russian
Multilinguality: monolingual
Size Categories: 1k<n<10K
Source Datasets: lmqg/qg_ruquad
Tags: question-generation
asahi417 committed
Commit 47a24fc
1 Parent(s): 33b6467
.gitattributes CHANGED
@@ -52,3 +52,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+data/processed/test.jsonl filter=lfs diff=lfs merge=lfs -text
+data/processed/train.jsonl filter=lfs diff=lfs merge=lfs -text
+data/processed/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,83 @@
---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_ruquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---

# Dataset Card for "lmqg/qag_squad"

## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)

### Dataset Summary
This is a question & answer generation dataset based on SQuAD.

### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended for training a model for question & answer generation.
  Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).

### Languages
English (en)

## Dataset Structure
An example of 'train' looks as follows.
```
{
    "paragraph": "\"4 Minutes\" was released as the album's lead single and peaked at number three on the Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist with the most top-ten hits. In the UK she retained her record for the most number-one singles for a female artist; \"4 Minutes\" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from Recording Industry Association of Japan, the most for any artist. To further promote the album, Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million, it became the highest-grossing tour by a solo artist then, surpassing the previous record Madonna set with the Confessions Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European dates, and after it ended, the total gross was $408 million.",
    "questions": [
        "Which single was released as the album's lead single?",
        "Madonna surpassed which artist with the most top-ten hits?",
        "4 minutes became Madonna's which number one single in the UK?",
        "What is the name of the first tour with Live Nation?",
        "How much did Stick and Sweet Tour grossed?"
    ],
    "answers": [
        "4 Minutes",
        "Elvis Presley",
        "thirteenth",
        "Sticky & Sweet Tour",
        "$280 million,"
    ],
    "questions_answers": "question: Which single was released as the album's lead single?, answer: 4 Minutes | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley | question: 4 minutes became Madonna's which number one single in the UK?, answer: thirteenth | question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour | question: How much did Stick and Sweet Tour grossed?, answer: $280 million,"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.

## Data Splits

|train|validation|test |
|----:|---------:|----:|
|16462|      2067| 2429|
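The `questions_answers` field is just the `questions` and `answers` lists flattened into one string with a `" | "` separator, as in the example record above. A minimal sketch of that encoding (the helper name `encode_pairs` is illustrative, not part of this dataset's tooling):

```python
# Flatten parallel question/answer lists into the single-string format
# seen in the `questions_answers` field of the example record.
SEP_TOKEN = " | "


def encode_pairs(questions, answers):
    return SEP_TOKEN.join(
        f"question: {q}, answer: {a}" for q, a in zip(questions, answers)
    )


pairs = encode_pairs(
    ["Which single was released as the album's lead single?"],
    ["4 Minutes"],
)
print(pairs)
# question: Which single was released as the album's lead single?, answer: 4 Minutes
```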

## Citation Information

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
data/processed/test.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a5d2ba9de566556c7cc4730fb971eaa4504222ba76232e18697ee56b939a93f
size 20433594
data/processed/train.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6062551a42a11d341a0068cdf6f4357a2f01b07c5711104792fe9877e765417
size 79689865
data/processed/validation.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ee202406878741ace9761bf2bf9272d574b5e337f3d8fa70f8ee35746f27f62
size 20828337
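The three `.jsonl` entries above are Git LFS pointer files (version, oid, size), not the data itself. A small sketch of reading such a pointer into its fields (the helper `parse_lfs_pointer` is hypothetical, not part of this repo):

```python
# Parse a Git LFS pointer file into a dict of its key/value lines.
def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7ee202406878741ace9761bf2bf9272d574b5e337f3d8fa70f8ee35746f27f62
size 20828337
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 20828337
```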
process.py ADDED
@@ -0,0 +1,38 @@
import json
import os

from tqdm import tqdm
from datasets import load_dataset

SEP_TOKEN = " | "


def create_data(hf_data):
    """Group question-answer pairs by paragraph, one record per paragraph."""
    df = hf_data.to_pandas()
    output = []
    for paragraph, g in df.groupby("paragraph"):
        example = {
            'paragraph': paragraph.replace(SEP_TOKEN, " "),
            'questions': [_g.replace(SEP_TOKEN, " ") for _g in g['question']],
            'answers': [_g.replace(SEP_TOKEN, " ") for _g in g['answer']],
        }
        # Flatten the pairs into one string, joined by SEP_TOKEN.
        example["questions_answers"] = SEP_TOKEN.join(
            f"question: {q}, answer: {a}"
            for q, a in zip(example["questions"], example["answers"])
        )
        output.append(example)
    return output


if __name__ == '__main__':
    qg_ruquad = load_dataset("lmqg/qg_ruquad")
    data_valid = create_data(qg_ruquad['validation'])
    data_train = create_data(qg_ruquad['train'])
    data_test = create_data(qg_ruquad['test'])
    data_all = {'train': data_train, 'validation': data_valid, 'test': data_test}
    output = './data/processed'
    os.makedirs(output, exist_ok=True)
    # Write each split as JSON Lines: one record per line.
    for k, _data in data_all.items():
        with open('{}/{}.jsonl'.format(output, k), 'w') as f:
            for single_data in tqdm(_data):
                f.write(json.dumps(single_data) + '\n')
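`process.py` emits each split in the JSON Lines convention: one `json.dumps`-encoded record per line. A minimal round-trip sketch of that format, using an in-memory buffer instead of the real `data/processed/*.jsonl` files:

```python
import io
import json

# Toy records shaped like the processed dataset (field names from the card).
records = [
    {"paragraph": "p1", "questions": ["q1?"], "answers": ["a1"]},
    {"paragraph": "p2", "questions": ["q2?"], "answers": ["a2"]},
]

# Write: one JSON object per line, as process.py does.
buf = io.StringIO()
for r in records:
    buf.write(json.dumps(r) + "\n")

# Read back: parse each line independently.
buf.seek(0)
loaded = [json.loads(line) for line in buf]
assert loaded == records
```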