asahi417 committed on
Commit 3c695d3
1 Parent(s): 9f700a9

commit files to HF hub

README.md ADDED
@@ -0,0 +1,211 @@
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: zh
datasets:
- lmqg/qg_zhquad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
  example_title: "Question Generation Example 1"
- text: "generate question: 芝加哥大学的<hl> 1960—61 <hl>集团理论年汇集了Daniel Gorenstein、John G. Thompson和Walter Feit等团体理论家,奠定了一个合作的基础,借助于其他众多数学家的输入,1982中对所有有限的简单群进行了分类。这个项目的规模超过了以往的数学研究,无论是证明的长度还是研究人员的数量。目前正在进行研究,以简化这一分类的证明。如今,群论仍然是一个非常活跃的数学分支,影响着许多其他领域"
  example_title: "Question Generation Example 2"
- text: "extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
  example_title: "Answer Extraction Example 1"
model-index:
- name: lmqg/mt5-base-zhquad-qg-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_zhquad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Question Generation)
      type: bleu4_question_generation
      value: 14.63
    - name: ROUGE-L (Question Generation)
      type: rouge_l_question_generation
      value: 34.07
    - name: METEOR (Question Generation)
      type: meteor_question_generation
      value: 23.69
    - name: BERTScore (Question Generation)
      type: bertscore_question_generation
      value: 76.82
    - name: MoverScore (Question Generation)
      type: moverscore_question_generation
      value: 57.24
    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
      value: 78.4
    - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
      value: 81.92
    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
      value: 75.27
    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
      value: 53.55
    - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
      value: 55.82
    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
      value: 51.56
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 82.63
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 95.72
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 71.18
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 99.76
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 98.8
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 95.15
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 95.07
---

# Model Card of `lmqg/mt5-base-zhquad-qg-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base), trained jointly for question generation and answer extraction on the [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** zh
- **Training data:** [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-base-zhquad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
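
  The result can then be inspected directly; a minimal sketch, assuming `generate_qa` returns a list of `(question, answer)` tuples as in the `lmqg` examples:

```python
# print each generated question-answer pair
# (assumption: generate_qa returns a list of (question, answer) tuples)
for question, answer in question_answer_pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
```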

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg-ae")

# question generation
question = pipe("generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")

# answer extraction
answer = pipe("extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
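
  The two prefixes can also be chained into a rough end-to-end pipeline. A minimal sketch, not part of the official API: it assumes the model emits the raw answer string for `extract answers:` inputs and that the answer appears verbatim in the context, so a simple `str.replace` can insert the `<hl>` markers:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg-ae")

context = "该建筑位于南路,2011年启用,靠近南安普敦中央火车站。"

# step 1: extract an answer span from the highlighted sentence
answer = pipe("extract answers: <hl> " + context + " <hl>")[0]["generated_text"]

# step 2: highlight the extracted answer in the context and generate a question
qg_input = "generate question: " + context.replace(answer, f"<hl> {answer} <hl>", 1)
question = pipe(qg_input)[0]["generated_text"]
print(question, answer)
```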

## Evaluation


- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   76.82 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1     |   36.9  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2     |   25.74 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3     |   19.13 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4     |   14.63 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR     |   23.69 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore |   57.24 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L    |   34.07 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   78.4  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedF1Score (MoverScore)   |   53.55 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (BERTScore)  |   75.27 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (MoverScore) |   51.56 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (BERTScore)     |   81.92 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (MoverScore)    |   55.82 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json)

|                  |   Score | Type    | Dataset                                                          |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch |   95.07 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| AnswerF1Score    |   95.15 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| BERTScore        |   99.76 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1           |   92.37 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2           |   89.37 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3           |   86.14 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4           |   82.63 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR           |   71.18 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore       |   98.8  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L          |   95.72 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |

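The raw metric files linked above are plain JSON, so they can be loaded programmatically. A minimal sketch, assuming only that the linked URL serves the JSON stored in this repository (scores are reported in [0, 1] there, e.g. `0.146...` for the BLEU4 = 14.63 above):

```python
import json
import urllib.request

# fetch the question-generation metric file from the model repository
url = ("https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/"
       "eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json")
with urllib.request.urlopen(url) as f:
    metrics = json.load(f)

print(metrics["test"]["Bleu_4"])  # ~0.146, i.e. BLEU4 = 14.63
```
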
## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_zhquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-zhquad-qg-ae/raw/main/trainer_config.json).
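
Since the configuration ships with the repository as `trainer_config.json` (shown at the bottom of this commit), it can also be fetched locally. A minimal sketch using `huggingface_hub`:

```python
import json
from huggingface_hub import hf_hub_download

# download trainer_config.json from the model repository
path = hf_hub_download(repo_id="lmqg/mt5-base-zhquad-qg-ae", filename="trainer_config.json")
with open(path) as f:
    config = json.load(f)

print(config["lr"], config["epoch"])  # 0.0005, 5
```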

## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json ADDED
@@ -0,0 +1 @@
{"test": {"QAAlignedF1Score (BERTScore)": 0.7840325667005541, "QAAlignedRecall (BERTScore)": 0.8192277935277489, "QAAlignedPrecision (BERTScore)": 0.7526823553477937, "QAAlignedF1Score (MoverScore)": 0.5354834983683481, "QAAlignedRecall (MoverScore)": 0.5582311913531273, "QAAlignedPrecision (MoverScore)": 0.5156456203732518, "Bleu_1": 0.004120169646593162, "Bleu_2": 0.0002690462586043532, "Bleu_3": 2.309690680114931e-09, "Bleu_4": 6.809078457750052e-12, "METEOR": 0.18410054149133784, "ROUGE_L": 0.008395744475536369, "BERTScore": 0.6401465189512394, "MoverScore": 0.5144297590022558}, "validation": {"QAAlignedF1Score (BERTScore)": 0.7775678025934519, "QAAlignedRecall (BERTScore)": 0.792034959739694, "QAAlignedPrecision (BERTScore)": 0.7645720974173859, "QAAlignedF1Score (MoverScore)": 0.5296673149872821, "QAAlignedRecall (MoverScore)": 0.5377457296740437, "QAAlignedPrecision (MoverScore)": 0.522520577815569, "Bleu_1": 0.020248651555948103, "Bleu_2": 0.002488586967260189, "Bleu_3": 1.8529426509033387e-08, "Bleu_4": 5.1003887614414895e-11, "METEOR": 0.22259731778597108, "ROUGE_L": 0.03550224398447071, "BERTScore": 0.7134572867191199, "MoverScore": 0.5324894696915295}}
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_zhquad.default.json ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.331403644188708, "Bleu_2": 0.21741208697691133, "Bleu_3": 0.15291393571778292, "Bleu_4": 0.1114773553115748}, "test": {"Bleu_1": 0.36647700419945134, "Bleu_2": 0.25574123119104364, "Bleu_3": 0.19014527826918273, "Bleu_4": 0.14539788962863745}}
eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.9095607235141746, "Bleu_2": 0.8764123107984839, "Bleu_3": 0.8421956410982502, "Bleu_4": 0.8064084363691053, "METEOR": 0.7048075020925044, "ROUGE_L": 0.95025934294886, "BERTScore": 0.9962562629975643, "MoverScore": 0.983760362101422, "AnswerF1Score": 94.32633155253362, "AnswerExactMatch": 94.17192812044682}, "test": {"Bleu_1": 0.9237335154282393, "Bleu_2": 0.8936666994570295, "Bleu_3": 0.8613613683073535, "Bleu_4": 0.8262964699391061, "METEOR": 0.7118335238554365, "ROUGE_L": 0.9572456613078124, "BERTScore": 0.9976168804489449, "MoverScore": 0.9879965808467877, "AnswerF1Score": 95.15483706838734, "AnswerExactMatch": 95.07042253521126}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.3488776348206905, "Bleu_2": 0.23076767559810957, "Bleu_3": 0.16334827219050144, "Bleu_4": 0.11977698407723236, "METEOR": 0.22255230035396023, "ROUGE_L": 0.3153618768839511, "BERTScore": 0.7507793895323698, "MoverScore": 0.5626607296579037}, "test": {"Bleu_1": 0.3689802851596385, "Bleu_2": 0.2574117790800594, "Bleu_3": 0.19134174600366827, "Bleu_4": 0.14633645379328508, "METEOR": 0.23692156158635738, "ROUGE_L": 0.34068667498869315, "BERTScore": 0.7682263112027325, "MoverScore": 0.5724303111254394}}
eval/samples.test.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
trainer_config.json ADDED
@@ -0,0 +1 @@
{"dataset_path": "lmqg/qg_zhquad", "dataset_name": "default", "input_types": ["paragraph_answer", "paragraph_sentence"], "output_types": ["question", "answer"], "prefix_types": ["qg", "ae"], "model": "google/mt5-base", "max_length": 512, "max_length_output": 32, "epoch": 5, "batch": 32, "lr": 0.0005, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 2, "label_smoothing": 0.15}