asahi417 committed on
Commit 1be914d
1 Parent(s): d9d5ef7

model update
README.md ADDED
@@ -0,0 +1,215 @@
+
+ ---
+ license: cc-by-4.0
+ metrics:
+ - bleu4
+ - meteor
+ - rouge-l
+ - bertscore
+ - moverscore
+ language: es
+ datasets:
+ - lmqg/qg_esquad
+ pipeline_tag: text2text-generation
+ tags:
+ - question generation
+ - answer extraction
+ widget:
+ - text: "generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India."
+   example_title: "Question Generation Example 1"
+ - text: "generate question: a <hl> noviembre <hl> , que es también la estación lluviosa."
+   example_title: "Question Generation Example 2"
+ - text: "generate question: como <hl> el gobierno de Abbott <hl> que asumió el cargo el 18 de septiembre de 2013."
+   example_title: "Question Generation Example 3"
+ - text: "extract answers: <hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como."
+   example_title: "Answer Extraction Example 1"
+ - text: "extract answers: <hl> Los estudiosos y los historiadores están divididos en cuanto a qué evento señala el final de la era helenística. <hl> El período helenístico se puede ver que termina con la conquista final del corazón griego por Roma en 146 a. C. tras la guerra aquea, con la derrota final del reino ptolemaico en la batalla de Actium en 31 a. Helenístico se distingue de helénico en que el primero abarca toda la esfera de influencia griega antigua directa, mientras que el segundo se refiere a la propia Grecia."
+   example_title: "Answer Extraction Example 2"
+ model-index:
+ - name: lmqg/mbart-large-cc25-esquad-qg-ae
+   results:
+   - task:
+       name: Text2text Generation
+       type: text2text-generation
+     dataset:
+       name: lmqg/qg_esquad
+       type: default
+       args: default
+     metrics:
+     - name: BLEU4 (Question Generation)
+       type: bleu4_question_generation
+       value: 7.61
+     - name: ROUGE-L (Question Generation)
+       type: rouge_l_question_generation
+       value: 20.95
+     - name: METEOR (Question Generation)
+       type: meteor_question_generation
+       value: 19.58
+     - name: BERTScore (Question Generation)
+       type: bertscore_question_generation
+       value: 79.36
+     - name: MoverScore (Question Generation)
+       type: moverscore_question_generation
+       value: 56.05
+     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
+       value: 81.13
+     - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
+       value: 84.91
+     - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
+       value: 77.75
+     - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
+       value: 54.86
+     - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
+       value: 57.16
+     - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
+       type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
+       value: 52.82
+     - name: BLEU4 (Answer Extraction)
+       type: bleu4_answer_extraction
+       value: 21.5
+     - name: ROUGE-L (Answer Extraction)
+       type: rouge_l_answer_extraction
+       value: 46.66
+     - name: METEOR (Answer Extraction)
+       type: meteor_answer_extraction
+       value: 40.42
+     - name: BERTScore (Answer Extraction)
+       type: bertscore_answer_extraction
+       value: 86.7
+     - name: MoverScore (Answer Extraction)
+       type: moverscore_answer_extraction
+       value: 77.96
+     - name: AnswerF1Score (Answer Extraction)
+       type: answer_f1_score__answer_extraction
+       value: 70.95
+     - name: AnswerExactMatch (Answer Extraction)
+       type: answer_exact_match_answer_extraction
+       value: 52.81
+ ---
+
+ # Model Card of `lmqg/mbart-large-cc25-esquad-qg-ae`
+ This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation and answer extraction, trained jointly on [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+
+
+ ### Overview
+ - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
+ - **Language:** es
+ - **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
+ - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
+ - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+ - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
+
+ ### Usage
+ - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
+ ```python
+ from lmqg import TransformersQG
+
+ # initialize the model
+ model = TransformersQG(language="es", model="lmqg/mbart-large-cc25-esquad-qg-ae")
+
+ # generate question-answer pairs from a context
+ question_answer_pairs = model.generate_qa("a noviembre , que es también la estación lluviosa.")
+
+ ```
+
+ - With `transformers`
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-esquad-qg-ae")
+
+ # question generation: the answer span is marked with <hl>
+ question = pipe("generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.")
+
+ # answer extraction: the target sentence is marked with <hl>
+ answer = pipe("extract answers: <hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como.")
+
+ ```
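+ The prefixed, `<hl>`-highlighted inputs shown above can be assembled with plain string handling. A minimal sketch; the helper names below are illustrative only and are not part of `lmqg` or `transformers`:
+
+ ```python
+ # Illustrative helpers that build the two input formats this model expects:
+ # the task is selected by a text prefix ("generate question:" / "extract answers:"),
+ # and the relevant span is wrapped in <hl> tokens.
+
+ def qg_input(context: str, answer: str) -> str:
+     """Highlight the answer span in the context and prepend the QG prefix."""
+     highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
+     return f"generate question: {highlighted}"
+
+ def ae_input(paragraph: str, sentence: str) -> str:
+     """Highlight the target sentence in the paragraph and prepend the AE prefix."""
+     highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
+     return f"extract answers: {highlighted}"
+
+ print(qg_input("a noviembre , que es también la estación lluviosa.", "noviembre"))
+ # generate question: a <hl> noviembre <hl> , que es también la estación lluviosa.
+ ```
+
+ The first call reproduces "Question Generation Example 2" from the widget above.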
+
+ ## Evaluation
+
+
+ - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json)
+
+ |            |   Score | Type    | Dataset                                                          |
+ |:-----------|--------:|:--------|:-----------------------------------------------------------------|
+ | BERTScore  |   79.36 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_1     |   22.05 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_2     |   14.55 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_3     |   10.34 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_4     |    7.61 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | METEOR     |   19.58 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | MoverScore |   56.05 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | ROUGE_L    |   20.95 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+
+
+ - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json)
+
+ |                                 |   Score | Type    | Dataset                                                          |
+ |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
+ | QAAlignedF1Score (BERTScore)    |   81.13 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | QAAlignedF1Score (MoverScore)   |   54.86 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | QAAlignedPrecision (BERTScore)  |   77.75 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | QAAlignedPrecision (MoverScore) |   52.82 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | QAAlignedRecall (BERTScore)     |   84.91 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | QAAlignedRecall (MoverScore)    |   57.16 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+
+
+ - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json)
+
+ |                  |   Score | Type    | Dataset                                                          |
+ |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
+ | AnswerExactMatch |   52.81 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | AnswerF1Score    |   70.95 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | BERTScore        |   86.7  | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_1           |   32.77 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_2           |   28.12 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_3           |   24.52 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | Bleu_4           |   21.5  | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | METEOR           |   40.42 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | MoverScore       |   77.96 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+ | ROUGE_L          |   46.66 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+
+
+ ## Training hyperparameters
+
+ The following hyperparameters were used during fine-tuning:
+ - dataset_path: lmqg/qg_esquad
+ - dataset_name: default
+ - input_types: ['paragraph_answer', 'paragraph_sentence']
+ - output_types: ['question', 'answer']
+ - prefix_types: ['qg', 'ae']
+ - model: facebook/mbart-large-cc25
+ - max_length: 512
+ - max_length_output: 32
+ - epoch: 5
+ - batch: 2
+ - lr: 0.0001
+ - fp16: False
+ - random_seed: 1
+ - gradient_accumulation_steps: 32
+ - label_smoothing: 0.15
+
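+ Since gradients are accumulated across steps, the effective batch size is the per-step batch multiplied by the accumulation steps; a quick sanity check:
+
+ ```python
+ # Effective batch size implied by the hyperparameters above:
+ # optimizer updates are applied once every `gradient_accumulation_steps` forward passes.
+ batch = 2
+ gradient_accumulation_steps = 32
+ effective_batch_size = batch * gradient_accumulation_steps
+ print(effective_batch_size)  # 64
+ ```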
+ The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-qg-ae/raw/main/trainer_config.json).
+
+ ## Citation
+ ```
+ @inproceedings{ushio-etal-2022-generative,
+     title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
+     author = "Ushio, Asahi and
+       Alva-Manchego, Fernando and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, U.A.E.",
+     publisher = "Association for Computational Linguistics",
+ }
+ ```
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "lmqg_output/mbart-large-cc25-esquad-qg-ae/best_model",
+ "_name_or_path": "facebook/mbart-large-cc25",
  "_num_labels": 3,
  "activation_dropout": 0.0,
  "activation_function": "gelu",
eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+ {"test": {"QAAlignedF1Score (BERTScore)": 0.8113429885499763, "QAAlignedRecall (BERTScore)": 0.8490986332006565, "QAAlignedPrecision (BERTScore)": 0.7774765527796343, "QAAlignedF1Score (MoverScore)": 0.5486070528416533, "QAAlignedRecall (MoverScore)": 0.5716380497431574, "QAAlignedPrecision (MoverScore)": 0.5281967718842797, "Bleu_1": 0.05822948032911997, "Bleu_2": 0.030091268134689136, "Bleu_3": 0.016054822743160824, "Bleu_4": 0.009378073332048256, "METEOR": 0.169396632910785, "ROUGE_L": 0.10237136968384121, "BERTScore": 0.6433429646678767, "MoverScore": 0.5106862438065741}, "validation": {"QAAlignedF1Score (BERTScore)": 0.8297825592760455, "QAAlignedRecall (BERTScore)": 0.8461027400473051, "QAAlignedPrecision (BERTScore)": 0.8145981613042003, "QAAlignedF1Score (MoverScore)": 0.5606883553486165, "QAAlignedRecall (MoverScore)": 0.5684969656485316, "QAAlignedPrecision (MoverScore)": 0.5532936368883525, "Bleu_1": 0.22129129619303353, "Bleu_2": 0.13407181659833434, "Bleu_3": 0.07877840089633212, "Bleu_4": 0.048665018566226716, "METEOR": 0.28865859809058936, "ROUGE_L": 0.2365157712095888, "BERTScore": 0.7437105527319767, "MoverScore": 0.5458806073741564}}
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+ {"validation": {"Bleu_1": 0.21703463719824068, "Bleu_2": 0.14310524711494194, "Bleu_3": 0.10159969709181843, "Bleu_4": 0.07499782081665739}, "test": {"Bleu_1": 0.21972999413030544, "Bleu_2": 0.1449307115267791, "Bleu_3": 0.10296752824677906, "Bleu_4": 0.07575316070543968}}
eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+ {"validation": {"Bleu_1": 0.30617145964233466, "Bleu_2": 0.26000517940288903, "Bleu_3": 0.22436539391024168, "Bleu_4": 0.19449520324491948, "METEOR": 0.38656333350292127, "ROUGE_L": 0.4559570506227383, "BERTScore": 0.8508820512916191, "MoverScore": 0.7624789600602996, "AnswerF1Score": 68.30308003699466, "AnswerExactMatch": 49.602649006622514}, "test": {"Bleu_1": 0.3277296522497768, "Bleu_2": 0.2812285983314079, "Bleu_3": 0.24523674248843141, "Bleu_4": 0.21496314234590622, "METEOR": 0.40423913314636195, "ROUGE_L": 0.46655240361122036, "BERTScore": 0.8669867067006588, "MoverScore": 0.7796323646195273, "AnswerF1Score": 70.94690814109444, "AnswerExactMatch": 52.80983916745506}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+ {"validation": {"Bleu_1": 0.22791824246802167, "Bleu_2": 0.15144751008737814, "Bleu_3": 0.10820345264848957, "Bleu_4": 0.08021516971733621, "METEOR": 0.19512943996949325, "ROUGE_L": 0.21135950977682252, "BERTScore": 0.7882248660478862, "MoverScore": 0.5568186122012768}, "test": {"Bleu_1": 0.22050787273920536, "Bleu_2": 0.1454958053450473, "Bleu_3": 0.10337125869644873, "Bleu_4": 0.07606421685925231, "METEOR": 0.1958011929681252, "ROUGE_L": 0.20948156819244979, "BERTScore": 0.7936116031225797, "MoverScore": 0.5604688093468514}}
eval/samples.test.hyp.paragraph.questions_answers.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cdffabe2b568203f998a1bfb24639bee7c212e20a11b6faa960944ff8c836369
- size 2444580125
+ oid sha256:853faff92f91f9017f192bb06cd45146edab225c07b3ef14e5e4fdc4ac13df44
+ size 2444587421
tokenizer_config.json CHANGED
@@ -12,7 +12,7 @@
    "single_word": false
  },
  "model_max_length": 1024,
- "name_or_path": "lmqg_output/mbart-large-cc25-esquad-qg-ae/best_model",
+ "name_or_path": "facebook/mbart-large-cc25",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "special_tokens_map_file": null,
trainer_config.json ADDED
@@ -0,0 +1 @@
+ {"dataset_path": "lmqg/qg_esquad", "dataset_name": "default", "input_types": ["paragraph_answer", "paragraph_sentence"], "output_types": ["question", "answer"], "prefix_types": ["qg", "ae"], "model": "facebook/mbart-large-cc25", "max_length": 512, "max_length_output": 32, "epoch": 5, "batch": 2, "lr": 0.0001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 32, "label_smoothing": 0.15}