asahi417 committed
Commit d98dcfe
1 Parent(s): 75df58c

model update
README.md ADDED
@@ -0,0 +1,160 @@
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: es
datasets:
- lmqg/qg_esquad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India."
  example_title: "Question Generation Example 1"
- text: "generate question: a <hl> noviembre <hl> , que es también la estación lluviosa."
  example_title: "Question Generation Example 2"
- text: "generate question: como <hl> el gobierno de Abbott <hl> que asumió el cargo el 18 de septiembre de 2013."
  example_title: "Question Generation Example 3"
- text: "<hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como."
  example_title: "Answer Extraction Example 1"
- text: "<hl> Los estudiosos y los historiadores están divididos en cuanto a qué evento señala el final de la era helenística. <hl> El período helenístico se puede ver que termina con la conquista final del corazón griego por Roma en 146 a. C. tras la guerra aquea, con la derrota final del reino ptolemaico en la batalla de Actium en 31 a. Helenístico se distingue de helénico en que el primero abarca toda la esfera de influencia griega antigua directa, mientras que el segundo se refiere a la propia Grecia."
  example_title: "Answer Extraction Example 2"
model-index:
- name: lmqg/mt5-base-esquad-multitask
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_esquad
      type: default
      args: default
    metrics:
    - name: BLEU4
      type: bleu4
      value: 0.09615312353679026
    - name: ROUGE-L
      type: rouge-l
      value: 0.248238665706148
    - name: METEOR
      type: meteor
      value: 0.23110894133264304
    - name: BERTScore
      type: bertscore
      value: 0.8396973498888792
    - name: MoverScore
      type: moverscore
      value: 0.5915394898094151
---

# Model Card of `lmqg/mt5-base-esquad-multitask`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on
[lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
The model is additionally fine-tuned on answer extraction, so it can produce question–answer pairs end to end.

Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)).

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```

### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** es
- **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language='es', model='lmqg/mt5-base-esquad-multitask')

# end-to-end prediction: answer extraction followed by question generation
question_answer = model.generate_qa("a noviembre , que es también la estación lluviosa.")
```

- With `transformers`
```python
from transformers import pipeline

# initialize model
pipe = pipeline("text2text-generation", 'lmqg/mt5-base-esquad-multitask')

# answer extraction (the target sentence is wrapped in <hl>)
answer = pipe('extract answers: <hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como.')

# question generation (the answer span is wrapped in <hl>)
question = pipe('generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.')
```
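Both prefixes imply the same simple input format: wrap the relevant span in `<hl>` tokens and prepend the task prefix. A minimal sketch of building such inputs by hand (the helper names here are illustrative, not part of `lmqg` or `transformers`):

```python
def qg_input(paragraph: str, answer: str) -> str:
    """Wrap the answer span in <hl> and add the question-generation prefix."""
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

def ae_input(paragraph: str, sentence: str) -> str:
    """Wrap the target sentence in <hl> and add the answer-extraction prefix."""
    highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
    return f"extract answers: {highlighted}"

print(qg_input("del Ministerio de Desarrollo Urbano , Gobierno de la India.",
               "Ministerio de Desarrollo Urbano"))
# -> generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.
```

Note that `str.replace(..., 1)` highlights only the first occurrence of the span, which is sufficient for these examples.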

## Evaluation Metrics

### Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
| [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | default | 0.096 | 0.248 | 0.231 | 0.84 | 0.592 | [link](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) |

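The linked metric file is plain JSON keyed by split (`validation` and `test`); the table rounds the test-set values. A minimal sketch of reading it, with the relevant excerpt inlined rather than fetched over the network:

```python
import json

# excerpt of eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json
raw = ('{"test": {"Bleu_4": 0.09615312353679026, "METEOR": 0.23110894133264304,'
       ' "ROUGE_L": 0.248238665706148, "BERTScore": 0.8396973498888792,'
       ' "MoverScore": 0.5915394898094151}}')

test_scores = json.loads(raw)["test"]
rounded = {k: round(v, 3) for k, v in test_scores.items()}
print(rounded["Bleu_4"], rounded["ROUGE_L"])  # -> 0.096 0.248
```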

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_esquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 32
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/trainer_config.json).

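That config file is the JSON mirror of the list above. A minimal sketch of inspecting it, with the contents of `trainer_config.json` from this commit inlined instead of downloaded:

```python
import json

# contents of trainer_config.json added in this commit
config = json.loads('''
{"dataset_path": "lmqg/qg_esquad", "dataset_name": "default",
 "input_types": ["paragraph_answer", "paragraph_sentence"],
 "output_types": ["question", "answer"], "prefix_types": ["qg", "ae"],
 "model": "google/mt5-base", "max_length": 512, "max_length_output": 32,
 "epoch": 7, "batch": 32, "lr": 0.001, "fp16": false, "random_seed": 1,
 "gradient_accumulation_steps": 2, "label_smoothing": 0.15}
''')

# effective batch size is the per-step batch times gradient accumulation
print(config["batch"] * config["gradient_accumulation_steps"])  # -> 64
```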
## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "lmqg_output/mt5_base_esquad_answer/best_model",
+  "_name_or_path": "lmqg_output/mt5_base_esquad_answer/model_atsrpt/epoch_5",
   "add_prefix": true,
   "architectures": [
     "MT5ForConditionalGeneration"
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+{"validation": {"Bleu_1": 0.2532125336287367, "Bleu_2": 0.17098883468753964, "Bleu_3": 0.12341337230928245, "Bleu_4": 0.09162746233955725}, "test": {"Bleu_1": 0.2577725464729259, "Bleu_2": 0.17608896257516604, "Bleu_3": 0.12795635015279178, "Bleu_4": 0.09582653612445645}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json ADDED
@@ -0,0 +1 @@
+{"validation": {"Bleu_1": 0.2646074307746209, "Bleu_2": 0.18007166800250698, "Bleu_3": 0.13073334455597357, "Bleu_4": 0.09746127386059977, "METEOR": 0.2274641929237945, "ROUGE_L": 0.2493624354779036, "BERTScore": 0.8345057822180736, "MoverScore": 0.5847178175669429}, "test": {"Bleu_1": 0.258781903409322, "Bleu_2": 0.1767232204952623, "Bleu_3": 0.12839979353931685, "Bleu_4": 0.09615312353679026, "METEOR": 0.23110894133264304, "ROUGE_L": 0.248238665706148, "BERTScore": 0.8396973498888792, "MoverScore": 0.5915394898094151}}
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:56bf45f0b5f24d24849fab5b28d8df4e52569b3edab0c9a7610115d7b9e67907
-size 2329628621
+oid sha256:6700c6c1ac210d2501c51ba400de8390e5839d39aa1dbcc5371b0a158ebce211
+size 2329632589
tokenizer_config.json CHANGED
@@ -2,7 +2,7 @@
   "additional_special_tokens": null,
   "eos_token": "</s>",
   "extra_ids": 0,
-  "name_or_path": "lmqg_output/mt5_base_esquad_answer/best_model",
+  "name_or_path": "lmqg_output/mt5_base_esquad_answer/model_atsrpt/epoch_5",
   "pad_token": "<pad>",
   "sp_model_kwargs": {},
   "special_tokens_map_file": "/home/patrick/.cache/torch/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276",
trainer_config.json ADDED
@@ -0,0 +1 @@
+{"dataset_path": "lmqg/qg_esquad", "dataset_name": "default", "input_types": ["paragraph_answer", "paragraph_sentence"], "output_types": ["question", "answer"], "prefix_types": ["qg", "ae"], "model": "google/mt5-base", "max_length": 512, "max_length_output": 32, "epoch": 7, "batch": 32, "lr": 0.001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 2, "label_smoothing": 0.15}