asahi417 committed cb578da (parent: 0a83bf7): commit files to HF hub
README.md ADDED
@@ -0,0 +1,142 @@
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: zh
datasets:
- lmqg/qg_zhquad
pipeline_tag: text2text-generation
tags:
- answer extraction
widget:
- text: "南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
  example_title: "Answer Extraction Example 1"
model-index:
- name: lmqg/mt5-base-zhquad-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_zhquad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 79.86
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 94.53
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 68.41
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 99.48
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 97.97
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 92.68
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 92.62
---

# Model Card of `lmqg/mt5-base-zhquad-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** zh
- **Training data:** [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-base-zhquad-ae")

# model prediction
answers = model.generate_a("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-ae")
output = pipe("南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
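
Both usage paths rely on `<hl>` tokens marking the sentence to extract answers from: the `lmqg` wrapper inserts them per sentence internally, while the raw `transformers` pipeline expects them already present in the input. A minimal sketch of building such an input; the `highlight_sentence` helper is illustrative, not part of either library:

```python
def highlight_sentence(paragraph: str, sentence: str, hl_token: str = "<hl>") -> str:
    """Wrap the first occurrence of the target sentence with highlight tokens."""
    if sentence not in paragraph:
        raise ValueError("sentence must appear verbatim in paragraph")
    return paragraph.replace(sentence, f"{hl_token} {sentence} {hl_token}", 1)

paragraph = "ABC. DEF. GHI."
print(highlight_sentence(paragraph, "DEF."))
# ABC. <hl> DEF. <hl> GHI.
```

The highlighted string can then be passed directly to the `pipe(...)` call shown above.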

## Evaluation

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json)

| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 92.62 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| AnswerF1Score | 92.68 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| BERTScore | 99.48 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1 | 90.95 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2 | 87.44 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3 | 83.75 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4 | 79.86 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR | 68.41 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore | 97.97 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L | 94.53 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_zhquad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 18
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-zhquad-ae/raw/main/trainer_config.json).
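
With gradient accumulation, the optimizer steps once every `gradient_accumulation_steps` batches, so the effective batch size is the product of the two values above:

```python
batch = 8
gradient_accumulation_steps = 8

# effective number of examples per optimizer update
effective_batch_size = batch * gradient_accumulation_steps
print(effective_batch_size)  # 64
```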

## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.8821431499548708, "Bleu_2": 0.842715525856889, "Bleu_3": 0.8031818661660554, "Bleu_4": 0.7626687095604595, "METEOR": 0.663477475311334, "ROUGE_L": 0.926675722073625, "BERTScore": 0.9885125362710967, "MoverScore": 0.9655442642159453, "AnswerF1Score": 89.34422627720345, "AnswerExactMatch": 89.23020883924235}, "test": {"Bleu_1": 0.9094807255263107, "Bleu_2": 0.8743577806268511, "Bleu_3": 0.8374906862795883, "Bleu_4": 0.798584672727377, "METEOR": 0.6840866066914842, "ROUGE_L": 0.9453220529418422, "BERTScore": 0.9947916523505033, "MoverScore": 0.9796722865505375, "AnswerF1Score": 92.67735724692986, "AnswerExactMatch": 92.61777561923263}}
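
The scores in the model card are the test-split values from this file: similarity metrics (BLEU, ROUGE-L, METEOR, BERTScore, MoverScore) are stored as fractions and scaled to percentages, while AnswerF1Score and AnswerExactMatch are already percentages. A sketch of that conversion, using an inline excerpt of the test-split values so the snippet is self-contained:

```python
import json

# excerpt of the "test" entry from the metric file above
raw = '{"test": {"Bleu_4": 0.798584672727377, "ROUGE_L": 0.9453220529418422, "AnswerExactMatch": 92.61777561923263}}'
metrics = json.loads(raw)["test"]

print(round(metrics["Bleu_4"] * 100, 2))   # 79.86
print(round(metrics["ROUGE_L"] * 100, 2))  # 94.53
print(round(metrics["AnswerExactMatch"], 2))  # 92.62
```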
eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
trainer_config.json ADDED
@@ -0,0 +1 @@
{"dataset_path": "lmqg/qg_zhquad", "dataset_name": "default", "input_types": ["paragraph_sentence"], "output_types": ["answer"], "prefix_types": null, "model": "google/mt5-base", "max_length": 512, "max_length_output": 32, "epoch": 18, "batch": 8, "lr": 0.0001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 8, "label_smoothing": 0.15}