asahi417 committed
Commit 2e03a2c
1 Parent(s): 9a78a95

model update
README.md ADDED
@@ -0,0 +1,144 @@
+
+ ---
+ license: cc-by-4.0
+ metrics:
+ - bleu4
+ - meteor
+ - rouge-l
+ - bertscore
+ - moverscore
+ language: en
+ datasets:
+ - lmqg/qg_squad
+ pipeline_tag: text2text-generation
+ tags:
+ - answer extraction
+ widget:
+ - text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress."
+   example_title: "Answer Extraction Example 1"
+ - text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>"
+   example_title: "Answer Extraction Example 2"
+ model-index:
+ - name: lmqg/flan-t5-base-squad-ae
+   results:
+   - task:
+       name: Text2text Generation
+       type: text2text-generation
+     dataset:
+       name: lmqg/qg_squad
+       type: default
+       args: default
+     metrics:
+     - name: BLEU4 (Answer Extraction)
+       type: bleu4_answer_extraction
+       value: 44.15
+     - name: ROUGE-L (Answer Extraction)
+       type: rouge_l_answer_extraction
+       value: 68.88
+     - name: METEOR (Answer Extraction)
+       type: meteor_answer_extraction
+       value: 43.3
+     - name: BERTScore (Answer Extraction)
+       type: bertscore_answer_extraction
+       value: 91.56
+     - name: MoverScore (Answer Extraction)
+       type: moverscore_answer_extraction
+       value: 81.79
+     - name: AnswerF1Score (Answer Extraction)
+       type: answer_f1_score__answer_extraction
+       value: 69.41
+     - name: AnswerExactMatch (Answer Extraction)
+       type: answer_exact_match_answer_extraction
+       value: 58.16
+ ---
+
+ # Model Card of `lmqg/flan-t5-base-squad-ae`
+ This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+
+ ### Overview
+ - **Language model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
+ - **Language:** en
+ - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
+ - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
+ - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+ - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
+
+ ### Usage
+ - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
+ ```python
+ from lmqg import TransformersQG
+
+ # initialize model
+ model = TransformersQG(language="en", model="lmqg/flan-t5-base-squad-ae")
+
+ # model prediction
+ answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes")
+ ```
+
+ - With `transformers`
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text2text-generation", "lmqg/flan-t5-base-squad-ae")
+ output = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
+ ```
+
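+ The `extract answers:` input wraps the sentence to extract answers from with the `<hl>` highlight token inside its paragraph, as in the widget examples above. Below is a minimal sketch of building such an input by hand; the paragraph and sentence are illustrative placeholders, not taken from the repository.
+ ```python
+ from transformers import pipeline
+
+ def build_ae_input(paragraph: str, sentence: str) -> str:
+     # Highlight the target sentence with <hl> and prepend the task prefix.
+     highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>")
+     return "extract answers: " + highlighted
+
+ pipe = pipeline("text2text-generation", "lmqg/flan-t5-base-squad-ae")
+ paragraph = ("William Turner was an English painter who specialised in watercolour "
+              "landscapes. He is often known as William Turner of Oxford.")
+ sentence = "He is often known as William Turner of Oxford."
+ print(pipe(build_ae_input(paragraph, sentence)))
+ ```
+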
+ ## Evaluation
+
+ - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-base-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
+
+ | Metric           | Score | Type    | Dataset                                                        |
+ |:-----------------|------:|:--------|:---------------------------------------------------------------|
+ | AnswerExactMatch | 58.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | AnswerF1Score    | 69.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | BERTScore        | 91.56 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | Bleu_1           | 56.8  | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | Bleu_2           | 52.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | Bleu_3           | 48.02 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | Bleu_4           | 44.15 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | METEOR           | 43.3  | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | MoverScore       | 81.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+ | ROUGE_L          | 68.88 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+
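+ The raw metric file stores both validation and test scores in a single JSON object (BLEU/METEOR/ROUGE/BERTScore/MoverScore on a 0-1 scale, the answer F1 and exact-match scores on a 0-100 scale). A minimal sketch for fetching and printing the test-split scores, assuming only `huggingface_hub` is installed:
+ ```python
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Download the raw metric file from the model repository.
+ path = hf_hub_download(
+     repo_id="lmqg/flan-t5-base-squad-ae",
+     filename="eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json",
+ )
+ with open(path) as f:
+     metrics = json.load(f)
+
+ for name, score in metrics["test"].items():
+     print(f"{name}: {score}")
+ ```
+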
+ ## Training hyperparameters
+
+ The following hyperparameters were used during fine-tuning:
+ - dataset_path: lmqg/qg_squad
+ - dataset_name: default
+ - input_types: ['paragraph_sentence']
+ - output_types: ['answer']
+ - prefix_types: ['ae']
+ - model: google/flan-t5-base
+ - max_length: 512
+ - max_length_output: 32
+ - epoch: 8
+ - batch: 16
+ - lr: 0.0001
+ - fp16: False
+ - random_seed: 1
+ - gradient_accumulation_steps: 4
+ - label_smoothing: 0.15
+
+ The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-base-squad-ae/raw/main/trainer_config.json).
+
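+ The same settings are mirrored verbatim in `trainer_config.json` at the repository root (included later in this commit). A minimal sketch for loading them programmatically, assuming `huggingface_hub` is installed:
+ ```python
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Fetch the fine-tuning configuration from the model repository.
+ path = hf_hub_download(repo_id="lmqg/flan-t5-base-squad-ae", filename="trainer_config.json")
+ with open(path) as f:
+     config = json.load(f)
+
+ # batch 16 with 4 gradient-accumulation steps gives an effective batch size of 64
+ print(config["batch"] * config["gradient_accumulation_steps"])
+ ```
+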
+ ## Citation
+ ```
+ @inproceedings{ushio-etal-2022-generative,
+     title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
+     author = "Ushio, Asahi  and
+       Alva-Manchego, Fernando  and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, U.A.E.",
+     publisher = "Association for Computational Linguistics",
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "<hl>": 32100
+ }
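
The single added token maps `<hl>` to id 32100, one past the base T5 vocabulary, which is why `config.json` below reports `vocab_size: 32101`. A quick sanity check (a sketch, assuming `transformers` is installed):
```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with this repository.
tok = AutoTokenizer.from_pretrained("lmqg/flan-t5-base-squad-ae")

# The highlight token registered in added_tokens.json should resolve to 32100.
print(tok.convert_tokens_to_ids("<hl>"))  # expected: 32100
```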
config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "_name_or_path": "lmqg_output/flan-t5-base-squad-ae/model_eszyci/epoch_5",
+   "add_prefix": true,
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "d_ff": 2048,
+   "d_kv": 64,
+   "d_model": 768,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "gelu_new",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 12,
+   "num_heads": 12,
+   "num_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "task_specific_params": {
+     "summarization": {
+       "early_stopping": true,
+       "length_penalty": 2.0,
+       "max_length": 200,
+       "min_length": 30,
+       "no_repeat_ngram_size": 3,
+       "num_beams": 4,
+       "prefix": "summarize: "
+     },
+     "translation_en_to_de": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to German: "
+     },
+     "translation_en_to_fr": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to French: "
+     },
+     "translation_en_to_ro": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to Romanian: "
+     }
+   },
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.21.2",
+   "use_cache": true,
+   "vocab_size": 32101
+ }
eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json ADDED
@@ -0,0 +1 @@
+ {"validation": {"Bleu_1": 0.5107362944324688, "Bleu_2": 0.46847351711115787, "Bleu_3": 0.4267252954524731, "Bleu_4": 0.3895523369404056, "METEOR": 0.40637457015242434, "ROUGE_L": 0.645770150227703, "BERTScore": 0.9119154193374269, "MoverScore": 0.7860523724146663, "AnswerF1Score": 65.20614496583396, "AnswerExactMatch": 51.19205298013245}, "test": {"Bleu_1": 0.5679610571472901, "Bleu_2": 0.5239227721702817, "Bleu_3": 0.48018117612667777, "Bleu_4": 0.4415458717756335, "METEOR": 0.432999656475851, "ROUGE_L": 0.6887907527195755, "BERTScore": 0.9156043644534364, "MoverScore": 0.8178721482165371, "AnswerF1Score": 69.41163918158276, "AnswerExactMatch": 58.16283573292919}}
eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_squad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_squad.default.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32c395537c337adae0ee252e4a58be06ee4dcf86cd573880410a63c764b10940
+ size 990242997
special_tokens_map.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "additional_special_tokens": [
+     "<hl>"
+   ],
+   "eos_token": "</s>",
+   "pad_token": "<pad>",
+   "unk_token": "<unk>"
+ }
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
+ size 791656
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,113 @@
+ {
+   "additional_special_tokens": [
+     "<extra_id_0>",
+     "<extra_id_1>",
+     "<extra_id_2>",
+     "<extra_id_3>",
+     "<extra_id_4>",
+     "<extra_id_5>",
+     "<extra_id_6>",
+     "<extra_id_7>",
+     "<extra_id_8>",
+     "<extra_id_9>",
+     "<extra_id_10>",
+     "<extra_id_11>",
+     "<extra_id_12>",
+     "<extra_id_13>",
+     "<extra_id_14>",
+     "<extra_id_15>",
+     "<extra_id_16>",
+     "<extra_id_17>",
+     "<extra_id_18>",
+     "<extra_id_19>",
+     "<extra_id_20>",
+     "<extra_id_21>",
+     "<extra_id_22>",
+     "<extra_id_23>",
+     "<extra_id_24>",
+     "<extra_id_25>",
+     "<extra_id_26>",
+     "<extra_id_27>",
+     "<extra_id_28>",
+     "<extra_id_29>",
+     "<extra_id_30>",
+     "<extra_id_31>",
+     "<extra_id_32>",
+     "<extra_id_33>",
+     "<extra_id_34>",
+     "<extra_id_35>",
+     "<extra_id_36>",
+     "<extra_id_37>",
+     "<extra_id_38>",
+     "<extra_id_39>",
+     "<extra_id_40>",
+     "<extra_id_41>",
+     "<extra_id_42>",
+     "<extra_id_43>",
+     "<extra_id_44>",
+     "<extra_id_45>",
+     "<extra_id_46>",
+     "<extra_id_47>",
+     "<extra_id_48>",
+     "<extra_id_49>",
+     "<extra_id_50>",
+     "<extra_id_51>",
+     "<extra_id_52>",
+     "<extra_id_53>",
+     "<extra_id_54>",
+     "<extra_id_55>",
+     "<extra_id_56>",
+     "<extra_id_57>",
+     "<extra_id_58>",
+     "<extra_id_59>",
+     "<extra_id_60>",
+     "<extra_id_61>",
+     "<extra_id_62>",
+     "<extra_id_63>",
+     "<extra_id_64>",
+     "<extra_id_65>",
+     "<extra_id_66>",
+     "<extra_id_67>",
+     "<extra_id_68>",
+     "<extra_id_69>",
+     "<extra_id_70>",
+     "<extra_id_71>",
+     "<extra_id_72>",
+     "<extra_id_73>",
+     "<extra_id_74>",
+     "<extra_id_75>",
+     "<extra_id_76>",
+     "<extra_id_77>",
+     "<extra_id_78>",
+     "<extra_id_79>",
+     "<extra_id_80>",
+     "<extra_id_81>",
+     "<extra_id_82>",
+     "<extra_id_83>",
+     "<extra_id_84>",
+     "<extra_id_85>",
+     "<extra_id_86>",
+     "<extra_id_87>",
+     "<extra_id_88>",
+     "<extra_id_89>",
+     "<extra_id_90>",
+     "<extra_id_91>",
+     "<extra_id_92>",
+     "<extra_id_93>",
+     "<extra_id_94>",
+     "<extra_id_95>",
+     "<extra_id_96>",
+     "<extra_id_97>",
+     "<extra_id_98>",
+     "<extra_id_99>"
+   ],
+   "eos_token": "</s>",
+   "extra_ids": 100,
+   "model_max_length": 512,
+   "name_or_path": "lmqg_output/flan-t5-base-squad-ae/model_eszyci/epoch_5",
+   "pad_token": "<pad>",
+   "sp_model_kwargs": {},
+   "special_tokens_map_file": "/home/younes_huggingface_co/.cache/huggingface/hub/models--google--t5-v1_1-base/snapshots/650d7745bf1e502d6949b22cc19155cd656d3d4e/special_tokens_map.json",
+   "tokenizer_class": "T5Tokenizer",
+   "unk_token": "<unk>"
+ }
trainer_config.json ADDED
@@ -0,0 +1 @@
+ {"dataset_path": "lmqg/qg_squad", "dataset_name": "default", "input_types": ["paragraph_sentence"], "output_types": ["answer"], "prefix_types": ["ae"], "model": "google/flan-t5-base", "max_length": 512, "max_length_output": 32, "epoch": 8, "batch": 16, "lr": 0.0001, "fp16": false, "random_seed": 1, "gradient_accumulation_steps": 4, "label_smoothing": 0.15}