---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_subjqa
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
  example_title: "Question Generation Example 3"
model-index:
- name: lmqg/t5-large-subjqa-electronics
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_subjqa
      type: electronics
      args: electronics
    metrics:
    - name: BLEU4
      type: bleu4
      value: 0.045708417859748086
    - name: ROUGE-L
      type: rouge-l
      value: 0.30546950426152664
    - name: METEOR
      type: meteor
      value: 0.2756037660672747
    - name: BERTScore
      type: bertscore
      value: 0.9427364025296283
    - name: MoverScore
      type: moverscore
      value: 0.6879662195179278
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_squad
      type: default
      args: default
    metrics:
    - name: BLEU4
      type: bleu4
      value: 0.20125600683609723
    - name: ROUGE-L
      type: rouge-l
      value: 0.47296230639660625
    - name: METEOR
      type: meteor
      value: 0.21608768222263117
    - name: BERTScore
      type: bertscore
      value: 0.9052001435177482
    - name: MoverScore
      type: moverscore
      value: 0.6200186297442831
---

# Language Models Fine-tuning on Question Generation: `lmqg/t5-large-subjqa-electronics`
This model is a fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for the question generation task on [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset name: `electronics`). That is, training continued from the [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) checkpoint rather than from the base T5 model.

### Overview
- **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (electronics)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [TBA](TBA)
### Usage
```python
from transformers import pipeline

model_path = 'lmqg/t5-large-subjqa-electronics'
pipe = pipeline("text2text-generation", model_path)

# generate a question for the answer span highlighted with <hl> markers
input_text = 'generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
question = pipe(input_text)
print(question)
```
99
+
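The `<hl>` markers flag the answer span inside the paragraph. A small helper can build such inputs automatically; this is a hypothetical convenience function, not part of the lmqg codebase, and it assumes the answer occurs verbatim in the paragraph:

```python
def highlight_answer(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in <hl> markers and prepend the task prefix."""
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return "generate question: " + highlighted

text = ("Beyonce further expanded her acting career, starring as blues singer "
        "Etta James in the 2008 musical biopic, Cadillac Records.")
print(highlight_answer(text, "Etta James"))
```

The resulting string matches the second widget example above and can be passed directly to the pipeline.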
## Evaluation Metrics

### In-domain Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 0.0457 | 0.3055 | 0.2756 | 0.9427 | 0.6880 | [link](https://huggingface.co/lmqg/t5-large-subjqa-electronics/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |

### Out-of-domain Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
| [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | default | 0.2013 | 0.4730 | 0.2161 | 0.9052 | 0.6200 | [link](https://huggingface.co/lmqg/t5-large-subjqa-electronics/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) |
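Comparing the two evaluations directly: the n-gram overlap metrics (BLEU4, ROUGE-L) are higher on out-of-domain SQuAD, while the semantic metrics (METEOR, BERTScore, MoverScore) are higher in-domain on SubjQA. A quick sketch using the values above, rounded to four decimals:

```python
# metric values copied from the two evaluations above, rounded to 4 decimals
in_domain = {"BLEU4": 0.0457, "ROUGE-L": 0.3055, "METEOR": 0.2756,
             "BERTScore": 0.9427, "MoverScore": 0.6880}
out_of_domain = {"BLEU4": 0.2013, "ROUGE-L": 0.4730, "METEOR": 0.2161,
                 "BERTScore": 0.9052, "MoverScore": 0.6200}

# positive delta = better in-domain (SubjQA), negative = better on SQuAD
for name, in_value in in_domain.items():
    delta = in_value - out_of_domain[name]
    print(f"{name:10s} SubjQA={in_value:.4f}  SQuAD={out_of_domain[name]:.4f}  delta={delta:+.4f}")
```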

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: electronics
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: lmqg/t5-large-squad
- max_length: 512
- max_length_output: 32
- epoch: 3
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.0

The full configuration can be found in the [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-electronics/raw/main/trainer_config.json).
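Note that `batch` and `gradient_accumulation_steps` combine: gradients from 8 micro-batches of 16 examples are accumulated before each optimizer step, so the effective batch size implied by the configuration above is:

```python
batch = 16                       # per-step micro-batch size (from the config above)
gradient_accumulation_steps = 8  # optimizer steps once per 8 micro-batches
effective_batch_size = batch * gradient_accumulation_steps
print(effective_batch_size)  # 128
```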

## Citation
TBA