asahi417 committed on
Commit 67fabd3
Parent: 07dbbe8

commit files to HF hub

Files changed (1)
  1. README.md +66 -6
README.md CHANGED
@@ -46,6 +46,42 @@ model-index:
  - name: MoverScore (Question Generation)
  type: moverscore_question_generation
  value: 57.96
+ - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 81.27
+ - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 81.25
+ - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 81.29
+ - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 55.61
+ - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 55.6
+ - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
+ type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
+ value: 55.61
+ - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
+ value: 75.55
+ - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
+ value: 77.16
+ - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
+ value: 74.04
+ - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
+ value: 51.75
+ - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
+ value: 52.52
+ - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
+ type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
+ value: 51.03
  ---

  # Model Card of `lmqg/mbart-large-cc25-frquad-qg`
@@ -99,24 +135,48 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
  | ROUGE_L | 30.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |


+ - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json)
+
+ | | Score | Type | Dataset |
+ |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
+ | QAAlignedF1Score (BERTScore) | 81.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedF1Score (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedPrecision (BERTScore) | 81.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedPrecision (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedRecall (BERTScore) | 81.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedRecall (MoverScore) | 55.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+
+
+ - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-frquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mbart-large-cc25-frquad-ae.json)
+
+ | | Score | Type | Dataset |
+ |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
+ | QAAlignedF1Score (BERTScore) | 75.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedF1Score (MoverScore) | 51.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedPrecision (BERTScore) | 74.04 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedPrecision (MoverScore) | 51.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedRecall (BERTScore) | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | QAAlignedRecall (MoverScore) | 52.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+
+

  ## Training hyperparameters

  The following hyperparameters were used during fine-tuning:
  - dataset_path: lmqg/qg_frquad
  - dataset_name: default
- - input_types: paragraph_answer
- - output_types: question
+ - input_types: ['paragraph_answer']
+ - output_types: ['question']
  - prefix_types: None
  - model: facebook/mbart-large-cc25
  - max_length: 512
  - max_length_output: 32
- - epoch: 7
- - batch: 16
- - lr: 0.0002
+ - epoch: 8
+ - batch: 4
+ - lr: 0.001
  - fp16: False
  - random_seed: 1
- - gradient_accumulation_steps: 4
+ - gradient_accumulation_steps: 16
  - label_smoothing: 0.15

  The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/trainer_config.json).
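
The two evaluation settings added in this diff ("Reference Answer" and "Pipeline Approach") can be reproduced with the `lmqg` toolkit. The sketch below is a minimal illustration and not part of this commit; the `TransformersQG` interface, its `model_ae` argument, and the French placeholder paragraph and answer are assumptions drawn from the lmqg documentation rather than from this README diff.

```python
# Minimal sketch (not part of this commit): reproducing the two evaluation settings
# above with the lmqg toolkit. The French paragraph and answer are placeholder
# examples, not taken from FRQuAD, and the exact lmqg API is assumed here.
from lmqg import TransformersQG

# "Reference Answer" setting: questions are generated from the gold answers.
qg = TransformersQG(model="lmqg/mbart-large-cc25-frquad-qg", language="fr")
questions = qg.generate_q(
    list_context=["La tour Eiffel a été construite par Gustave Eiffel pour l'Exposition universelle de 1889."],
    list_answer=["Gustave Eiffel"],
)
print(questions)

# "Pipeline Approach" setting: answers are first produced by the companion
# answer-extraction model lmqg/mbart-large-cc25-frquad-ae, then questions are
# generated for those answers.
qag = TransformersQG(
    model="lmqg/mbart-large-cc25-frquad-qg",
    model_ae="lmqg/mbart-large-cc25-frquad-ae",
    language="fr",
)
question_answer_pairs = qag.generate_qa(
    "La tour Eiffel a été construite par Gustave Eiffel pour l'Exposition universelle de 1889."
)
print(question_answer_pairs)
```

Roughly, the QAAligned scores in the tables above align the generated question and answer pairs with the gold pairs and score the matches with BERTScore or MoverScore as the underlying similarity, which is why each metric appears in both flavours.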