asahi417 committed
Commit 179ec20
Parent: 6c45d30

model update

Files changed (1)
  1. README.md +6 -52
README.md CHANGED
@@ -46,21 +46,6 @@ model-index:
     - name: MoverScore (Question Generation)
       type: moverscore_question_generation
       value: 55.88
-    - name: BLEU4 (Question & Answer Generation (with Gold Answer))
-      type: bleu4_question_answer_generation_with_gold_answer
-      value: 0.09
-    - name: ROUGE-L (Question & Answer Generation (with Gold Answer))
-      type: rouge_l_question_answer_generation_with_gold_answer
-      value: 16.18
-    - name: METEOR (Question & Answer Generation (with Gold Answer))
-      type: meteor_question_answer_generation_with_gold_answer
-      value: 19.96
-    - name: BERTScore (Question & Answer Generation (with Gold Answer))
-      type: bertscore_question_answer_generation_with_gold_answer
-      value: 74.4
-    - name: MoverScore (Question & Answer Generation (with Gold Answer))
-      type: moverscore_question_answer_generation_with_gold_answer
-      value: 52.95
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
       value: 90.66
@@ -79,21 +64,6 @@ model-index:
     - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
       type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
       value: 65.37
-    - name: BLEU4 (Question & Answer Generation)
-      type: bleu4_question_answer_generation
-      value: 0.0
-    - name: ROUGE-L (Question & Answer Generation)
-      type: rouge_l_question_answer_generation
-      value: 0.64
-    - name: METEOR (Question & Answer Generation)
-      type: meteor_question_answer_generation
-      value: 0.0
-    - name: BERTScore (Question & Answer Generation)
-      type: bertscore_question_answer_generation
-      value: 0.0
-    - name: MoverScore (Question & Answer Generation)
-      type: moverscore_question_answer_generation
-      value: 42.88
     - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
       type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
       value: 0.0
@@ -169,40 +139,24 @@ output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereins
 
 | | Score | Type | Dataset |
 |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore | 74.4 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_1 | 14.89 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_2 | 6.69 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_3 | 0.64 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_4 | 0.09 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| METEOR | 19.96 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| MoverScore | 52.95 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedF1Score (BERTScore) | 90.66 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedF1Score (MoverScore) | 65.36 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedPrecision (BERTScore) | 90.64 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedPrecision (MoverScore) | 65.37 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedRecall (BERTScore) | 90.69 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 | QAAlignedRecall (MoverScore) | 65.36 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| ROUGE_L | 16.18 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 
 
 - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-dequad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-dequad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-dequad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_dequad.default.lmqg_mbart-large-cc25-dequad-ae.json)
 
 | | Score | Type | Dataset |
 |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_1 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_2 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_3 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| Bleu_4 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| METEOR | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| MoverScore | 42.88 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedF1Score (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedF1Score (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedPrecision (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedPrecision (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedRecall (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| QAAlignedRecall (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
-| ROUGE_L | 0.64 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedF1Score (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedF1Score (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedPrecision (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedPrecision (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedRecall (BERTScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
+| QAAlignedRecall (MoverScore) | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
 
 
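The pipeline-approach metrics in the diff above are computed on question–answer pairs where [`lmqg/mbart-large-cc25-dequad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-dequad-ae) extracts the answers and this model (`lmqg/mbart-large-cc25-dequad-qg`) generates the questions. A minimal sketch of that two-stage setup with the `lmqg` library is given below for reference; the `TransformersQG` keyword arguments (`language`, `model`, `model_ae`) follow other lmqg model cards and are assumptions here, not part of this commit.

```python
# Minimal sketch of the two-stage QAG pipeline evaluated above, assuming the
# lmqg TransformersQG interface; the `language` / `model` / `model_ae` keyword
# arguments are taken from other lmqg model cards and are assumptions here.
from lmqg import TransformersQG

model = TransformersQG(
    language="de",                                # German (deQuAD data)
    model="lmqg/mbart-large-cc25-dequad-qg",      # question generation model
    model_ae="lmqg/mbart-large-cc25-dequad-ae",   # answer extraction model
)

# Replace with any German paragraph to use as the input context.
context = "..."

# generate_qa first extracts candidate answers from the context, then generates
# one question per extracted answer, returning (question, answer) pairs.
question_answer_pairs = model.generate_qa(context)
print(question_answer_pairs)
```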
 
 