asahi417 committed
Commit b0c1016
1 Parent(s): 680da38

model update

Files changed (1):
  1. README.md +53 -0
README.md CHANGED
@@ -79,6 +79,39 @@ model-index:
   - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
     type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
     value: 71.13
+  - name: BLEU4 (Question & Answer Generation)
+    type: bleu4_question_answer_generation
+    value: 3.95
+  - name: ROUGE-L (Question & Answer Generation)
+    type: rouge_l_question_answer_generation
+    value: 25.63
+  - name: METEOR (Question & Answer Generation)
+    type: meteor_question_answer_generation
+    value: 25.64
+  - name: BERTScore (Question & Answer Generation)
+    type: bertscore_question_answer_generation
+    value: 90.91
+  - name: MoverScore (Question & Answer Generation)
+    type: moverscore_question_answer_generation
+    value: 61.98
+  - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
+    value: 93.23
+  - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
+    value: 93.35
+  - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
+    value: 93.13
+  - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
+    value: 64.76
+  - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
+    value: 64.63
+  - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
+    type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
+    value: 64.98
   - task:
       name: Text2text Generation
       type: text2text-generation
@@ -382,6 +415,26 @@ output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as
 | ROUGE_L | 42.37 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
 
 
+- ***Metrics (Question & Answer Generation, Pipeline Approach)***: Each question is generated from the answer extracted by [`lmqg/bart-large-squad-ae`](https://huggingface.co/lmqg/bart-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_bart-large-squad-ae.json)
+
+|                                 |   Score | Type    | Dataset                                                        |
+|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
+| BERTScore                       |   90.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| Bleu_1                          |   26.04 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| Bleu_2                          |   14.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| Bleu_3                          |    7.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| Bleu_4                          |    3.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| METEOR                          |   25.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| MoverScore                      |   61.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedF1Score (BERTScore)    |   93.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedF1Score (MoverScore)   |   64.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedPrecision (BERTScore)  |   93.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedPrecision (MoverScore) |   64.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedRecall (BERTScore)     |   93.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| QAAlignedRecall (MoverScore)    |   64.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+| ROUGE_L                         |   25.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
+
+
 - ***Metrics (Question Generation, Out-of-Domain)***
 
 | Dataset | Type | BERTScore | Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
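The second hunk header carries a line from the README's usage section (`output = pipe("<hl> Beyonce <hl> ...")`), which drives the model through the `transformers` text2text-generation pipeline with the target answer wrapped in `<hl>` tokens. A minimal sketch of that call, assuming only the standard `transformers` pipeline API; the passage below is an illustrative stand-in, not the README's truncated example:

```python
from transformers import pipeline

# Load the question-generation model through the generic text2text pipeline.
pipe = pipeline("text2text-generation", model="lmqg/bart-large-squad-qg")

# The target answer span is wrapped in <hl> tokens inside the passage;
# the model generates a question answerable by the highlighted span.
passage = "<hl> William Turner <hl> was an English painter who specialised in watercolour landscapes."
print(pipe(passage))  # e.g. [{'generated_text': 'Who was an English painter ...?'}]
```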
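The "Pipeline Approach" metrics added in the second hunk chain two models: [`lmqg/bart-large-squad-ae`](https://huggingface.co/lmqg/bart-large-squad-ae) extracts candidate answers from a paragraph, then this model generates a question for each extracted answer. A minimal sketch of that two-stage setup, assuming the `lmqg` package and its `TransformersQG` interface (the `model_ae` argument and `generate_qa` method follow lmqg's documented examples):

```python
from lmqg import TransformersQG

# Pair the question generator with a separate answer-extraction model:
# stage 1 extracts answer spans, stage 2 generates a question per span.
model = TransformersQG(
    model="lmqg/bart-large-squad-qg",     # question generation
    model_ae="lmqg/bart-large-squad-ae",  # answer extraction
)

context = "William Turner was an English painter who specialised in watercolour landscapes."
qa_pairs = model.generate_qa(context)
print(qa_pairs)  # e.g. [('Who was an English painter ...?', 'William Turner')]
```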