asahi417 committed
Commit b25f83d
1 Parent(s): b7d11ff

model update

Files changed (1)
  1. README.md +30 -0
README.md CHANGED
@@ -46,6 +46,24 @@ model-index:
- name: MoverScore (Question Generation)
  type: moverscore_question_generation
  value: 64.56
+ - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
+   value: 91.1
+ - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
+   value: 91.09
+ - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
+   value: 91.11
+ - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
+   value: 70.06
+ - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
+   value: 70.04
+ - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
+   type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
+   value: 70.07
---

# Model Card of `lmqg/mt5-base-ruquad-qg`
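The hunk below carries a context line from the README's usage example (`output = pipe("Нелишним будет отметить, что, разви`, truncated in the hunk header), which shows the card exercising the model through a `pipe` object on Russian text. A minimal sketch of that pattern, assuming the standard `transformers` `text2text-generation` pipeline and lmqg's `<hl>` answer-highlight convention (both assumptions here, not confirmed by this diff):

```python
# Sketch only: run lmqg/mt5-base-ruquad-qg through the standard transformers
# text2text pipeline. The sample paragraph and the <hl> ... <hl> answer
# highlighting are illustrative assumptions, not the model card's exact example.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="lmqg/mt5-base-ruquad-qg")

# Hypothetical Russian paragraph with the target answer span wrapped in <hl> tokens.
context = "Москва <hl> столица России <hl> и крупнейший город страны."
output = pipe(context)
print(output[0]["generated_text"])  # the generated question
```

The README's own usage section remains the authoritative reference for the exact input format.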
 
@@ -99,6 +117,18 @@ output = pipe("Нелишним будет отметить, что, разви
| ROUGE_L | 33.02 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |


+ - ***Metric (Question & Answer Generation)***: QAG metrics are computed with *the gold answer* and the question generated from it for this model, as the model itself cannot provide an answer. [raw metric file](https://huggingface.co/lmqg/mt5-base-ruquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_ruquad.default.json)
+
+ | Metric                          |   Score | Type    | Dataset                                                          |
+ |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
+ | QAAlignedF1Score (BERTScore)    |   91.1  | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | QAAlignedF1Score (MoverScore)   |   70.06 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | QAAlignedPrecision (BERTScore)  |   91.11 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | QAAlignedPrecision (MoverScore) |   70.07 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | QAAlignedRecall (BERTScore)     |   91.09 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | QAAlignedRecall (MoverScore)    |   70.04 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+
+

## Training hyperparameters
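The QAG rows added above are copied from lmqg's evaluation output; the linked raw metric file can be fetched directly to cross-check the numbers. A small sketch using only the Python standard library and the URL given in the diff (it makes no assumption about the file's internal key names):

```python
# Fetch the raw evaluation file referenced in the model card and print it,
# so the QAAligned scores in the table above can be cross-checked.
import json
from urllib.request import urlopen

URL = (
    "https://huggingface.co/lmqg/mt5-base-ruquad-qg/raw/main/"
    "eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_ruquad.default.json"
)

with urlopen(URL) as resp:
    metrics = json.load(resp)

print(json.dumps(metrics, indent=2, ensure_ascii=False))
```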