---
license: bsd-3-clause
datasets:
- mocha
language:
- en
---

# Answer Overlap Module of QAFactEval Metric

This is the span scorer module used in the [RQUGE paper](https://arxiv.org/abs/2211.01482) to evaluate questions produced by question generation models.
The model was originally used in [QAFactEval](https://aclanthology.org/2022.naacl-main.187) to compute the semantic similarity of a generated answer span to the reference answer, given the context and the question, in the question answering task.
It outputs an answer overlap score on a 1-5 scale. The scorer (initialized from [Jia et al. (2021)]()) is trained on the MOCHA dataset, which consists of 40k crowdsourced judgments of QA model outputs.

The input to the model is defined as:
```
[CLS] cand. question [q] gold answer [r] pred answer [c] context
```
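
A minimal scoring sketch (not part of the original card): it assumes this checkpoint can be loaded through `transformers` as a single-logit regression model via `AutoModelForSequenceClassification`, that `[q]`, `[r]`, and `[c]` act as plain-text separators, and that the `[CLS]` token is added automatically by the tokenizer. `MODEL_ID`, the helper function, the example QA pair, and the 512-token truncation are illustrative placeholders and assumptions, not values taken from this repository.

```python
# Hedged sketch: assumes a single regression logit from a sequence-classification head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "<this-model-repo-id>"  # placeholder: replace with this repository's Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def answer_overlap_score(question: str, gold_answer: str, pred_answer: str, context: str) -> float:
    """Builds the input described above ([CLS] is prepended by the tokenizer)
    and returns the predicted answer overlap score (roughly on the 1-5 scale)."""
    text = f"{question} [q] {gold_answer} [r] {pred_answer} [c] {context}"
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

print(answer_overlap_score(
    question="Who wrote Hamlet?",
    gold_answer="William Shakespeare",
    pred_answer="Shakespeare",
    context="Hamlet is a tragedy written by William Shakespeare around 1600.",
))
```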

# Citations

```
@inproceedings{fabbri-etal-2022-qafacteval,
    title = "{QAF}act{E}val: Improved {QA}-Based Factual Consistency Evaluation for Summarization",
    author = "Fabbri, Alexander and
      Wu, Chien-Sheng and
      Liu, Wenhao and
      Xiong, Caiming",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.187",
    doi = "10.18653/v1/2022.naacl-main.187",
    pages = "2587--2601",
    abstract = "Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclusions as to which paradigm performs the best. In this work, we conduct an extensive comparison of entailment and QA-based metrics, demonstrating that carefully choosing the components of a QA-based metric, especially question generation and answerability classification, is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that leads to a 14{\%} average improvement over previous QA-based metrics on the SummaC factual consistency benchmark, and also outperforms the best-performing entailment-based metric. Moreover, we find that QA-based and entailment-based metrics can offer complementary signals and be combined into a single metric for a further performance boost.",
}

@misc{mohammadshahi2022rquge,
    title={RQUGE: Reference-Free Metric for Evaluating Question Generation by Answering the Question},
    author={Alireza Mohammadshahi and Thomas Scialom and Majid Yazdani and Pouya Yanki and Angela Fan and James Henderson and Marzieh Saeidi},
    year={2022},
    eprint={2211.01482},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```