# QAPyramid

This repo contains the data from our paper "QAPyramid: Fine-grained Evaluation of Content Selection for Text Summarization".

Please visit [here](https://github.com/ZhangShiyue/QAPyramid) for more details of this project.

QAPyramid is built on top of 500 examples from the test set of the CNNDM English news summarization dataset.

Humans (crowdsourced workers) decomposed each reference summary into QA pairs following the QA-SRL framework.

On a 50-example subset, we collected model-generated summaries from 10 summarization systems. Five of them are models fine-tuned on the CNNDM training set (BART, PEGASUS, BRIO, BRIO-EXT, MatchSum), and the other five are 1-shot LLMs (Llama-3-8b-instruct, Llama-3-70b-instruct, Mixtral-8x7b-instruct, Mixtral-8x22b-instruct, GPT4).

Humans (crowdsourced workers) labeled whether each QA pair is *present* (1) or *not present* (0) in the system summary. *Present* means the meaning of the QA pair is covered by, or can be inferred from, the system summary.

System scores are the scores of each system summary under different metrics.