---
license: mit
task_categories:
- summarization
language:
- en
tags:
- chemistry
- biology
- art
- music
- multi-domain
pretty_name: RQFT
size_categories:
- n<1K
---
Reliable Query-focused Summarization Tester (RQFT) is a dataset for evaluating Query-focused Summarization (QfS) models. It contains 203 <query, document, summary> triples. Each document has more than one query on average (1.41, to be precise); this design choice tackles Topic Concentration (see Baumel et al., 2016).
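To illustrate the triple structure, here is a minimal loading sketch in Python. The Hub repo id (`swaroopnath/rqft`), the split name, and the field names (`query`, `document`, `summary`) are assumptions inferred from the description above, not confirmed identifiers; adjust them to match the actual dataset files.

```python
# Minimal inspection sketch. The repo id, split name, and field names below
# are hypothetical, inferred from the dataset description above.
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("swaroopnath/rqft", split="test")  # hypothetical repo id

print(len(dataset))  # expected: 203 <query, document, summary> triples

# Documents repeat across triples (about 1.41 queries per document on
# average), so grouping by document recovers the multi-query structure.
queries_per_doc = defaultdict(list)
for example in dataset:
    queries_per_doc[example["document"]].append(example["query"])

avg_queries = sum(len(qs) for qs in queries_per_doc.values()) / len(queries_per_doc)
print(f"Average queries per document: {avg_queries:.2f}")
```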
For more details, we refer the reader to our EMNLP 2023 paper: [Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning](https://aclanthology.org/2023.emnlp-main.977).
[Baumel et al., 2016] Tal Baumel, Raphael Cohen, and Michael Elhadad. 2016. Topic concentration in query focused summarization datasets. In AAAI Conference on Artificial Intelligence.
## Citation
If you use this dataset, please cite:

```bibtex
@inproceedings{nath-etal-2023-reinforcement,
title = "Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning",
author = "Nath, Swaroop and
Bhattacharyya, Pushpak and
Khadilkar, Harshad",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.977",
doi = "10.18653/v1/2023.emnlp-main.977",
pages = "15770--15789",
abstract = "Query-focused Summarization (QfS) deals with systems that generate summaries from document(s) based on a query. Motivated by the insight that Reinforcement Learning (RL) provides a generalization to Supervised Learning (SL) for Natural Language Generation, and thereby performs better (empirically) than SL, we use an RL-based approach for this task of QfS. Additionally, we also resolve the conflict of employing RL in Transformers with Teacher Forcing. We develop multiple Policy Gradient networks, trained on various reward signals: ROUGE, BLEU, and Semantic Similarity, which lead to a $\mathit{10}$-point improvement over the State-of-the-Art approach on the ROUGE-L metric for a benchmark dataset (ELI5). We also show performance of our approach in zero-shot setting for another benchmark dataset (DebatePedia) {--} our approach leads to results comparable to baselines, which were specifically trained on DebatePedia. To aid the RL training, we propose a better semantic similarity reward, enabled by a novel Passage Embedding scheme developed using Cluster Hypothesis. Lastly, we contribute a gold-standard test dataset to further research in QfS and Long-form Question Answering (LfQA).",
}
```