---
license: apache-2.0
task_categories:
  - summarization
language:
  - en
tags:
  - factchecking
  - summarization
  - nli
size_categories:
  - 1K<n<10K
---

# USB: A Unified Summarization Benchmark Across Tasks and Domains

This benchmark contains labeled datasets for the 8 summarization-related tasks listed below. The labeled datasets were created by collecting manual annotations on top of Wikipedia articles from 6 different domains.

| Task | Description | Code snippet |
|------|-------------|--------------|
| Extractive Summarization | Highlight important sentences in the source article | `load_dataset("kundank/usb","extractive_summarization")` |
| Abstractive Summarization | Generate a summary of the source | `load_dataset("kundank/usb","abstractive_summarization")` |
| Topic-based Summarization | Generate a summary of the source focusing on the given topic | `load_dataset("kundank/usb","topicbased_summarization")` |
| Multi-sentence Compression | Compress selected sentences into a one-line summary | `load_dataset("kundank/usb","multisentence_compression")` |
| Evidence Extraction | Surface evidence from the source for a summary sentence | `load_dataset("kundank/usb","evidence_extraction")` |
| Factuality Classification | Predict the factual accuracy of a summary sentence with respect to provided evidence | `load_dataset("kundank/usb","factuality_classification")` |
| Unsupported Span Prediction | Identify spans in a summary sentence which are not substantiated by the provided evidence | `load_dataset("kundank/usb","unsupported_span_prediction")` |
| Fixing Factuality | Rewrite a summary sentence to remove any factual errors or unsupported claims, with respect to provided evidence | `load_dataset("kundank/usb","fixing_factuality")` |

Additionally, to load the full set of collected annotations that were leveraged to create the labeled datasets for the above tasks, use: `load_dataset("kundank/usb","all_annotations")`
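
For reference, here is a minimal sketch of loading one task configuration with the 🤗 `datasets` library. The split and field names are not spelled out in this card, so the snippet simply prints whatever the first record contains.

```python
from datasets import load_dataset

# Load one task configuration from the benchmark (abstractive summarization here).
ds = load_dataset("kundank/usb", "abstractive_summarization")

# List the available splits and their sizes.
for split_name, split in ds.items():
    print(split_name, len(split))

# Print the first record of the first split to see its fields.
first_split = next(iter(ds.values()))
print(first_split[0])
```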

## Trained models

We fine-tuned Flan-T5-XL models on the training set of each task in the benchmark. They are available at the links given below:

| Task | Fine-tuned Flan-T5-XL model |
|------|-----------------------------|
| Extractive Summarization | link |
| Abstractive Summarization | link |
| Topic-based Summarization | link |
| Multi-sentence Compression | link |
| Evidence Extraction | link |
| Factuality Classification | link |
| Unsupported Span Prediction | link |
| Fixing Factuality | link |
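
Assuming the checkpoints linked above are standard Flan-T5 seq2seq models hosted on the Hugging Face Hub, a generation sketch might look like the following. The model ID and prompt format below are placeholders for illustration, not the actual identifiers or prompts used in the paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical model ID; substitute the actual checkpoint from the table above.
model_id = "your-org/usb-flan-t5-xl-abstractive"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The exact input format expected by the fine-tuned models is defined by the
# benchmark's training setup; a plain "Summarize:" prefix is only a placeholder.
inputs = tokenizer("Summarize: <source article text>", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```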

More details can be found in the paper: https://aclanthology.org/2023.findings-emnlp.592/

If you use this dataset, please cite it as below:

```bibtex
@inproceedings{krishna-etal-2023-usb,
    title = "{USB}: A Unified Summarization Benchmark Across Tasks and Domains",
    author = "Krishna, Kundan  and
      Gupta, Prakhar  and
      Ramprasad, Sanjana  and
      Wallace, Byron  and
      Bigham, Jeffrey  and
      Lipton, Zachary",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    year = "2023",
    pages = "8826--8845"
}
```