"34_nda-11"
"34_nda-11_0"
"Not mentioned"
"34_nda-16"
"34_nda-16_0"
"Entailment"
"34_nda-15"
"34_nda-15_0"
"Entailment"
"34_nda-10"
"34_nda-10_0"
"Entailment"
"34_nda-2"
"34_nda-2_0"
"Not mentioned"
"34_nda-1"
"34_nda-1_0"
"Entailment"
"34_nda-19"
"34_nda-19_0"
"Entailment"
"34_nda-12"
"34_nda-12_0"
"Entailment"
"34_nda-20"
"34_nda-20_0"
"Not mentioned"
"34_nda-3"
"34_nda-3_0"
"Entailment"
"34_nda-18"
"34_nda-18_0"
"Not mentioned"
"34_nda-7"
"34_nda-7_0"
"Entailment"
"34_nda-17"
"34_nda-17_0"
"Entailment"
"34_nda-8"
"34_nda-8_0"
"Entailment"
"34_nda-13"
"34_nda-13_0"
"Entailment"
"34_nda-5"
"34_nda-5_0"
"Entailment"
"34_nda-4"
"34_nda-4_0"
"Entailment"
"86_nda-11"
"86_nda-11_0"
"Not mentioned"
"86_nda-16"
"86_nda-16_0"
"Not mentioned"
"86_nda-15"
"86_nda-15_0"
"Entailment"

# Dataset Card for SCROLLS

## Overview

SCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference.

The SCROLLS benchmark leaderboard is hosted on the official SCROLLS website.

#### GovReport (Huang et al., 2021)

GovReport is a summarization dataset of reports addressing various national policy issues, published by the Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary. The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in arXiv and PubMed, respectively.

#### SummScreenFD (Chen et al., 2021)

SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones). Given a transcript of a specific episode, the goal is to produce the episode's recap. The original dataset is divided into two complementary subsets, based on the source of its community-contributed transcripts. For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows. Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.

#### QMSum (Zhong et al., 2021)

QMSum is a query-based summarization dataset, consisting of 232 meeting transcripts from multiple domains. The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues. Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.

#### Qasper (Dasigi et al., 2021)

Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC). Questions were written by NLP practitioners after reading only the title and abstract of the papers, while another set of NLP practitioners annotated the answers given the entire document. Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.

#### QuALITY (Pang et al., 2021)

QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, the Open American National Corpus, and more. Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions that can be answered correctly only by reading large portions of the given document. Reference answers were then determined by majority vote over the annotators' and the writer's answers. To measure the difficulty of their questions, Pang et al. conducted a speed validation process, in which another set of annotators was asked to answer questions after being given only a short period of time to skim through the document. As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.
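
The reference-answer and hard-question labeling described above boils down to two majority votes. The sketch below is an illustrative reconstruction of that logic, not code from the QuALITY release; the function and variable names are ours.

```python
from collections import Counter

def majority_vote(choices):
    """Return the most common answer choice among a list of votes."""
    return Counter(choices).most_common(1)[0][0]

def label_question(writer_answer, annotator_answers, speed_answers):
    """Illustrative reconstruction (names are ours): the reference answer
    aggregates the writer's and annotators' answers, and a question is
    marked hard when the speed-validation majority misses that answer."""
    reference = majority_vote([writer_answer] + annotator_answers)
    is_hard = majority_vote(speed_answers) != reference
    return reference, is_hard

# Toy example: untimed annotators agree with the writer on "B",
# but the speed-validation majority picks "A", so the question is hard.
print(label_question("B", ["B", "B", "A"], ["A", "A", "B"]))  # ('B', True)
```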

#### ContractNLI (Koreeda and Manning, 2021)

ContractNLI is a natural language inference dataset in the legal domain. Given a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed by, contradicted by, or not mentioned in (neutral) the contract. The NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google. The dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined (607 × 17) to produce the dataset's 10,319 examples.

## Data Fields

All the datasets in the benchmark share the same input-output format; a minimal loading sketch follows the field list below.

- `input`: a string feature. The input document.
- `output`: a string feature. The target.
- `id`: a string feature. Unique per input.
- `pid`: a string feature. Unique per input-output pair (can differ from `id` in NarrativeQA and Qasper, where there is more than one valid target).
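
As a quick illustration of this shared format, the sketch below loads one SCROLLS configuration with the Hugging Face `datasets` library and prints the four fields. The `contract_nli` configuration name and the `train` split are assumptions based on the task list above; the other tasks load the same way.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library and the
# `contract_nli` configuration; adjust the configuration/split names as needed.
from datasets import load_dataset

scrolls = load_dataset("tau/scrolls", "contract_nli")

example = scrolls["train"][0]
print(example["id"])           # unique per input
print(example["pid"])          # unique per input-output pair
print(example["input"][:200])  # the (long) input document, truncated for display
print(example["output"])       # the target, e.g. "Entailment" or "Not mentioned"
```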

## Citation

If you use the SCROLLS data, please make sure to cite all of the original dataset papers. [bibtex]

```bibtex
@misc{shaham2022scrolls,
  title={SCROLLS: Standardized CompaRison Over Long Language Sequences},
  author={Uri Shaham and Elad Segal and Maor Ivgi and Avia Efrat and Ori Yoran and Adi Haviv and Ankit Gupta and Wenhan Xiong and Mor Geva and Jonathan Berant and Omer Levy},
  year={2022},
  eprint={2201.03533},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```