Datasets

amazon_reviews_multi

We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an a...
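
Assuming these entries mirror the Hugging Face hub, a sketch of loading one language subset follows; the per-language config name "en" is an assumption based on the languages listed above.

    from datasets import load_dataset

    # Load the English portion; the per-language config name "en" is assumed.
    reviews = load_dataset("amazon_reviews_multi", "en", split="train")
    print(reviews[0])  # shows the record fields described above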

arxiv_dataset

A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.

big_patent

BIGPATENT consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries. Each U.S. patent application is filed under a Cooperative Patent Classification (CPC) code. There are nine such classification categories: A (Human Necessities), B (Performing Operations; Transporting), C (Chemistry; Metallurgy), D...
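
Since records are partitioned by CPC code, a per-category load is the natural access pattern; the lowercase config name "a" below is an assumption about how the categories are exposed.

    from datasets import load_dataset

    # Load a single CPC category; the lowercase config name "a" is assumed.
    patents = load_dataset("big_patent", "a", split="train")
    print(patents[0].keys())  # inspect the released field names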

cnn_dailymail

CNN/DailyMail non-anonymized summarization dataset. There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each highlight, which is the target summary
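
A quick sketch of reading the two features; the "3.0.0" config name (the usual non-anonymized variant on the hub) is an assumption here.

    from datasets import load_dataset

    # "3.0.0" selects the non-anonymized variant (config name assumed).
    ds = load_dataset("cnn_dailymail", "3.0.0", split="validation")
    example = ds[0]
    print(example["article"][:300])  # document to be summarized
    print(example["highlights"])     # target summary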

id_liputan6

In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com, an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual BERT...

msr_text_compression

This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-so...

multi_x_science_sum

Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
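
A minimal look at the pairing described above; the exact released field names are not stated here, so the sketch just inspects one record.

    from datasets import load_dataset

    # Each example pairs a paper's abstract and its cited abstracts with the
    # related-work section as target; printing one record shows the schema.
    mxs = load_dataset("multi_x_science_sum", split="train")
    print(mxs[0].keys())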

orange_sum

The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: Franc...
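
Mirroring the XSum setup, a load sketch follows; the "abstract" config name and the field names are assumptions.

    from datasets import load_dataset

    # The "abstract" config (name assumed) pairs an article with its short
    # abstract, mirroring the XSum setup; a "title" config may also exist.
    ds = load_dataset("orange_sum", "abstract", split="train")
    print(ds[0].keys())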

pn_summary

A well-structured summarization dataset for the Persian language consisting of 93,207 records. It is prepared for abstractive/extractive tasks (like cnn_dailymail for English) and can also be used in other scopes like text generation, title generation, and news category classification. Note that the newlines were replaced w...

recipe_nlg

The dataset contains 2,231,142 cooking recipes (over 2 million). It is processed more carefully and provides more samples than any other dataset in the area.
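
A loading sketch, assuming the hub copy requires a manual download step; the data_dir path below is a placeholder.

    from datasets import load_dataset

    # recipe_nlg is assumed to require a manual download; point data_dir at
    # the locally unpacked archive.
    recipes = load_dataset("recipe_nlg", data_dir="/path/to/recipe_nlg", split="train")
    print(recipes[0])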

scitldr

A new multi-target dataset of 5.4K TLDRs over 3.2K papers. SCITLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.
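
The multi-target aspect means one paper can carry several gold TLDRs; the "Abstract" config name below is an assumption about how the source-text variants are exposed.

    from datasets import load_dataset

    # "Abstract" config assumed (abstract-only source); each paper may carry
    # several gold TLDRs, hence the multi-target framing.
    tldr = load_dataset("scitldr", "Abstract", split="test")
    print(tldr[0])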

thaisum

ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article-summary pairs written by journalists.

wiki_asp

WikiAsp is a multi-domain, aspect-based summarization dataset in the encyclopedic domain. In this task, models are asked to summarize cited reference documents of a Wikipedia article into aspect-based summaries. Each of the 20 domains includes 10 domain-specific pre-defined aspects.
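
Loading is presumably per domain; the config name "album" below is an assumption about how the 20 domains are named.

    from datasets import load_dataset

    # One config per domain; "album" is an assumed domain name.
    wa = load_dataset("wiki_asp", "album", split="test")
    print(wa[0].keys())  # cited reference documents and aspect-labeled summaries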

wiki_atomic_edits

A dataset of atomic Wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages. An atomic edit is defined as an edit e applied to a natural language expression S as the insertion, deletion, or substitution of a sub-expression P such that both the or...
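
The edits are presumably split by language and edit type; the config name "english_insertions" below is an assumption following that naming.

    from datasets import load_dataset

    # Config name "english_insertions" (language + edit type) is assumed.
    edits = load_dataset("wiki_atomic_edits", "english_insertions", split="train")
    print(edits[0])  # base sentence, edited sentence, and the edited phrase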

wiki_lingua

WikiLingua is a large-scale multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were created by aligning the images that are used to describe each how-to step in an ar...
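
Per-language subsets would be the natural access pattern; the config name "english" is an assumption.

    from datasets import load_dataset

    # One config per language; "english" is an assumed config name.
    wl = load_dataset("wiki_lingua", "english", split="train")
    print(wl[0].keys())  # inspect the article/summary structure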

wiki_summary

A dataset extracted from Persian Wikipedia in the form of articles and highlights. The data was cleaned into article-highlight pairs, and the articles' length (only in version 1.0.0) and the highlights' length were reduced to a maximum of 512 and 128, respectively, making the dataset suitable for parsBERT.

xglue

XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to cross-lingual natural language understanding and generation. The benchmark is composed of the following 11 tasks:
- NER
- POS Tagging (POS)
- News Classification (NC)
- MLQA
- XNLI
- PAWS-X
- Query-Ad Matching (QADSM)
- Web Page Ranki...
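
Each task is presumably exposed as its own config; "ner" below is an assumed config name for the NER task, and the per-language split naming is likewise an assumption.

    from datasets import load_dataset

    # Each of the 11 tasks is a config; "ner" is assumed to be the NER task's
    # config name. Printing the DatasetDict reveals the per-language splits.
    xglue_ner = load_dataset("xglue", "ner")
    print(xglue_ner)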

xsum_factuality

Neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document. Popular metrics such as ROUGE fail to show the severity of the problem. The dataset consists of faithfulness and factuality annotations of abstractive summaries for the XSum dataset. We have crowdsourced 3 judgements for...
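
A sketch of pulling the annotations; the default config and the record layout are assumptions.

    from datasets import load_dataset

    # Default config assumed; each summary record carries the crowdsourced
    # factuality judgements described above.
    ann = load_dataset("xsum_factuality", split="train")
    print(ann[0])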