All Datasets 🏷
All datasets from our datasets repository and community bucket.
Also check out the list of supported Metrics 📉.
AdversarialQA is a Reading Comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles using an adversarial model-in-the-loop. We use three different models in the annotation loop: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training, 1,000 validation, and 1,000 test examples. The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging.
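A minimal sketch of how one of the three subsets might be loaded with the 🤗 `datasets` library; the config name `dbidaf` and the SQuAD-style field names are assumptions based on the description above, not confirmed identifiers.

```python
from datasets import load_dataset

# D(BiDAF) subset; "dbidaf" is an assumed config name.
adv_qa = load_dataset("adversarial_qa", "dbidaf")

print(adv_qa)  # expected splits: train (10,000), validation (1,000), test (1,000)
example = adv_qa["train"][0]
print(example["question"])       # question posed by a crowdworker (assumed field name)
print(example["context"][:200])  # Wikipedia passage the question targets (assumed field name)
```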
Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review). We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your model using the online evaluation tool available on klejbenchmark.com.
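A quick sketch of inspecting the 1-5 rating scale described above; the dataset id `allegro_reviews` and the field names are assumptions.

```python
from collections import Counter
from datasets import load_dataset

reviews = load_dataset("allegro_reviews")        # assumed dataset id
train = reviews["train"]
print(Counter(int(r) for r in train["rating"]))  # rating distribution on the 1-5 scale (assumed field name)
# Test-set ratings are hidden; predictions are scored via klejbenchmark.com.
```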
ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations, as described in "ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations". The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators. The corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems.
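A sketch of accessing the 10 human simplifications per original sentence; the config name `simplification` and the `original`/`simplifications` fields are assumptions based on the description.

```python
from datasets import load_dataset

asset = load_dataset("asset", "simplification")  # assumed id / config name
row = asset["validation"][0]
print(row["original"])                           # source sentence (assumed field name)
for i, simp in enumerate(row["simplifications"], 1):  # 10 human references per sentence
    print(f"{i:2d}. {simp}")
```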
A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant' while the other plays the role of a 'user'. The 'assistant' elicits the 'user's' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The assistant asks questions designed to minimize, as much as possible, the bias in the terminology the 'user' employs to convey his or her preferences, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements made about entities.
The DaNE dataset has been annotated with Named Entities for PER, ORG and LOC by the Alexandra Institute. It is a reannotation of the UD-DDT (Universal Dependency - Danish Dependency Treebank) which has annotations for dependency parsing and part-of-speech (POS) tagging. The Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998).
The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, and it is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: https://arxiv.org/abs/1706.09254
An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper: Ondřej Dušek, David M. Howcroft, and Verena Rieser (2019): Semantic Noise Matters for Neural Natural Language Generation. In INLG, Tokyo, Japan.
WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it consists only of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. This resource bears the name ECDC Translation Memory, short ECDC-TM. ECDC-TM covers 25 languages: the 23 official languages of the EU plus Norwegian (Norsk) and Icelandic. ECDC-TM was created by translating from English into the following 24 languages: Bulgarian, Czech, Danish, Dutch, Estonian, Gaeilge (Irish), German, Greek, Finnish, French, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian (Norsk), Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. All documents and sentences were thus originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.
EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.
`generated_reviews_enth` is a generated product reviews dataset for machine translation quality prediction, created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for the machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by the Google Translate API, and annotated as accepted or rejected (`correct`) based on the fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
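A sketch of reading one example for binary translation quality estimation; the dataset id and the `translation`/`correct` field layout are assumptions based on the description.

```python
from datasets import load_dataset

enth = load_dataset("generated_reviews_enth")  # assumed dataset id
row = enth["train"][0]
src, tgt = row["translation"]["en"], row["translation"]["th"]  # assumed field layout
label = row["correct"]  # 1 if the Thai translation was accepted as fluent/adequate, else 0 (assumed encoding)
print(src[:120], tgt[:120], label, sep="\n")
```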
HoVer is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The original 2-hop claims are adapted from question-answer pairs from HotpotQA. It is collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.
The Kannada news dataset contains only the headlines of news articles in three categories: Entertainment, Tech, and Sports. It contains around 6,300 news article headlines collected from Kannada news websites. The data has been cleaned and split into train and test sets that can be used to benchmark classification models in Kannada.
This new dataset is designed for extracting intent from non-canonical directives, helping dialog managers extract intent from user dialog that may have no clear objective or that paraphrases other utterances.
Motion recognition is one of the basic cognitive capabilities of many life forms, yet identifying the motion of physical entities in natural language has not been explored extensively and empirically. The Literal-Motion-in-Text (LiMiT) dataset is a large human-annotated collection of English text sentences describing the physical occurrence of motion, with the physical entities in motion annotated.
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations. MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train an evaluation metric: LERC, a Learned Evaluation metric for Reading Comprehension, to mimic human judgement scores.
This dataset is a compilation of the OneStopEnglish corpus of texts written at three reading levels into one file. Text documents are classified into three reading levels: ele, int, adv (Elementary, Intermediate and Advanced). The dataset demonstrates its usefulness through two applications: automatic readability assessment and automatic text simplification. The corpus consists of 189 texts, each in three versions/reading levels (567 in total).
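A sketch of loading the corpus for three-class readability classification; the dataset id `onestop_english` and the `text`/`label` fields are assumptions.

```python
from collections import Counter
from datasets import load_dataset

onestop = load_dataset("onestop_english")   # assumed dataset id
train = onestop["train"]
print(len(train), Counter(train["label"]))  # expect 567 texts spread over 3 reading levels (assumed fields)
```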
scb-mt-en-th-2020: A Large English-Thai Parallel Corpus. The primary objective of our work is to build a large-scale English-Thai dataset for machine translation. We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources, namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents. The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner. We train machine translation models based on this dataset. Our models' performance is comparable to that of the Google Translation API (as of May 2020) for Thai-English, and outperforms Google when the Open Parallel Corpus (OPUS) is included in the training data, for both Thai-English and English-Thai translation. The dataset, pre-trained models, and source code to reproduce our work are available for public use.
The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialogue Systems Technology Challenge (dstc8). The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather. For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios.
SETimes – A Parallel Corpus of English and South-East European Languages. The corpus is based on the content published on the SETimes.com news portal. The news portal publishes "news and views from Southeast Europe" in ten languages: Bulgarian, Bosnian, Greek, English, Croatian, Macedonian, Romanian, Albanian, Serbian and Turkish. This version of the corpus tries to solve the issues present in an older version of the corpus (published inside OPUS, described in the LREC 2010 paper by Francis M. Tyers and Murat Serdar Alperen). The following procedures were applied to resolve existing issues:
- stricter extraction process – no HTML residues present
- language identification on every non-English document – non-English online documents contain English material in case the article was not translated into that language
- resolving encoding issues in Croatian and Serbian – diacritics were partially lost due to encoding errors, so the text was rediacritized
An unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web. These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project corpora and Europarl, the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks. This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
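A minimal sketch of streaming sentences under the layout described above (100 plain-text files, one sentence per line); the directory name and file extension are hypothetical.

```python
from pathlib import Path
from typing import Iterator

def iter_sentences(corpus_dir: str) -> Iterator[str]:
    """Yield sentences one at a time without loading ~1.5B words into memory."""
    for path in sorted(Path(corpus_dir).glob("*.txt")):  # 100 plain-text files (assumed extension)
        with path.open(encoding="utf-8") as f:
            for line in f:
                sentence = line.strip()
                if sentence:
                    yield sentence

# Example usage (hypothetical directory name):
# for sentence in iter_sentences("spanish_billion_words/"):
#     ...
```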
This is the Tilde MODEL Corpus – Multilingual Open Data for European Languages. The data has been collected from sites allowing free use and reuse of their content, as well as from public sector web sites. The activities have been undertaken as part of the ODINE Open Data Incubator for Europe, which aims to support the next generation of digital businesses and fast-track the development of new products and services. The corpus includes the following parts:
- Tilde MODEL - EESC: a multilingual corpus compiled from document texts of the European Economic and Social Committee document portal. Source: http://dm.eesc.europa.eu/
- Tilde MODEL - RAPID: a multilingual parallel corpus compiled from all press releases of the Press Release Database of the European Commission released between 1975 and the end of 2016, as available from http://europa.eu/rapid/
- Tilde MODEL - ECB: a multilingual parallel corpus compiled from the multilingual pages of the European Central Bank web site http://ebc.europa.eu/
- Tilde MODEL - EMA: a corpus compiled from texts of the European Medicines Agency document portal as available at http://www.ema.europa.eu/ at the end of 2016
- Tilde MODEL - World Bank: a corpus compiled from texts of the World Bank as available at http://www.worldbank.org/ in 2017
- Tilde MODEL - AirBaltic.com Travel Destinations: a multilingual parallel corpus compiled from description texts of AirBaltic.com travel destinations as available at https://www.airbaltic.com/en/destinations/ in 2017
- Tilde MODEL - LiveRiga.com: a multilingual parallel corpus compiled from Riga tourist attraction description texts of the http://liveriga.com/ web site in 2017
- Tilde MODEL - Lithuanian National Philharmonic Society: a parallel corpus compiled from texts of the Lithuanian National Philharmonic Society web site http://www.filharmonija.lt/ in 2017
- Tilde MODEL - mupa.hu: a parallel corpus from texts of Müpa Budapest, the web site of the Hungarian national culture house and concert venue (https://www.mupa.hu/en/), compiled in spring 2017
- Tilde MODEL - fold.lv: a parallel corpus from texts of the fold.lv portal http://www.fold.lv/en/ of the best of Latvian and foreign creative industries, compiled in spring 2017
- Tilde MODEL - czechtourism.com: a multilingual parallel corpus from texts of the http://czechtourism.com/ portal, compiled in spring 2017
30 languages, 274 bitexts; total number of files: 125; total number of tokens: 1.43G; total number of sentence fragments: 62.44M.
WikiAsp is a multi-domain, aspect-based summarization dataset in the encyclopedic domain. In this task, models are asked to summarize cited reference documents of a Wikipedia article into aspect-based summaries. Each of the 20 domains includes 10 domain-specific pre-defined aspects.
A dataset of atomic Wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages. An atomic edit is defined as an edit e applied to a natural language expression S as the insertion, deletion, or substitution of a sub-expression P such that both the original expression S and the resulting expression e(S) are well-formed semantic constituents (MacCartney, 2009). In this corpus, we release such atomic insertions and deletions made to sentences in Wikipedia.
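A worked example of the atomic-edit definition above: an edit e applied to a sentence S inserts or deletes one contiguous sub-expression P such that both S and e(S) remain well-formed. The sentences below are illustrative only, not drawn from the corpus.

```python
# Illustrative sentences only; not taken from the corpus itself.
S = "The committee approved the proposal after a long debate."
P = " after a long debate"

e_deletion = S.replace(P, "", 1)                   # atomic deletion of sub-expression P
e_insertion = e_deletion.replace(".", P + ".", 1)  # the inverse atomic insertion

print(e_deletion)   # "The committee approved the proposal."
print(e_insertion)  # reconstructs the original sentence S
assert e_insertion == S
```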
XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to cross-lingual natural language understanding and generation. The benchmark is composed of the following 11 tasks:
- NER
- POS Tagging (POS)
- News Classification (NC)
- MLQA
- XNLI
- PAWS-X
- Query-Ad Matching (QADSM)
- Web Page Ranking (WPR)
- QA Matching (QAM)
- Question Generation (QG)
- News Title Generation (NTG)
For more information, please take a look at https://microsoft.github.io/XGLUE/.
XQuAD-R is a retrieval version of the XQuAD dataset (a cross-lingual extractive QA dataset). Like XQuAD, XQuAD-R is an 11-way parallel dataset, where each question appears in 11 different languages and has 11 parallel correct answers across the languages.