All Datasets 🏷
All datasets from our datasets repository and community bucket.
Also check out the list of supported Metrics 📉.
ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data. This is a dataset for classifying whether a sentence is ADE-related (True) or not (False) and for relation extraction between adverse drug events and drugs. DRUG-AE.rel provides relations between drugs and adverse effects. DRUG-DOSE.rel provides relations between drugs and dosages. ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects.
The ALT project aims to advance the state of the art in Asian natural language processing (NLP) through open collaboration on developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016), and was then developed under ASEAN IVO. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, which were then translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, and Chinese (Simplified Chinese).
A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant' while the other plays the role of a 'user'. The 'assistant' elicits the 'user's' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The assistant asks questions designed to minimize, as much as possible, the bias in the terminology the 'user' employs to convey his or her preferences, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements about entities.
The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at the document level with three levels of reuse: wholly derived, partially derived, and non-derived.
ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset. Ethos_Dataset_Binary contains 998 comments, each with a label indicating the presence or absence of hate speech; 565 of the comments do not contain hate speech, while the remaining 433 do. Ethos_Dataset_Multi_Label contains 8 labels for the 433 comments with hate speech content. These labels are violence (1 if the comment incites violence, 0 if not), directed_vs_general (1 if it is directed at a person, 0 if at a group), and 6 labels for the category of hate speech: gender, race, national_origin, disability, religion, and sexual_orientation.
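A minimal sketch of how one Ethos_Dataset_Multi_Label record could be represented in Python; the comment text and label values below are purely illustrative assumptions, not actual dataset content:

```python
# Hypothetical multi-label record (values are illustrative only, not from the dataset).
example = {
    "comment": "<hate speech comment text>",
    "violence": 0,             # 1 if the comment incites violence, 0 otherwise
    "directed_vs_general": 1,  # 1 if directed at a person, 0 if at a group
    # Category labels: 1 if the comment targets that category, 0 otherwise.
    "gender": 0,
    "race": 1,
    "national_origin": 0,
    "disability": 0,
    "religion": 0,
    "sexual_orientation": 0,
}

# Collect the active hate speech categories for this comment.
categories = ["gender", "race", "national_origin", "disability", "religion", "sexual_orientation"]
active = [c for c in categories if example[c]]
print(active)  # ['race']
```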
The corpora comprise files per data provider that are encoded in the IOB format (Ramshaw & Marcus, 1995). The IOB format is a simple text chunking format that divides texts into single tokens per line and, separated by whitespace, tags to mark named entities. The most commonly used categories for tags are PER (person), LOC (location) and ORG (organization). To mark named entities that span multiple tokens, the tags have a prefix of either B- (beginning of named entity) or I- (inside of named entity). O (outside of named entity) tags are used to mark tokens that are not a named entity.
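As an illustration of the format (the tokens below are invented for this example), a minimal Python sketch of an IOB-encoded snippet and how it parses into (token, tag) pairs:

```python
# Illustrative IOB-encoded lines: one token per line, with its tag separated by whitespace.
# "Angela Merkel" spans two tokens, so the second token carries the I- prefix.
iob_text = """\
Angela B-PER
Merkel I-PER
visited O
Paris B-LOC
. O
"""

# Parse the lines into (token, tag) pairs.
pairs = [tuple(line.split()) for line in iob_text.splitlines() if line.strip()]
print(pairs)
# [('Angela', 'B-PER'), ('Merkel', 'I-PER'), ('visited', 'O'), ('Paris', 'B-LOC'), ('.', 'O')]
```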
EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.
A dataset for studying fake news in Portuguese, presenting supposedly false news along with their respective fact checks and classifications. The data is collected from ClaimReview, a structured data schema used by fact-checking agencies to share their results in search engines, enabling data collection in real time. The FACTCK.BR dataset contains 1309 claims with their corresponding labels.
Fake news has become a major societal issue and a technical challenge for social media companies to identify. This content is difficult to identify because the term "fake news" covers intentionally false, deceptive stories as well as factual errors, satire, and sometimes, stories that a person just does not like. Addressing the problem requires clear definitions and examples. In this work, we present a dataset of fake news and satire stories that are hand coded, verified, and, in the case of fake news, include rebutting stories. We also include a thematic content analysis of the articles, identifying major themes that include hyperbolic support or condemnation of a figure, conspiracy theories, racist themes, and discrediting of reliable sources. In addition to releasing this dataset for research use, we analyze it and show results based on language that are promising for classification purposes. Overall, our contribution of a dataset and initial analysis are designed to support future work by fake news researchers.
Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase, with 64,331 questions annotated with both answers and corresponding logical forms in different syntaxes (e.g., SPARQL, S-expression). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
The HAREM is a Portuguese language corpus commonly used for named entity recognition tasks. It includes about 93k words from 129 different texts covering several genres and language varieties. The split of this dataset version follows the division made by [1], where 7% of the HAREM documents form the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset: a version with a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event, Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date). It is important to note that the original HAREM dataset has 2 levels of NER detail, namely "Category" and "Sub-type". The dataset version processed here ONLY USES the "Category" level of the original dataset. [1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." Brazilian Conference on Intelligent Systems. Springer, Cham, 2020.
HindEnCorp parallel texts (sentence-aligned) come from the following sources. Tides, which contains 50K sentence pairs taken mainly from news articles; this dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008). Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi. EMILLE: this corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages. Smaller datasets collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and an agriculture-domain parallel corpus. For the current release, we are extending the parallel corpus using these sources. Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi; the central language used for alignment is Czech. Intercorp's core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable; they cover predominantly short stories and novels. There are seven Hindi texts in Intercorp; unfortunately, the English translation is available for only three of them, while the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi. TED talks, held in various languages, primarily English, are equipped with transcripts, and these are translated into 102 languages; there are 179 talks for which a Hindi translation is available. The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, ranging from typesetting and punctuation over capitalization and spelling to word choice and sentence structure. A little bit of control could in principle be obtained from the fact that every input sentence was translated 4 times; we used the 2012 release of the corpus. A software collaboration platform that hosts many open-source projects also facilitates collaborative localization of the tools; we downloaded all revisions of all the hosted projects and extracted the localization (.po) files. Other smaller datasets: this time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
Machine learning models are trained to find patterns in data. NLP models can inadvertently learn socially undesirable patterns when training on gender-biased text. In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information. In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites. Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers. We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and shedding light on offensive language in terms of genderedness.
A large medical text dataset (14 GB) curated down to 4 GB for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example, DHF can be disambiguated to dihydrofolate, diastolic heart failure, dengue hemorrhagic fever, or dihydroxyfumarate.
MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification. The corpora are compiled from hotel reviews and are provided in KAF/NAF format, an XML-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language, two annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the guidelines set out in the OpeNER project.
Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over a structured knowledge base. In the MutualFriend task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend.
Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs and a search corpus consisting of code snippets collected from the most popular Android repositories on GitHub.
This dataset is a compilation of the OneStopEnglish corpus of texts written at three reading levels into one file. Text documents are classified into three reading levels: ele, int, and adv (Elementary, Intermediate, and Advanced). The dataset's usefulness is demonstrated through two applications: automatic readability assessment and automatic text simplification. The corpus consists of 189 texts, each in three versions/reading levels (567 in total).
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
Propagandistic news articles use specific techniques to convey their message, such as whataboutism, red herring, and name calling, among many others. The Propaganda Techniques Corpus (PTC) enables the study of automatic algorithms to detect them. We provide a permanent leaderboard to allow researchers both to advertise their progress and to be up to speed with the state of the art on the tasks offered.
This dataset provides sentiment lexicons for 81 languages generated via graph propagation based on a knowledge graph, a graphical representation of real-world entities and the links between them.
SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis and POS tagging. The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval [-1, 1].
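A minimal sketch of what SentiWS-style entries might look like and how the coarse POS tags map to word classes; the words and weights below are invented for illustration, not actual SentiWS values:

```python
# Illustrative SentiWS-style entries (words and weights are made up, not from the resource):
# each word carries a coarse POS tag and a polarity weight in [-1, 1].
sentiws_sample = {
    "gut":      {"pos": "ADJX", "weight": 0.40},   # positive adjective
    "schlecht": {"pos": "ADJX", "weight": -0.75},  # negative adjective
    "Freude":   {"pos": "NN",   "weight": 0.65},   # positive noun
}

# Map the coarse tags to readable word classes, as described above.
pos_names = {"NN": "noun", "VVINF": "verb", "ADJX": "adjective", "ADV": "adverb"}
for word, entry in sentiws_sample.items():
    print(word, pos_names[entry["pos"]], entry["weight"])
```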
Snips' built-in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes. A related paper and a related Medium post are referenced on the GitHub page.
The SOFC-Exp corpus consists of 45 open-access scholarly articles annotated by domain experts. The accompanying paper presents the corpus and an inter-annotator agreement study that demonstrate the complexity of the suggested named entity recognition and slot filling tasks as well as the high annotation quality.
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2 with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s. The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to align the two resources. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the conversations and their participants.
This is the Tilde MODEL Corpus – Multilingual Open Data for European Languages. The data has been collected from sites allowing free use and reuse of their content, as well as from public sector web sites. The activities have been undertaken as part of the ODINE Open Data Incubator for Europe, which aims to support the next generation of digital businesses and fast-track the development of new products and services. The corpus includes the following parts: Tilde MODEL - EESC, a multilingual corpus compiled from document texts of the European Economic and Social Committee document portal; Tilde MODEL - RAPID, a multilingual parallel corpus compiled from all press releases of the Press Release Database of the European Commission released between 1975 and the end of 2016; Tilde MODEL - ECB, a multilingual parallel corpus compiled from the multilingual pages of the European Central Bank web site; Tilde MODEL - EMA, a corpus compiled from texts of the European Medicines Agency document portal as available at the end of 2016; Tilde MODEL - World Bank, a corpus compiled from texts of the World Bank as available in 2017; Tilde MODEL - Travel Destinations, a multilingual parallel corpus compiled from description texts of travel destinations as available in 2017; a Tilde MODEL multilingual parallel corpus compiled from Riga tourist attraction description texts from a web site in 2017; Tilde MODEL - Lithuanian National Philharmonic Society, a parallel corpus compiled from texts of the Lithuanian National Philharmonic Society web site in 2017; a Tilde MODEL parallel corpus from texts of the web site of Müpa Budapest, the Hungarian national culture house and concert venue, compiled in spring 2017; a Tilde MODEL parallel corpus from texts of a portal of the best of Latvian and foreign creative industries, compiled in spring 2017; and a Tilde MODEL multilingual parallel corpus from texts of a portal, compiled in spring 2017. In total, the corpus covers 30 languages and 274 bitexts, with 125 files, 1.43G tokens, and 62.44M sentence fragments.
Thai Literature Corpora (TLC): corpora of machine-ingestible Thai classical literature texts. Release: 6/25/19. It consists of two datasets. The TLC set contains texts from the Vajirayana Digital Library, stored by chapters and stanzas (non-tokenized): tlc v.2.0 (6/17/19: a total of 34 documents, 292,270 lines, 31,790,734 characters) and tlc v.1.0 (6/11/19: a total of 25 documents, 113,981 lines, 28,775,761 characters). The TNHC set contains texts from the Thai National Historical Corpus, stored by lines (manually tokenized): tnhc v.1.0 (6/25/19: a total of 47 documents, 756,478 lines, 13,361,142 characters).
A translation of the word pair similarity dataset wordsim-353 to Twi. The dataset was presented in the paper Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi (LREC 2020).
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the first time, fundamental human rights to be universally protected. The Declaration was adopted by the UN General Assembly in Paris on 10 December 1948 during its 183rd plenary meeting. The dataset includes translations of the document in 464 languages and dialects. © 1996–2009 The Office of the High Commissioner for Human Rights. This plain text version was prepared by the “UDHR in Unicode” project.
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 176 of the 282 languages from the original WikiANN corpus.
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd: "The city councilmen refused the demonstrators a permit because they [feared/advocated] violence." If the word is "feared", then "they" presumably refers to the city council; if it is "advocated", then "they" presumably refers to the demonstrators.
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment` corpus and tokenized by human annotators. It contains 250 samples for each of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts were removed because they looked like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
A multilingual fine-grained emotion dataset. The dataset consists of human annotated Finnish (25k) and English sentences (30k). Plutchik’s core emotions are used to annotate the dataset with the addition of neutral to create a multilabel multiclass dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to show that XED performs on par with other similar datasets and is therefore a useful tool for sentiment analysis and emotion detection.
A translation of the word pair similarity dataset wordsim-353 to Yorùbá. The dataset was presented in the paper Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi (LREC 2020).
Dataset built from pairs of YouTube captions where both 'auto-generated' and 'manually-corrected' captions are available for a single specified language. This dataset labels two-way (i.e., ignoring single-sided insertions), same-length token differences in the `diff_type` column. The `default_seq` is composed of tokens from the 'auto-generated' captions. When a difference occurs between the 'auto-generated' and 'manually-corrected' caption types, the `correction_seq` contains tokens from the 'manually-corrected' captions.
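A minimal sketch of how a record with these columns might be laid out; the tokens and the 0/1 encoding of `diff_type` below are illustrative assumptions, not the dataset's actual values or label scheme:

```python
# Hypothetical record (values and encoding are illustrative assumptions only):
# token sequences of equal length, with `diff_type` flagging positions where the
# auto-generated and manually-corrected captions differ.
record = {
    "default_seq":    ["i", "beleive", "the", "answer", "is", "hear"],  # auto-generated
    "correction_seq": ["i", "believe", "the", "answer", "is", "here"],  # manually corrected
    "diff_type":      [0, 1, 0, 0, 0, 1],                               # 1 where the sequences differ
}

# Recover the corrected caption by taking the corrected token wherever a difference is flagged.
corrected = [
    c if d else a
    for a, c, d in zip(record["default_seq"], record["correction_seq"], record["diff_type"])
]
print(" ".join(corrected))  # "i believe the answer is here"
```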