The WikiMovies dataset consists of roughly 100k templated questions over 75k entities, with answers drawn from the Open Movie Database (OMDb).
The Wiki Question Answering (WikiQA) corpus from Microsoft.
An Arabic version of WikiQA, produced by running automatic machine translators and crowdsourcing the selection of the best translation to be incorporated into the corpus.
A version of Wikipedia split into plain-text snippets for dense semantic indexing.
2 languages; 132 files in total; 1.80M tokens; 78.36k sentence fragments.
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history.
A dataset extracted from Persian Wikipedia in the form of articles and highlights, cleaned into article/highlight pairs, with the article lengths reduced.
Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The datasets have train/dev/test splits per language. The data is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages.
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organization) tags.
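As an illustration, a minimal sketch of inspecting the tags with the Hugging Face `datasets` library, assuming WikiANN is published on the Hub as `wikiann` with per-language configs such as `en`:

```python
from datasets import load_dataset

# Assumption: WikiANN is available on the Hugging Face Hub as "wikiann",
# with one config per language (e.g. "en").
wikiann = load_dataset("wikiann", "en", split="train")

example = wikiann[0]
# ner_tags are class indices; the feature metadata maps them back to
# label strings such as B-PER, I-LOC, and B-ORG.
tag_names = wikiann.features["ner_tags"].feature.names
print(list(zip(example["tokens"],
               [tag_names[t] for t in example["ner_tags"]])))
```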
The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information.
WikiHow is a large-scale dataset built from the online WikiHow (http://www.wikihow.com/) knowledge base. There are two features:
- text: the WikiHow answer texts.
- headline: the bold lines, which serve as summaries.
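A hedged loading sketch, assuming the legacy Hugging Face `wikihow` script with its `all` config, which expects the raw CSV files to be downloaded manually and passed via `data_dir`:

```python
from datasets import load_dataset

# Assumption: the "wikihow" loading script with the "all" config; the raw
# CSVs must be fetched from the WikiHow release manually, then their
# directory passed as data_dir.
wikihow = load_dataset("wikihow", "all", data_dir="path/to/wikihow_csvs")

sample = wikihow["train"][0]
print(sample["headline"])    # the bold summary lines
print(sample["text"][:200])  # the start of the full answer text
```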
Wikipedia dataset containing cleaned articles of all languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language. Each example contains the content of one full Wikipedia article.
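For example, a minimal sketch of loading one language, assuming a preprocessed dump snapshot config such as `20220301.en`:

```python
from datasets import load_dataset

# Assumption: a preprocessed snapshot config named "20220301.en" exists;
# each example is one cleaned article with "title" and "text" fields.
wiki = load_dataset("wikipedia", "20220301.en", split="train")
print(wiki[0]["title"])
print(wiki[0]["text"][:200])
```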
A large crowd-sourced dataset for developing natural language interfaces for relational databases.
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
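A minimal loading sketch with the Hugging Face `datasets` library, assuming the WikiText-103 variant lives under the `wikitext-103-v1` config:

```python
from datasets import load_dataset

# Assumption: the dataset is on the Hugging Face Hub as "wikitext",
# with the WikiText-103 variant under the "wikitext-103-v1" config.
wikitext = load_dataset("wikitext", "wikitext-103-v1")
print({split: ds.num_rows for split, ds in wikitext.items()})
print(wikitext["train"][10]["text"])  # one raw line of article text
```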
A large-scale, unlabeled text dataset with 39 million tokens in the training set, inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means Tagalog.
A benchmark dataset for language identification containing 235,000 paragraphs in 235 languages.
WinoBias is a Winograd-schema dataset for coreference resolution focused on gender bias. The corpus contains Winograd-schema style sentences with entities corresponding to people referred to by their occupation (e.g., the nurse, the doctor, the carpenter).
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences, requiring the use of world knowledge and reasoning for its resolution.
WinoGrande is a new collection of 44k problems, inspired by the Winograd Schema Challenge (Levesque, Davis, and Morgenstern 2011), but adjusted to improve the scale and robustness against dataset-specific bias. Formulated as a fill-in-the-blank task with binary options, the goal is to choose the right option for a given sentence, which requires commonsense reasoning.
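A sketch of that task format, assuming the Hugging Face `winogrande` loader with a `winogrande_xl` config and `sentence`/`option1`/`option2`/`answer` fields:

```python
from datasets import load_dataset

# Assumption: "winogrande" on the Hugging Face Hub with the
# "winogrande_xl" config; field names below follow that loader's schema.
wg = load_dataset("winogrande", "winogrande_xl", split="train")

item = wg[0]
print(item["sentence"])  # contains the "_" blank to fill
print(item["option1"], "/", item["option2"], "->", item["answer"])
```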
The WIQA dataset V1 has 39705 questions, each containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29808 train questions, 6894 dev questions, and 3003 test questions.
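A quick way to check those split sizes, assuming the dataset is on the Hugging Face Hub as `wiqa` (the field names in the comments are assumptions):

```python
from datasets import load_dataset

# Assumption: the dataset is published on the Hugging Face Hub as "wiqa".
wiqa = load_dataset("wiqa")

# Expected sizes per the description: 29808 train / 6894 dev / 3003 test.
print({split: ds.num_rows for split, ds in wiqa.items()})

q = wiqa["train"][0]
# Field names here ("question_stem", "choices", "answer_label") are
# assumptions about the loader's schema.
print(q["question_stem"], q["choices"], q["answer_label"])
```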
This dataset is designed to provide a forward and reverse dictionary of Korean proverbs.
Wisesight Sentiment Corpus: social media messages in the Thai language with a sentiment category (positive, neutral, negative, question). Released to the public domain under the Creative Commons Zero v1.0 Universal license.
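A sketch of tallying the four categories, assuming the corpus is on the Hugging Face Hub as `wisesight_sentiment` with `texts` and `category` fields:

```python
from collections import Counter
from datasets import load_dataset

# Assumption: published as "wisesight_sentiment" with a ClassLabel
# "category" field covering the four sentiment classes.
ws = load_dataset("wisesight_sentiment", split="train")
label_names = ws.features["category"].names
print(Counter(label_names[ex["category"]] for ex in ws))
```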
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. Out of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question), 250 samples of each label are included.