Columns: id (string, lengths 2-115), private (bool, 1 class), tags (sequence), description (string, lengths 0-5.93k), downloads (int64, 0-1.14M), likes (int64, 0-1.79k). Each record below lists these fields in order, one per line.
nell
false
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100M<n<1B", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "relation-extraction", "text-to-structured", "text-to-tabular" ]
This dataset provides version 1115 of the beliefs extracted by CMU's Never Ending Language Learner (NELL) and version 1110 of the candidate beliefs extracted by NELL. See http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information extraction system that attempts to read the ClueWeb09 corpus of 500 million web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) and general web searches. The dataset has 4 configurations: nell_belief, nell_candidate, nell_belief_sentences, and nell_candidate_sentences. nell_belief contains the beliefs NELL is confident in, while nell_candidate contains candidate beliefs whose certainties are lower. The two sentence configs extract the CPL sentence patterns filled with the applicable 'best' literal string for the entities filled into the sentence patterns, and also provide sentences found via web searches containing the entities and relationships. There are roughly 21M entries for nell_belief_sentences and 100M sentences for nell_candidate_sentences.
665
2
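The four configurations above can be loaded individually. A minimal sketch, assuming the Hugging Face `datasets` library and that the dataset id and configuration names match those listed in this record:

```python
from itertools import islice
from datasets import load_dataset

# Stream the sentence-level belief configuration named in the description
# (nell_belief_sentences) instead of materializing all ~21M rows at once.
nell = load_dataset("nell", "nell_belief_sentences", split="train", streaming=True)
for example in islice(nell, 3):
    print(example)
```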
neural_code_search
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:1908.09804" ]
Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs and a search corpus consisting of code snippets collected from the most popular Android repositories on GitHub.
1,394
4
news_commentary
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:cs", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:ja", "language:nl", "language:pt", "language:ru", "language:zh", "license:unknown" ]
A parallel corpus of News Commentaries provided by WMT for training SMT. The source is taken from CASMACAT: http://www.casmacat.eu/corpus/news-commentary.html 12 languages, 63 bitexts total number of files: 61,928 total number of tokens: 49.66M total number of sentence fragments: 1.93M
8,609
9
newsgroup
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.
12,001
4
newsph
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:fil", "language:tl", "license:gpl-3.0", "arxiv:2010.11574" ]
Large-scale dataset of Filipino news articles. Sourced for the NewsPH-NLI Project (Cruz et al., 2020).
291
1
newsph_nli
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tl", "license:unknown", "arxiv:2010.11574" ]
First benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing.
269
0
newspop
false
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "social-media-shares-prediction", "arxiv:1801.07055" ]
This is a large data set of news items and their respective social feedback on multiple platforms: Facebook, Google+ and LinkedIn. The collected data relates to a period of 8 months, between November 2015 and July 2016, accounting for about 100,000 news items on four different topics: economy, microsoft, obama and palestine. This data set is tailored for evaluative comparisons in predictive analytics tasks, although allowing for tasks in other research areas such as topic detection and tracking, sentiment analysis in short text, first story detection or news recommendation.
963
2
newsqa
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit" ]
NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.
748
2
newsroom
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:other" ]
NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. Dataset features include: - text: Input news text. - summary: Summary for the news. And additional features: - title: news title. - url: url of the news. - date: date of the article. - density: extractive density. - coverage: extractive coverage. - compression: compression ratio. - density_bin: low, medium, high. - coverage_bin: extractive, abstractive. - compression_bin: low, medium, high. This dataset can be downloaded upon request. Unzip all the contents "train.jsonl, dev.jsonl, test.jsonl" to the tfds folder.
384
4
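Because the files must be requested and placed locally, loading is manual. A minimal sketch, assuming the Hugging Face `datasets` loader for this dataset accepts a `data_dir` pointing at the folder holding the three JSONL files (the path shown is a placeholder):

```python
from datasets import load_dataset

# Point data_dir at the folder containing train.jsonl, dev.jsonl and test.jsonl
# obtained after requesting the corpus (the path here is hypothetical).
newsroom = load_dataset("newsroom", data_dir="/path/to/newsroom")
print(newsroom["train"][0]["summary"])
```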
nkjp-ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:gpl-3.0" ]
The NKJP-NER is based on a human-annotated part of National Corpus of Polish (NKJP). We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.
271
0
nli_tr
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|snli", "source_datasets:extended|multi_nli", "language:tr", "license:cc-by-3.0", "license:cc-by-4.0", "license:cc-by-sa-3.0", "license:mit", "license:other" ]
The Natural Language Inference in Turkish (NLI-TR) is a set of two large-scale datasets that were obtained by translating the foundational NLI corpora (SNLI and MNLI) using Amazon Translate.
488
4
nlu_evaluation_data
false
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1903.05566" ]
Raw part of the NLU Evaluation Data. It contains 25,715 non-empty examples (the original dataset has 25,716 examples) from 68 unique intents belonging to 18 scenarios.
877
4
norec
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:nb", "language:nn", "language:no", "license:cc-by-nc-4.0" ]
NoReC was created as part of the SANT project (Sentiment Analysis for Norwegian Text), a collaboration between the Language Technology Group (LTG) at the Department of Informatics at the University of Oslo, the Norwegian Broadcasting Corporation (NRK), Schibsted Media Group and Aller Media. This first release of the corpus comprises 35,194 reviews extracted from eight different news sources: Dagbladet, VG, Aftenposten, Bergens Tidende, Fædrelandsvennen, Stavanger Aftenblad, DinSide.no and P3.no. In terms of publishing date the reviews mainly cover the time span 2003–2017, although it also includes a handful of reviews dating back as far as 1998.
137
0
norne
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:other", "arxiv:1911.12146" ]
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
153
1
norwegian_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:unknown" ]
Named Entity Recognition dataset for Norwegian. It is a version of the Universal Dependency (UD) Treebank for both Bokmål and Nynorsk (UDN) where all proper nouns have been tagged with their type according to the NER tagging scheme. UDN is a converted version of the Norwegian Dependency Treebank into the UD scheme.
219
0
nq_open
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|natural_questions", "language:en", "license:cc-by-sa-3.0" ]
The NQ-Open task, introduced by Lee et al. (2019), is an open domain question answering benchmark that is derived from Natural Questions. The goal is to predict an English answer string for an input English question. All questions can be answered using the contents of English Wikipedia.
3,335
0
nsmc
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ko", "license:cc-by-2.0" ]
This is a movie review dataset in the Korean language. Reviews were scraped from Naver movies. The dataset construction is based on the method noted in Large movie review dataset from Maas et al., 2011.
2,176
3
numer_sense
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other", "language:en", "license:mit", "arxiv:2005.00683" ]
NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. We propose to study whether numerical commonsense knowledge can be induced from pre-trained language models like BERT, and to what extent this access to knowledge is robust against adversarial examples. We hope this will be beneficial for tasks such as knowledge base completion and open-domain question answering.
1,012
1
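The masked-word-prediction probing described above can be reproduced with any masked language model. A minimal sketch, assuming the `transformers` fill-mask pipeline and an illustrative probe sentence (not taken from the dataset itself):

```python
from transformers import pipeline

# Probe a pre-trained masked LM with a NumerSense-style numerical commonsense query.
fill = pipeline("fill-mask", model="bert-base-uncased")
probe = "A bird usually has [MASK] legs."  # illustrative probe, not from the dataset
for prediction in fill(probe, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```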
numeric_fused_head
false
[ "task_categories:token-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "fused-head-identification" ]
Fused Head constructions are noun phrases in which the head noun is missing and is said to be "fused" with its dependent modifier. This missing information is implicit and is important for sentence understanding. The missing heads are easily filled in by humans, but pose a challenge for computational models. For example, in the sentence "I bought 5 apples but got only 4.", 4 is a Fused-Head, and the missing head is apples, which appears earlier in the sentence. This is a crowd-sourced dataset of 10k numerical fused head examples (1M tokens).
401
1
oclar
false
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:unknown" ]
The researchers of OCLAR, Marwan et al. (2019), gathered Arabic customer reviews from Google Reviews and the Zomato website (https://www.zomato.com/lebanon) across a wide range of domains, including restaurants, hotels, hospitals, local shops, etc. The corpus contains 3,916 reviews on a 5-point rating scale. For this research purpose, the positive class covers ratings from 3 to 5 stars (3,465 reviews), and the negative class covers ratings of 1 and 2 (about 451 texts).
269
1
offcombr
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "hate-speech-detection" ]
OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments from the Brazilian Web.
400
2
offenseval2020_tr
false
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:tr", "license:cc-by-2.0", "offensive-language-classification" ]
OffensEval-TR 2020 is a Turkish offensive language corpus. The corpus consists of randomly sampled tweets annotated in a similar way to OffensEval and GermEval.
464
3
offenseval_dravidian
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:kn", "language:ml", "language:ta", "license:cc-by-4.0", "offensive-language" ]
Offensive language identification dataset for Dravidian languages. The goal of this task is to identify offensive language content in a code-mixed dataset of comments/posts in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media.
533
2
ofis_publik
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:br", "language:fr", "license:unknown" ]
Texts from the Ofis Publik ar Brezhoneg (Breton Language Board) provided by Francis Tyers 2 languages, total number of files: 278 total number of tokens: 2.12M total number of sentence fragments: 0.13M
268
0
ohsumed
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nc-4.0" ]
The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line medical information database, consisting of titles and/or abstracts from 270 medical journals over a five-year period (1987-1991). The available fields are title, abstract, MeSH indexing terms, author, source, and publication type.
518
0
ollie
false
[ "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:other", "relation-extraction", "text-to-structured" ]
The Ollie dataset includes two configs for the data used to train the Ollie information extraction algorithm, for 18M sentences and 3M sentences respectively. This data is for academic use only. From the authors: Ollie is a program that automatically identifies and extracts binary relationships from English sentences. Ollie is designed for Web-scale information extraction, where target relations are not specified in advance. Ollie is our second-generation information extraction system. Whereas ReVerb operates on flat sequences of tokens, Ollie works with a tree-like representation (a graph with only small cycles) using Stanford's compression of the dependencies. This allows Ollie to capture expressions that ReVerb misses, such as long-range relations. Ollie also captures context that modifies a binary relation. Presently Ollie handles attribution (He said/she believes) and enabling conditions (if X then). More information is available at the Ollie homepage: https://knowitall.github.io/ollie/
406
0
omp
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-nc-sa-4.0" ]
The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The data set contains a selection of user posts from the 12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by professional forum moderators employed by the newspaper. The data set contains the following data for each post: * Post ID * Article ID * Headline (max. 250 characters) * Main Body (max. 750 characters) * User ID (the user names used by the website have been re-mapped to new numeric IDs) * Time stamp * Parent post (replies give rise to tree-like discussion thread structures) * Status (online or deleted by a moderator) * Number of positive votes by other community members * Number of negative votes by other community members For each article, the data set contains the following data: * Article ID * Publishing date * Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1) * Title * Body Detailed descriptions of the post selection and annotation procedures are given in the paper. ## Annotated Categories Potentially undesirable content: * Sentiment (negative/neutral/positive) An important goal is to detect changes in the prevalent sentiment in a discussion, e.g., the location within the fora and the point in time where a turn from positive/neutral sentiment to negative sentiment takes place. * Off-Topic (yes/no) Posts which digress too far from the topic of the corresponding article. * Inappropriate (yes/no) Swearwords, suggestive and obscene language, insults, threats etc. * Discriminating (yes/no) Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content. Neutral content that requires a reaction: * Feedback (yes/no) Sometimes users ask questions or give feedback to the author of the article or the newspaper in general, which may require a reply/reaction. Potentially desirable content: * Personal Stories (yes/no) In certain fora, users are encouraged to share their personal stories, experiences, anecdotes etc. regarding the respective topic. * Arguments Used (yes/no) It is desirable for users to back their statements with rational argumentation, reasoning and sources.
533
1
onestop_english
false
[ "task_categories:text2text-generation", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:text-simplification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0" ]
This dataset is a compilation of the OneStopEnglish corpus of texts written at three reading levels into one file. Text documents are classified into three reading levels - ele, int, adv (Elementary, Intermediate and Advanced). This dataset demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification. The corpus consists of 189 texts, each in three versions/reading levels (567 in total).
1,528
12
onestop_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:extended|onestop_english", "language:en", "license:cc-by-sa-4.0", "arxiv:2004.14797" ]
OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the [OneStopEnglish corpus](https://github.com/nishkalavallabhi/OneStopEnglishCorpus). Each article comes in three difficulty levels, Elementary, Intermediate and Advanced. Each paragraph is annotated with three multiple choice reading comprehension questions. The reading comprehension questions can be answered based on any of the three paragraph levels.
282
3
open_subtitles
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:original", "language:af", "language:ar", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:gl", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:ka", "language:kk", "language:ko", "language:lt", "language:lv", "language:mk", "language:ml", "language:ms", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:si", "language:sk", "language:sl", "language:sq", "language:sr", "language:sv", "language:ta", "language:te", "language:th", "language:tl", "language:tr", "language:uk", "language:ur", "language:vi", "language:zh", "license:unknown" ]
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/. IMPORTANT: If you use the OpenSubtitle corpus: Please, add a link to http://www.opensubtitles.org/ to your website and to your reports and publications produced with the data! This is a slightly cleaner version of the subtitle collection using improved sentence alignment and better language checking. 62 languages, 1,782 bitexts total number of files: 3,735,070 total number of tokens: 22.10G total number of sentence fragments: 3.35G
1,484
15
openai_humaneval
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:mit", "code-generation", "arxiv:2107.03374" ]
The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution.
38,002
35
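A proposed completion is judged by running it against the problem's unit tests. A minimal sketch, assuming the Hugging Face `datasets` library and that each record exposes `prompt`, `test` and `entry_point` fields (field names are assumptions here); executing generated code like this should only ever be done in a sandbox:

```python
from datasets import load_dataset

problems = load_dataset("openai_humaneval", split="test")
problem = problems[0]

# A candidate function body would normally come from a code model;
# this placeholder will simply fail the asserts.
candidate_body = "    return True  # placeholder completion\n"

# Assemble prompt + completion + unit tests and execute them in one namespace.
# WARNING: exec of generated code is unsafe outside a sandbox.
program = problem["prompt"] + candidate_body + "\n" + problem["test"]
namespace = {}
exec(program, namespace)
namespace["check"](namespace[problem["entry_point"]])  # raises AssertionError on failure
```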
openbookqa
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown" ]
OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject.
15,855
6
openslr
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:af", "language:bn", "language:ca", "language:en", "language:es", "language:eu", "language:gl", "language:gu", "language:jv", "language:km", "language:kn", "language:ml", "language:mr", "language:my", "language:ne", "language:si", "language:st", "language:su", "language:ta", "language:te", "language:tn", "language:ve", "language:xh", "language:yo", "license:cc-by-sa-4.0" ]
OpenSLR is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. We intend to be a convenient place for anyone to put resources that they have created, so that they can be downloaded publicly.
4,092
7
openwebtext
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0" ]
An open-source replication of the WebText dataset from OpenAI.
301,689
111
opinosis
false
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:apache-2.0", "abstractive-summarization" ]
The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics. Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com.
287
1
opus100
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:extended", "language:af", "language:am", "language:an", "language:ar", "language:as", "language:az", "language:be", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:cs", "language:cy", "language:da", "language:de", "language:dz", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gu", "language:ha", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:ig", "language:is", "language:it", "language:ja", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:li", "language:lt", "language:lv", "language:mg", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:nb", "language:ne", "language:nl", "language:nn", "language:no", "language:oc", "language:or", "language:pa", "language:pl", "language:ps", "language:pt", "language:ro", "language:ru", "language:rw", "language:se", "language:sh", "language:si", "language:sk", "language:sl", "language:sq", "language:sr", "language:sv", "language:ta", "language:te", "language:tg", "language:th", "language:tk", "language:tr", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:wa", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "license:unknown", "arxiv:2004.11867" ]
OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.
19,200
19
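Each of the 99 English-centric pairs is exposed as its own configuration. A minimal sketch, assuming the Hugging Face `datasets` library and that configurations are named by language pair (the "en-fr" config name is an assumption):

```python
from datasets import load_dataset

# Load one English-centric pair; the config name "en-fr" is assumed here.
opus = load_dataset("opus100", "en-fr", split="train")
print(opus[0]["translation"])  # e.g. {"en": "...", "fr": "..."}
```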
opus_books
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ca", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:fi", "language:fr", "language:hu", "language:it", "language:nl", "language:no", "language:pl", "language:pt", "language:ru", "language:sv", "license:unknown" ]
This is a collection of copyright free books aligned by Andras Farkas, which are available from http://www.farkastranslations.com/bilingual_books.php Note that the texts are rather dated due to copyright issues and that some of them are manually reviewed (check the meta-data at the top of the corpus files in XML). The source is multilingually aligned, which is available from http://www.farkastranslations.com/bilingual_books.php. In OPUS, the alignment is formally bilingual but the multilingual alignment can be recovered from the XCES sentence alignment files. Note also that the alignment units from the original source may include multi-sentence paragraphs, which are split and sentence-aligned in OPUS. All texts are freely available for personal, educational and research use. Commercial use (e.g. reselling as parallel books) and mass redistribution without explicit permission are not granted. Please acknowledge the source when using the data! 16 languages, 64 bitexts total number of files: 158 total number of tokens: 19.50M total number of sentence fragments: 0.91M
10,303
4
opus_dgt
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sh", "language:sk", "language:sl", "language:sv", "license:unknown" ]
A collection of translation memories provided by the JRC. Source: https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory 25 languages, 299 bitexts total number of files: 817,410 total number of tokens: 2.13G total number of sentence fragments: 113.52M
1,456
1
opus_dogc
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "language:es", "license:cc0-1.0" ]
This is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Universitat Oberta de Catalunya.
268
0
opus_elhuyar
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:es", "language:eu", "license:unknown" ]
Dataset provided by the Elhuyar foundation, containing parallel data for Spanish to Basque.
267
0
opus_euconst
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:sk", "language:sl", "language:sv", "license:unknown" ]
A parallel corpus collected from the European Constitution for 21 languages.
27,880
2
opus_finlex
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown" ]
The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, which is available in Finnish, Swedish and partially in English. This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also raw XML files.
267
0
opus_fiskmo
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown" ]
fiskmo, a massive parallel corpus for Finnish and Swedish.
267
0
opus_gnome
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:af", "language:am", "language:an", "language:ang", "language:ar", "language:as", "language:ast", "language:az", "language:bal", "language:be", "language:bem", "language:bg", "language:bn", "language:bo", "language:br", "language:brx", "language:bs", "language:ca", "language:crh", "language:cs", "language:csb", "language:cy", "language:da", "language:de", "language:dv", "language:dz", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fo", "language:fr", "language:fur", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:gu", "language:gv", "language:ha", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:ia", "language:id", "language:ig", "language:io", "language:is", "language:it", "language:ja", "language:jbo", "language:ka", "language:kg", "language:kk", "language:km", "language:kn", "language:ko", "language:kr", "language:ks", "language:ku", "language:ky", "language:la", "language:lg", "language:li", "language:lo", "language:lt", "language:lv", "language:mai", "language:mg", "language:mi", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:mus", "language:my", "language:nb", "language:nds", "language:ne", "language:nhn", "language:nl", "language:nn", "language:no", "language:nqo", "language:nr", "language:nso", "language:oc", "language:or", "language:os", "language:pa", "language:pl", "language:ps", "language:pt", "language:quz", "language:ro", "language:ru", "language:rw", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:st", "language:sv", "language:sw", "language:szl", "language:ta", "language:te", "language:tg", "language:th", "language:tk", "language:tl", "language:tr", "language:ts", "language:tt", "language:tyj", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:wa", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "license:unknown" ]
A parallel corpus of GNOME localization files. Source: https://l10n.gnome.org 187 languages, 12,822 bitexts total number of files: 113,344 total number of tokens: 267.27M total number of sentence fragments: 58.12M
1,458
0
opus_infopankki
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:en", "language:es", "language:et", "language:fa", "language:fi", "language:fr", "language:ru", "language:so", "language:sv", "language:tr", "language:zh", "license:unknown" ]
A parallel corpus of 12 languages, 66 bitexts.
8,839
1
opus_memat
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:xh", "license:unknown" ]
Xhosa-English parallel corpora, funded by EPSRC. The Medical Machine Translation project worked on machine translation between isiXhosa and English, with a focus on the medical domain.
266
1
opus_montenegrinsubs
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:cnr", "language:en", "license:unknown" ]
Opus MontenegrinSubs dataset for the machine translation task, for the language pair en-me: English and Montenegrin.
267
0
opus_openoffice
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:ja", "language:ru", "language:sv", "language:zh", "license:unknown" ]
A collection of documents from http://www.openoffice.org/.
3,814
1
opus_paracrawl
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:eu", "language:fi", "language:fr", "language:ga", "language:gl", "language:hr", "language:hu", "language:is", "language:it", "language:km", "language:ko", "language:lt", "language:lv", "language:mt", "language:my", "language:nb", "language:ne", "language:nl", "language:nn", "language:pl", "language:pt", "language:ro", "language:ru", "language:si", "language:sk", "language:sl", "language:so", "language:sv", "language:sw", "language:tl", "language:uk", "language:zh", "license:cc0-1.0" ]
Parallel corpora from Web Crawls collected in the ParaCrawl project. 42 languages, 43 bitexts total number of files: 59,996 total number of tokens: 56.11G total number of sentence fragments: 3.13G
1,469
3
opus_rf
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:sv", "license:unknown" ]
RF is a tiny parallel corpus of the Declarations of the Swedish Government and its translations.
1,449
0
opus_tedtalks
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:hr", "license:unknown" ]
This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic provided under the CC-BY-NC-SA license. 2 languages, total number of files: 2 total number of tokens: 2.81M total number of sentence fragments: 0.17M
270
0
opus_ubuntu
false
[ "task_categories:translation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:ace", "language:af", "language:ak", "language:am", "language:an", "language:ang", "language:ar", "language:ary", "language:as", "language:ast", "language:az", "language:ba", "language:bal", "language:be", "language:bem", "language:ber", "language:bg", "language:bho", "language:bn", "language:bo", "language:br", "language:brx", "language:bs", "language:bua", "language:byn", "language:ca", "language:ce", "language:ceb", "language:chr", "language:ckb", "language:co", "language:crh", "language:cs", "language:csb", "language:cv", "language:cy", "language:da", "language:de", "language:dsb", "language:dv", "language:dz", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:ff", "language:fi", "language:fil", "language:fo", "language:fr", "language:frm", "language:frp", "language:fur", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:grc", "language:gu", "language:guc", "language:gv", "language:ha", "language:haw", "language:he", "language:hi", "language:hil", "language:hne", "language:hr", "language:hsb", "language:ht", "language:hu", "language:hy", "language:ia", "language:id", "language:ig", "language:io", "language:is", "language:it", "language:iu", "language:ja", "language:jbo", "language:jv", "language:ka", "language:kab", "language:kg", "language:kk", "language:kl", "language:km", "language:kn", "language:ko", "language:kok", "language:ks", "language:ksh", "language:ku", "language:kw", "language:ky", "language:la", "language:lb", "language:lg", "language:li", "language:lij", "language:lld", "language:ln", "language:lo", "language:lt", "language:ltg", "language:lv", "language:mai", "language:mg", "language:mh", "language:mhr", "language:mi", "language:miq", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:mus", "language:my", "language:nan", "language:nap", "language:nb", "language:nds", "language:ne", "language:nhn", "language:nl", "language:nn", "language:no", "language:nso", "language:ny", "language:oc", "language:om", "language:or", "language:os", "language:pa", "language:pam", "language:pap", "language:pl", "language:pms", "language:pmy", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:rom", "language:ru", "language:rw", "language:sa", "language:sc", "language:sco", "language:sd", "language:se", "language:shn", "language:shs", "language:si", "language:sk", "language:sl", "language:sm", "language:sml", "language:sn", "language:so", "language:son", "language:sq", "language:sr", "language:st", "language:sv", "language:sw", "language:syr", "language:szl", "language:ta", "language:te", "language:tet", "language:tg", "language:th", "language:ti", "language:tk", "language:tl", "language:tlh", "language:tr", "language:trv", "language:ts", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vec", "language:vi", "language:wa", "language:wae", "language:wo", "language:xal", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "language:zza", "license:bsd-3-clause" ]
A parallel corpus of Ubuntu localization files. Source: https://translations.launchpad.net 244 languages, 23,988 bitexts total number of files: 30,959 total number of tokens: 29.84M total number of sentence fragments: 7.73M
1,448
0
opus_wikipedia
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:bg", "language:cs", "language:de", "language:el", "language:en", "language:es", "language:fa", "language:fr", "language:he", "language:hu", "language:it", "language:nl", "language:pl", "language:pt", "language:ro", "language:ru", "language:sl", "language:tr", "language:vi", "license:unknown" ]
This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek. Please cite the following publication if you use the data: Krzysztof Wołk and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014 20 languages, 36 bitexts total number of files: 114 total number of tokens: 610.13M total number of sentence fragments: 25.90M
842
1
opus_xhosanavy
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:xh", "license:unknown" ]
This dataset is designed for machine translation from English to Xhosa.
268
2
orange_sum
false
[ "task_categories:summarization", "task_ids:news-articles-headline-generation", "task_ids:news-articles-summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fr", "license:unknown", "arxiv:2010.12321" ]
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous. Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.
672
2
oscar
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:100M<n<1B", "size_categories:10K<n<100K", "size_categories:10M<n<100M", "size_categories:1K<n<10K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:original", "language:af", "language:als", "language:am", "language:an", "language:ar", "language:arz", "language:as", "language:ast", "language:av", "language:az", "language:azb", "language:ba", "language:bar", "language:bcl", "language:be", "language:bg", "language:bh", "language:bn", "language:bo", "language:bpy", "language:br", "language:bs", "language:bxr", "language:ca", "language:cbk", "language:ce", "language:ceb", "language:ckb", "language:cs", "language:cv", "language:cy", "language:da", "language:de", "language:diq", "language:dsb", "language:dv", "language:el", "language:eml", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:frr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:gom", "language:gu", "language:he", "language:hi", "language:hr", "language:hsb", "language:ht", "language:hu", "language:hy", "language:ia", "language:id", "language:ie", "language:ilo", "language:io", "language:is", "language:it", "language:ja", "language:jbo", "language:jv", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:krc", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lb", "language:lez", "language:li", "language:lmo", "language:lo", "language:lrc", "language:lt", "language:lv", "language:mai", "language:mg", "language:mhr", "language:min", "language:mk", "language:ml", "language:mn", "language:mr", "language:mrj", "language:ms", "language:mt", "language:mwl", "language:my", "language:myv", "language:mzn", "language:nah", "language:nap", "language:nds", "language:ne", "language:new", "language:nl", "language:nn", "language:no", "language:oc", "language:or", "language:os", "language:pa", "language:pam", "language:pl", "language:pms", "language:pnb", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:ru", "language:sa", "language:sah", "language:scn", "language:sd", "language:sh", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:su", "language:sv", "language:sw", "language:ta", "language:te", "language:tg", "language:th", "language:tk", "language:tl", "language:tr", "language:tt", "language:tyv", "language:ug", "language:uk", "language:ur", "language:uz", "language:vec", "language:vi", "language:vo", "language:wa", "language:war", "language:wuu", "language:xal", "language:xmf", "language:yi", "language:yo", "language:yue", "language:zh", "license:cc0-1.0", "arxiv:2010.14571" ]
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
55,034
78
para_crawl
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10M<n<100M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:cc0-1.0" ]
null
3,319
5
para_pat
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:translation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "language:de", "language:el", "language:en", "language:es", "language:fr", "language:hu", "language:ja", "language:ko", "language:pt", "language:ro", "language:ru", "language:sk", "language:uk", "language:zh", "license:cc-by-4.0" ]
ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts This dataset contains the developed parallel corpus from the open access Google Patents dataset in 74 language pairs, comprising more than 68 million sentences and 800 million tokens. Sentences were automatically aligned using the Hunalign algorithm for the largest 22 language pairs, while the others were abstract (i.e. paragraph) aligned.
2,782
5
parsinlu_reading_comprehension
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia|google", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154" ]
A Persian reading comprehension task (generating an answer, given a question and a context paragraph). The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.
269
0
pass
false
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|yffc100M", "language:en", "license:cc-by-4.0", "image-self-supervised pretraining", "arxiv:2109.13228" ]
PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. The PASS images are sourced from the YFCC-100M dataset.
268
1
paws-x
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:extended|other-paws", "language:de", "language:en", "language:es", "language:fr", "language:ja", "language:ko", "language:zh", "license:other", "paraphrase-identification", "arxiv:1908.11828" ]
PAWS-X, a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. English language is available by default. All translated pairs are sourced from examples in PAWS-Wiki. For further details, see the accompanying paper: PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification (https://arxiv.org/abs/1908.11828) NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1.
17,688
9
paws
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "paraphrase-identification", "arxiv:1904.01130" ]
PAWS: Paraphrase Adversaries from Word Scrambling This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other one based on the Quora Question Pairs (QQP) dataset. For further details, see the accompanying paper: PAWS: Paraphrase Adversaries from Word Scrambling (https://arxiv.org/abs/1904.01130) PAWS-QQP is not available due to the license of QQP. It must be reconstructed by downloading the original data and then running our scripts to produce the data and attach the labels. NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1.
21,905
13
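Since some pairs carry a placeholder label of -1, it can be useful to filter them out before training. A minimal sketch, assuming the Hugging Face `datasets` library and that the Wikipedia-based subset is exposed as the "labeled_final" configuration (the config name is an assumption):

```python
from datasets import load_dataset

# Load the Wikipedia-based subset and drop pairs whose label was replaced with -1.
paws = load_dataset("paws", "labeled_final", split="train")
paws = paws.filter(lambda example: example["label"] != -1)
print(paws[0])
```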
pec
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-retrieval", "task_ids:dialogue-modeling", "task_ids:utterance-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:gpl-3.0" ]
A dataset of around 350K persona-based empathetic conversations. Each speaker is associated with a persona, which comprises multiple persona sentences. The response of each conversation is empathetic.
533
2
allenai/peer_read
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "acceptability-classification", "arxiv:1804.09635" ]
PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers.
426
2
peoples_daily_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:unknown" ]
People's Daily NER Dataset is a commonly used dataset for Chinese NER, with text from People's Daily (人民日报), the largest official newspaper. The dataset is in BIO scheme. Entity types are: PER (person), ORG (organization) and LOC (location).
663
4
per_sent
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-MPQA-KBP Challenge-MediaRank", "language:en", "license:unknown", "arxiv:2011.06128" ]
Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. The dataset consists of sentiment annotations on news articles about people. For each article, annotators judge what the author’s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article. To split the dataset, the entities are divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection; in this collection, there were four entities which were the main entity in nearly 800 articles. To prevent these entities from dominating the train or test splits, we moved them to a separate test collection. We split the remaining entities into training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard -- `test_random`), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent -- `test_fixed`).
269
0
persian_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fa", "license:cc-by-4.0" ]
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. The NER tags are in IOB format.
553
0
pg19
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1911.05507" ]
This repository contains the PG-19 language modeling benchmark. It includes a set of books extracted from the Project Gutenberg books library that were published before 1919. It also contains metadata of book titles and publication dates. PG-19 is over double the size of the Billion Word benchmark and contains documents that are, on average, 20X longer than those in the WikiText long-range language modelling benchmark. Books are partitioned into train, validation, and test sets. Book metadata is stored in metadata.csv, which contains (book_id, short_book_title, publication_date). Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing applied to the text is the removal of boilerplate license text and the mapping of offensive discriminatory words, as specified by Ofcom, to placeholder tokens. Users are free to model the data at the character level, subword level, or via any mechanism that can model an arbitrary string of text. To compare models, we propose to continue measuring word-level perplexity, calculated from the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table (a minimal sketch of this normalisation follows this entry). One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
402
7
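The word-level perplexity described in the PG-19 entry above normalises the dataset's total log-likelihood by the number of words rather than by the number of subword or character tokens. A minimal sketch of that normalisation, with purely hypothetical numbers, might look like this:

```python
import math

def word_level_perplexity(total_log_likelihood_nats: float, num_words: int) -> float:
    # Total log-likelihood of the dataset (computed under any subword or
    # character scheme) divided by the number of *words*, then exponentiated.
    return math.exp(-total_log_likelihood_nats / num_words)

# Hypothetical numbers purely for illustration (~4 nats per word -> PPL ~ 55).
print(word_level_perplexity(total_log_likelihood_nats=-4.0e6, num_words=1_000_000))
```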
php
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "language:de", "language:en", "language:es", "language:fi", "language:fr", "language:he", "language:hu", "language:it", "language:ja", "language:ko", "language:nl", "language:pl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sl", "language:sv", "language:tr", "language:tw", "language:zh", "license:unknown" ]
A parallel corpus originally extracted from http://se.php.net/download-docs.php. The original documents are written in English and have been partly translated into 21 languages. The original manuals contain about 500,000 words. The amount of actually translated texts varies for different languages between 50,000 and 380,000 words. The corpus is rather noisy and may include parts from the English original in some of the translations. The corpus is tokenized and each language pair has been sentence aligned. 23 languages, 252 bitexts total number of files: 71,414 total number of tokens: 3.28M total number of sentence fragments: 1.38M
796
0
etalab-ia/piaf
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:mit" ]
Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia.
279
6
pib
false
[ "task_categories:translation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:bn", "language:en", "language:gu", "language:hi", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "language:ur", "license:cc-by-4.0", "arxiv:2008.04860" ]
Sentence-aligned parallel corpus between 11 Indian languages, crawled and extracted from the Press Information Bureau website.
7,392
3
piqa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1911.11641", "arxiv:1907.10641", "arxiv:1904.09728", "arxiv:1808.05326" ]
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA. Physical commonsense knowledge is a major challenge on the road to true AI-completeness, including robots that interact with the world and understand natural language. PIQA focuses on everyday situations with a preference for atypical solutions. The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft, bake, or manipulate objects using everyday materials. The underlying task is formulated as multiple choice question answering: given a question `q` and two possible solutions `s1`, `s2`, a model or a human must choose the most appropriate solution, of which exactly one is correct (a minimal loading sketch follows this entry). The dataset is further cleaned of basic artifacts using the AFLite algorithm which is an improvement of adversarial filtering. The dataset contains 16,000 examples for training, 2,000 for development and 3,000 for testing.
595,457
14
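A minimal loading sketch for the PIQA entry above, assuming the `datasets` library; the field names (`goal`, `sol1`, `sol2`, `label`) are assumptions mirroring the q/s1/s2 formulation in the description, not values stated there:

```python
from datasets import load_dataset

# Field names below are assumptions for illustration; "label" is taken to
# indicate which of the two candidate solutions is the correct one.
piqa = load_dataset("piqa", split="train")
example = piqa[0]
chosen = example["sol1"] if example["label"] == 0 else example["sol2"]
print(example["goal"], "->", chosen)
```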
pn_summary
false
[ "task_categories:summarization", "task_categories:text-classification", "task_ids:news-articles-summarization", "task_ids:news-articles-headline-generation", "task_ids:text-simplification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fa", "license:mit", "arxiv:2012.11204" ]
A well-structured summarization dataset for the Persian language, consisting of 93,207 records. It is prepared for abstractive/extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification. Note that newlines were replaced with the `[n]` symbol; convert them back into normal newlines (for example `t.replace("[n]", "\n")`) before using the text (a small conversion sketch follows this entry).
293
3
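A small conversion sketch for the pn_summary entry above, assuming the `datasets` library; it restores real newlines in every string field, so no column names need to be assumed:

```python
from datasets import load_dataset

ds = load_dataset("pn_summary", split="train")

def restore_newlines(example):
    # The corpus encodes newlines as the literal token "[n]"; map every string
    # field back to real newlines before downstream use.
    return {k: v.replace("[n]", "\n") if isinstance(v, str) else v
            for k, v in example.items()}

ds = ds.map(restore_newlines)
```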
poem_sentiment
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2011.02686" ]
Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg. This dataset can be used for tasks such as sentiment classification or style transfer for poems.
2,774
7
polemo2
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause" ]
PolEmo2.0 is a set of online reviews from the medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
399
0
poleval2019_cyberbullying
false
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:unknown" ]
In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and related phenomena. In Task 6-2, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech, some of them even putting those two phenomena in the same group. The specific conditions on which we based our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research, will be summarized in an introductory paper for the task. However, the main and definitive condition used to distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying) or a public person/entity/large group (hate-speech).
443
0
poleval2019_mt
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:expert-generated", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:pl", "language:ru", "license:unknown" ]
PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. Submitted solutions compete against one another within certain tasks selected by organizers, using available data, and are evaluated according to pre-established procedures. One of the tasks in PolEval-2019 was Machine Translation (Task-4). The task is to train a machine translation system that is as good as possible, using any technology, with limited textual resources. The competition covers 2 language pairs: the more popular English-Polish (into Polish) and a pair that can be called low-resourced, Russian-Polish (in both directions). Here, Polish-English is also made available to allow for training in both directions. However, the test data is ONLY available for English-Polish.
668
0
polsum
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:pl", "license:cc-by-3.0" ]
Polish Summaries Corpus: the corpus of Polish news summaries.
272
0
polyglot_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:ar", "language:bg", "language:ca", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fa", "language:fi", "language:fr", "language:he", "language:hi", "language:hr", "language:hu", "language:id", "language:it", "language:ja", "language:ko", "language:lt", "language:lv", "language:ms", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sl", "language:sr", "language:sv", "language:th", "language:tl", "language:tr", "language:uk", "language:vi", "language:zh", "license:unknown" ]
Polyglot-NER: a training dataset automatically generated from Wikipedia and Freebase for the task of named entity recognition. The dataset contains the basic Wikipedia-based training data for 40 languages (with coreference resolution). The details of the procedure for generating it are outlined in Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data corresponding to a different language; for example, "es" includes only Spanish examples (a loading sketch follows this entry).
6,761
7
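A minimal loading sketch for the Polyglot-NER entry above, assuming the `datasets` library; the "es" config is the Spanish example named in the description:

```python
from datasets import load_dataset

# Each config corresponds to one language; "es" is the Spanish subset
# mentioned in the description above.
es_ner = load_dataset("polyglot_ner", "es", split="train")
print(es_ner[0])  # one sentence with its words and NER labels
```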
prachathai67k
false
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
288
1
pragmeval
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown" ]
Evaluation of language understanding with an 11-dataset benchmark focusing on discourse and pragmatics.
3,101
3
proto_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2005.00771" ]
This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering, a sample from a larger set of all transcriptions was built. It contains 9789 instances where each instance represents a survey question from the Family Feud game. Each instance is exactly a question, a set of answers, and a count associated with each answer. Each line is a json dictionary, in which: 1. question - contains the question (in original and a normalized form) 2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight. 3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files. (A minimal parsing sketch follows this entry.)
533
1
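The JSONL layout described in the ProtoQA entry above can be read line by line; the record below is purely hypothetical and only mirrors the field names given in the description (the exact nesting of the real files may differ):

```python
import json

# Hypothetical record mirroring the described fields: "question",
# "answerstrings", and "answer-clusters"; all values are invented for illustration.
line = json.dumps({
    "question": {"original": "Name something people bring to the beach.",
                 "normalized": "name something people bring to the beach"},
    "answerstrings": {"towel": 30, "sunscreen": 25},
    "answer-clusters": [
        {"clusterid": "c1", "count": 30, "answers": ["towel"]},
        {"clusterid": "c2", "count": 25, "answers": ["sunscreen"]},
    ],
})

record = json.loads(line)
for cluster in record["answer-clusters"]:
    print(cluster["clusterid"], cluster["count"], cluster["answers"])
```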
psc
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-3.0" ]
The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives.
272
1
ptb_text_only
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other" ]
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.
16,135
6
pubmed
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:text-scoring", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:other", "citation-estimation" ]
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
498
18
pubmed_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "arxiv:1909.06146" ]
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions.
11,727
22
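A minimal loading sketch for the PubMedQA entry above, assuming the `datasets` library; the config name "pqa_labeled" (taken to be the 1k expert-annotated subset) and the field names are assumptions, not a documented interface:

```python
from datasets import load_dataset

# Config and field names below are assumptions for illustration.
pqa = load_dataset("pubmed_qa", "pqa_labeled", split="train")
ex = pqa[0]
print(ex["question"])
print(ex["final_decision"])  # expected to be "yes", "no" or "maybe"
```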
py_ast
false
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:fill-mask", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "license:bsd-2-clause", "license:mit", "code-modeling", "code-generation" ]
Dataset consisting of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of another existing repository), keeping only programs that parse and have at most 30,000 nodes in the AST, and removing obfuscated files.
275
3
qa4mre
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "language:bg", "language:de", "language:en", "language:es", "language:it", "language:ro", "license:unknown" ]
QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in question answering and reading comprehension. The dataset contains a supporting passage and a set of questions corresponding to the passage. Multiple options for answers are provided for each question, of which only one is correct. The training and test datasets are available for the main track. Additional gold standard documents are available for two pilot studies: one on Alzheimer's data, and the other on entrance exams data.
3,542
2
qa_srl
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. There were 2 datasets used in the paper, newswire and Wikipedia. Unfortunately, the newswire dataset is built from the CoNLL-2009 English training set, which is covered under license. Thus, we are providing only the Wikipedia training set here. Please check README.md for more details on the newswire dataset. For the Wikipedia domain, randomly sampled sentences from the English Wikipedia (excluding questions and sentences with fewer than 10 or more than 60 words) were taken.
1,285
1
qa_zre
false
[ "task_categories:question-answering", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "zero-shot-relation-extraction" ]
A dataset reducing relation extraction to simple reading comprehension questions
844
1
qangaroo
false
[ "language:en" ]
We have created two new Reading Comprehension datasets focussing on multi-hop (alias multi-step) inference. Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps. Our aim is to build Reading Comprehension methods that perform multi-hop inference on text, where individual facts are spread out across different documents. The two QAngaroo datasets provide a training and evaluation resource for such methods.
677
0
qanta
false
[ "task_categories:question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "quizbowl", "arxiv:1904.04792" ]
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
670
1
qasc
false
[ "task_categories:question-answering", "task_categories:multiple-choice", "task_ids:extractive-qa", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1910.11473" ]
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
11,678
1
allenai/qasper
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|s2orc", "language:en", "license:cc-by-4.0", "arxiv:2105.03011" ]
A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
396
21
qed
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|natural_questions", "language:en", "license:unknown", "explanations-in-question-answering", "arxiv:2009.06354" ]
QED is a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. It is an expert-annotated dataset of QED explanations built upon a subset of the Google Natural Questions dataset.
1,148
1
qed_amara
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:aa", "language:ab", "language:ae", "language:aeb", "language:af", "language:ak", "language:am", "language:an", "language:ar", "language:arq", "language:arz", "language:as", "language:ase", "language:ast", "language:av", "language:ay", "language:az", "language:ba", "language:be", "language:ber", "language:bg", "language:bh", "language:bi", "language:bm", "language:bn", "language:bnt", "language:bo", "language:br", "language:bs", "language:bug", "language:ca", "language:ce", "language:ceb", "language:ch", "language:cho", "language:cku", "language:cnh", "language:co", "language:cr", "language:cs", "language:cu", "language:cv", "language:cy", "language:da", "language:de", "language:dv", "language:dz", "language:ee", "language:efi", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:ff", "language:fi", "language:fil", "language:fj", "language:fo", "language:fr", "language:ga", "language:gd", "language:gl", "language:gn", "language:gu", "language:ha", "language:hai", "language:haw", "language:haz", "language:hch", "language:he", "language:hi", "language:ho", "language:hr", "language:ht", "language:hu", "language:hup", "language:hus", "language:hy", "language:hz", "language:ia", "language:id", "language:ie", "language:ig", "language:ik", "language:inh", "language:io", "language:iro", "language:is", "language:it", "language:iu", "language:ja", "language:jv", "language:ka", "language:kar", "language:ki", "language:kj", "language:kk", "language:kl", "language:km", "language:kn", "language:ko", "language:kr", "language:ksh", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lb", "language:lg", "language:li", "language:lkt", "language:lld", "language:ln", "language:lo", "language:lt", "language:ltg", "language:lu", "language:luo", "language:luy", "language:lv", "language:mad", "language:mfe", "language:mg", "language:mi", "language:mk", "language:ml", "language:mn", "language:mni", "language:moh", "language:mos", "language:mr", "language:ms", "language:mt", "language:mus", "language:my", "language:nb", "language:nci", "language:nd", "language:ne", "language:nl", "language:nn", "language:nso", "language:nv", "language:ny", "language:oc", "language:om", "language:or", "language:pa", "language:pam", "language:pap", "language:pi", "language:pl", "language:pnb", "language:prs", "language:ps", "language:pt", "language:qu", "language:rm", "language:rn", "language:ro", "language:ru", "language:rup", "language:rw", "language:sa", "language:sc", "language:scn", "language:sco", "language:sd", "language:sg", "language:sgn", "language:sh", "language:si", "language:sk", "language:sl", "language:sm", "language:sn", "language:so", "language:sq", "language:sr", "language:st", "language:sv", "language:sw", "language:szl", "language:ta", "language:te", "language:tet", "language:tg", "language:th", "language:ti", "language:tk", "language:tl", "language:tlh", "language:to", "language:tr", "language:ts", "language:tt", "language:tw", "language:ug", "language:uk", "language:umb", "language:ur", "language:uz", "language:ve", "language:vi", "language:vls", "language:vo", "language:wa", "language:wo", "language:xh", "language:yaq", "language:yi", "language:yo", "language:za", "language:zam", "language:zh", "language:zu", "license:unknown" ]
The QCRI Educational Domain Corpus (formerly QCRI AMARA Corpus) is an open multilingual collection of subtitles for educational videos and lectures collaboratively transcribed and translated over the AMARA web-based platform. Developed by: Qatar Computing Research Institute, Arabic Language Technologies Group The QED Corpus is made public for RESEARCH purpose only. The corpus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copyright Qatar Computing Research Institute. All rights reserved. 225 languages, 9,291 bitexts total number of files: 271,558 total number of tokens: 371.76M total number of sentence fragments: 30.93M
794
2
quac
false
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikipedia", "language:en", "license:mit", "arxiv:1808.07036" ]
Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
1,072
4
quail
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
QuAIL is a reading comprehension dataset. QuAIL contains 15K multi-choice questions in texts 300-350 tokens long, covering 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
14,397
1
quarel
false
[ "language:en" ]
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
8,054
0
quartz
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
QuaRTz (V1) is a crowdsourced dataset of 3864 multiple-choice questions about open domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs). The dataset is split into train (2696), dev (384) and test (784). A background sentence will only appear in a single split.
11,616
2