An Urdu text corpus for machine learning, natural language processing and linguistic analysis.
MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages. Named entities are phrases that contain the
The MBPP (Mostly Basic Python Problems) dataset consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, coverin
MedHop is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining info
A large medical text dataset (14 GB) curated down to 4 GB for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example
The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialo
Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and analyzing natural language understanding systems specifically designed for spoken l
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microso
The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed an
Translator Human Parity Data: Human evaluation results and translation output for the Translator Human Parity Data release, as described in https://blogs.microsoft.com/ai/mach
MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, Relation
Beans is a dataset of images of beans taken in the field using smartphone cameras. It consists of 3 classes: 2 disease classes and the healthy class. Diseases depicted include
This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level
The development of linguistic resources for use in natural language processing is of utmost importance for the continued growth of research and development in the field, especia
A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features.
Raw part of NLU Evaluation Data. It contains 25,715 non-empty examples (the original dataset has 25,716 examples) from 68 unique intents belonging to 18 scenarios.
The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution.
This is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Unive
RF is a tiny parallel corpus of the Declarations of the Swedish Government and its translations.
Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classif
ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts. This dataset contains the developed parallel corpus from the open access Google Patents dataset in 7
A Persian reading comprehension task (generating an answer, given a question and a context paragraph). The questions are mined using Google auto-complete, their answers and the
PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. This dataset contains 23,659 human translated PAWS evaluation pairs an
Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on j
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. The NER tags are in IOB form
A Persian textual entailment task (deciding whether `sent1` entails `sent2`).
A Persian query paraphrasing task (paraphrase or not, given two questions). The questions are partly mined using Google auto-complete, and partly translated from Quora paraph
A Persian reading comprehension task (generating an answer, given a question and a context paragraph). The questions are mined using Google auto-complete, their answers and th
A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment).