This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated in *_rom).
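As a minimal loading sketch (assuming this corpus is the one published on the Hugging Face Hub under the id "cc100", which selects a language via the lang parameter):

    from datasets import load_dataset

    # Load the Urdu portion of CC-100; each record exposes a "text" field.
    cc100_ur = load_dataset("cc100", lang="ur", split="train")
    print(cc100_ur[0]["text"])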
CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and matching corresponding language codes in their URLs.
The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at document level with three classes of reuse: wholly derived, partially derived, and non-derived.
This dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and hundreds of different disasters.
Large Movie Reviews Dataset translated into Urdu. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of highly polar movie reviews for training and testing.
mLAMA: a multilingual version of the LAMA benchmark (T-REx and GoogleRE) covering 53 languages.
An Urdu text corpus for machine learning, natural language processing and linguistic analysis.
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's mC4 dataset by AllenAI.
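A minimal sketch for pulling a single language out of this corpus, assuming the AllenAI-processed version is available on the Hugging Face Hub under the id "mc4" with one configuration per language code:

    from datasets import load_dataset

    # Stream the Urdu subset of mC4 to avoid downloading the full corpus.
    mc4_ur = load_dataset("mc4", "ur", split="train", streaming=True)
    print(next(iter(mc4_ur))["text"])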
The Prime Minister's speeches - Mann Ki Baat, on All India Radio, translated into many languages.
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections.
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/. IMPORTANT: If you use the OpenSubtitles corpus, please add a link to http://www.opensubtitles.org/ to your website and to your reports and publications produced with the data.
OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). OPUS-100 contains approximately 55M sentence pairs.
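Because every pair is English-centric, a language pair is selected with an "en-xx" style configuration. A minimal sketch, assuming the corpus is hosted on the Hugging Face Hub as "opus100" and that "en-ur" is among its configurations:

    from datasets import load_dataset

    # Each example holds a "translation" dict keyed by language code.
    opus_en_ur = load_dataset("opus100", "en-ur", split="train")
    pair = opus_en_ur[0]["translation"]
    print(pair["en"], "=>", pair["ur"])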
A parallel corpus of GNOME localization files. Source: https://l10n.gnome.org. 187 languages, 12,822 bitexts; total number of files: 113,344; total number of tokens: 267.27M.
A parallel corpus of Ubuntu localization files. Source: https://translations.launchpad.net. 244 languages, 23,988 bitexts; total number of files: 30,959.
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
This new dataset is a large-scale sentence-aligned corpus in 11 Indian languages, viz. the CVIT-PIB corpus, the largest multilingual corpus available for Indian languages.
The QCRI Educational Domain Corpus (formerly QCRI AMARA Corpus) is an open multilingual collection of subtitles for educational videos and lectures collaboratively transcribed and translated over the AMARA web-based platform.
This is an extensive compilation of Roman Urdu data (Urdu written in Latin/Roman script) tagged for sentiment analysis.
This dataset adds sentiment lexicons for 81 languages generated via graph propagation based on a knowledge graph, a graphical representation of real-world entities and the links between them.
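A minimal loading sketch, assuming these lexicons are published on the Hugging Face Hub under the id "senti_lex" with one configuration per language code, and that Urdu ("ur") is among the 81 languages:

    from datasets import load_dataset

    # Each record pairs a word with a sentiment class label.
    lex_ur = load_dataset("senti_lex", "ur", split="train")
    print(lex_ur[0])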
This is a collection of Quran translations compiled by the Tanzil project. The translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator.
A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words.
This is a collection of translated sentences from Tatoeba. 359 languages, 3,403 bitexts; total number of files: 750; total number of tokens: 65.54M.
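A minimal sketch for loading one bitext from this collection, assuming it is available on the Hugging Face Hub as "tatoeba" and that the English-Urdu pair can be selected through the lang1/lang2 parameters:

    from datasets import load_dataset

    # Sentence pairs are stored as a "translation" dict keyed by language code.
    tatoeba_en_ur = load_dataset("tatoeba", lang1="en", lang2="ur", split="train")
    print(tatoeba_en_ur[0]["translation"])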
The core of WIT3 is the TED Talks corpus, which redistributes the original content published by the TED Conference website (http://www.ted.com). Since 2007, the TED Conference has been posting all video recordings of its talks together with subtitles in English and their translations in more than 80 languages.
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, it was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 as a common standard of achievement for all peoples and all nations.
UMC005 English-Urdu is a parallel corpus of texts in English and Urdu language with sentence alignments. The corpus can be used for experiments with statistical machine translation.
Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective.
The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages. The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning, typically carried by the lemma, and by a bundle of morphological features.
An Urdu fake news dataset containing news from 5 different domains. These domains are Sports, Health, Technology, Entertainment, and Business. The real news was collected from news websites, while the fake news was written by professional journalists.
“Urdu Sentiment Corpus” (USC) shares data of Urdu tweets for sentiment analysis and polarity detection. The dataset consists of tweets labeled for sentiment polarity.
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format.
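Because the annotations follow the IOB2 scheme, the integer tags map onto string labels such as B-PER or I-LOC. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub as "wikiann" with per-language configurations including Urdu:

    from datasets import load_dataset

    wikiann_ur = load_dataset("wikiann", "ur", split="train")
    # Recover the string form of the IOB2 tags from the feature metadata.
    label_names = wikiann_ur.features["ner_tags"].feature.names
    ex = wikiann_ur[0]
    print(list(zip(ex["tokens"], [label_names[t] for t in ex["ner_tags"]])))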