Schema (34 fields per catalog record, with value ranges as displayed by the dataset viewer):
- Name: string (3–106 chars)
- Subsets: list
- HF Link: string (3–69 chars)
- Link: string (24–135 chars)
- License: string (28 classes)
- Year: int32 (2k–2.02k)
- Language: string (2 classes)
- Dialect: string (18 classes)
- Domain: string (10 classes)
- Form: string (3 classes)
- Collection Style: string (7 classes)
- Description: string (16–1.64k chars)
- Volume: string (1–13 chars)
- Unit: string (5 classes)
- Ethical Risks: string (4 classes)
- Provider: string (1–136 chars)
- Derived From: string (2–307 chars)
- Paper Title: string (3–143 chars)
- Paper Link: string (3–285 chars)
- Script: string (4 classes)
- Tokenized: string (2 classes)
- Host: string (21 classes)
- Access: string (3 classes)
- Cost: string (3–11 chars)
- Test Split: string (3 classes)
- Tasks: string (8–181 chars)
- Venue Title: string (2–46 chars)
- Citations: string (3–6 chars)
- Venue Type: string (5 classes)
- Venue Name: string (3–113 chars)
- Authors: string (3–923 chars)
- Affiliations: string (1–470 chars)
- Abstract: string (3–2.15k chars)
- Added By: string (3–25 chars)
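Each entry below is a single flat record over these 34 fields, printed one field per line, with "nan" marking empty cells. As a reading aid, here is the record layout sketched as a Python TypedDict; the field names are taken from the schema above, while the inline comments are assumptions inferred from values observed in this section:

```python
from typing import Dict, List, TypedDict

# Functional TypedDict syntax so the keys can keep their spaces.
CatalogRecord = TypedDict("CatalogRecord", {
    "Name": str,
    "Subsets": List[Dict[str, str]],  # per-dialect subsets: Name/Dialect/Volume/Unit
    "HF Link": str,           # HuggingFace mirror, "nan" when absent
    "Link": str,              # primary hosting link
    "License": str,
    "Year": int,
    "Language": str,          # "ar" or "multilingual"
    "Dialect": str,           # e.g. "mixed", "ar-MSA: (Arabic (Modern Standard Arabic))"
    "Domain": str,            # e.g. "social media", "news articles"
    "Form": str,              # e.g. "text", "spoken"
    "Collection Style": str,  # e.g. "crawling and annotation(other)"
    "Description": str,
    "Volume": str,            # numeric string with thousands separators, e.g. "9,414"
    "Unit": str,              # e.g. "sentences", "tokens", "documents", "hours"
    "Ethical Risks": str,     # "Low" / "Medium" / "High" / ...
    "Provider": str,
    "Derived From": str,
    "Paper Title": str,
    "Paper Link": str,
    "Script": str,            # e.g. "Arab", "Arab-Latn"
    "Tokenized": str,         # "Yes" / "No"
    "Host": str,              # e.g. "GitHub", "GitLab", "LDC", "CodaLab"
    "Access": str,            # e.g. "Free", "Upon-Request"
    "Cost": str,
    "Test Split": str,        # "Yes" / "No" / ...
    "Tasks": str,             # comma-separated task names
    "Venue Title": str,       # e.g. "WANLP", "arXiv", "LREC"
    "Citations": str,
    "Venue Type": str,        # e.g. "conference", "workshop", "journal", "preprint"
    "Venue Name": str,
    "Authors": str,
    "Affiliations": str,
    "Abstract": str,
    "Added By": str,
})
```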
ArCOV19-Rumors
[]
https://huggingface.co/datasets/arbml/ArCOV19_claims
https://gitlab.com/bigirqu/ArCOV-19/-/tree/master/ArCOV19-Rumors
unknown
2021
ar
mixed
social media
text
crawling and annotation(other)
The first Arabic dataset for rumor verification on Twitter
9,414
sentences
Medium
Qatar University
nan
ArCOV19-Rumors: Arabic COVID-19 Twitter Dataset for Misinformation Detection
https://aclanthology.org/2021.wanlp-1.8.pdf
Arab
No
GitLab
Free
nan
No
fact checking
WANLP
nan
workshop
Arabic Natural Language Processing Workshop
nan
nan
nan
Fatima Haouari
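For instance, the ArCOV19-Rumors entry above, re-serialized as one such record (values copied verbatim from the lines above; trailing fields elided for brevity):

```python
arcov19_rumors = {  # partial CatalogRecord; remaining fields omitted below
    "Name": "ArCOV19-Rumors",
    "Subsets": [],
    "HF Link": "https://huggingface.co/datasets/arbml/ArCOV19_claims",
    "Link": "https://gitlab.com/bigirqu/ArCOV-19/-/tree/master/ArCOV19-Rumors",
    "License": "unknown",
    "Year": 2021,
    "Language": "ar",
    "Dialect": "mixed",
    "Domain": "social media",
    "Form": "text",
    "Collection Style": "crawling and annotation(other)",
    "Description": "The first Arabic dataset for rumor verification on Twitter",
    "Volume": "9,414",
    "Unit": "sentences",
    "Ethical Risks": "Medium",
    "Provider": "Qatar University",
    "Tasks": "fact checking",
    "Added By": "Fatima Haouari",
    # ... remaining fields ("Paper Title", "Venue Title", etc.) omitted here
}
```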
SenWave
[]
nan
https://github.com/gitdevqiang/SenWave
unknown
2020
multilingual
mixed
social media
text
crawling and annotation(other)
The largest fine-grained annotated COVID-19 tweet dataset
10,000
sentences
Medium
Multiple institutions
nan
SenWave: Monitoring the Global Sentiments under the COVID-19 Pandemic
https://arxiv.org/pdf/2006.10842.pdf
Arab
No
GitHub
Upon-Request
nan
No
emotion detection
arXiv
nan
preprint
nan
nan
nan
nan
Nora Alturayeif
OpenITI-proc
[]
nan
https://zenodo.org/record/2535593#.YWh7FS8RozU
CC BY 4.0
2019
ar
ar-CLS: (Arabic (Classic))
other
text
crawling and annotation(other)
A linguistically annotated version of the OpenITI corpus, with annotations for lemmas, POS tags, parse trees, and morphological segmentation
1,500,000,000
tokens
Low
Multiple institutions
OpenITI
Studying the History of the Arabic Language: Language Technology and a Large-Scale Historical Corpus
https://arxiv.org/pdf/1809.03891.pdf
Arab-Latn
Yes
zenodo
Free
nan
No
text generation, language modeling
LRE
nan
journal
Language Resources and Evaluation
Yonatan Belinkov, Alexander Magidow, Alberto Barrón-Cedeño, Avi Shmidman, Maxim Romanov
nan
Arabic is a widely-spoken language with a long and rich history, but existing corpora and language technology focus mostly on modern Arabic and its varieties. Therefore, studying the history of the language has so far been mostly limited to manual analyses on a small scale. In this work, we present a large-scale historical corpus of the written Arabic language, spanning 1400 years. We describe our efforts to clean and process this corpus using Arabic NLP tools, including the identification of reused text. We study the history of the Arabic language using a novel automatic periodization algorithm, as well as other techniques. Our findings confirm the established division of written Arabic into Modern Standard and Classical Arabic, and confirm other established periodizations, while suggesting that written Arabic may be divisible into still further periods of development.
Yonatan Belinkov
APGC v2.0: Arabic Parallel Gender Corpus v2.0
[]
nan
https://camel.abudhabi.nyu.edu/arabic-parallel-gender-corpus/
custom
2021
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
The Arabic Parallel Gender Corpus v2.0 (APGC v2.0) is designed to support research on gender bias and personalization in natural language processing applications working on Arabic. It expands on Habash et al. (2019)’s Arabic Parallel Gender Corpus (APGC v1.0) by adding 2nd person targets as well as increasing the total number of sentences over 6.5 times, reaching over 590K words.
80,326
sentences
Medium
NYU Abu Dhabi
APGC v1.0
The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses
https://arxiv.org/pdf/2110.09216.pdf
Arab
Yes
CAMeL Resources
Upon-Request
nan
Yes
gender identification, gender rewriting
arXiv
nan
preprint
nan
Bashar Alhafni, Nizar Habash, Houda Bouamor
New York University Abu Dhabi, Carnegie Mellon University in Qatar
Gender bias in natural language processing (NLP) applications, particularly machine translation, has been receiving increasing attention. Much of the research on this issue has focused on mitigating gender bias in English NLP models and systems. Addressing the problem in poorly resourced, and/or morphologically rich languages has lagged behind, largely due to the lack of datasets and resources. In this paper, we introduce a new corpus for gender identification and rewriting in contexts involving one or two target users (I and/or You) -- first and second grammatical persons with independent grammatical gender preferences. We focus on Arabic, a gender-marking morphologically rich language. The corpus has multiple parallel components: four combinations of 1st and 2nd person in feminine and masculine grammatical genders, as well as English, and English to Arabic machine translation output. This corpus expands on Habash et al. (2019)'s Arabic Parallel Gender Corpus (APGC v1.0) by adding second person targets as well as increasing the total number of sentences over 6.5 times, reaching over 590K words. Our new dataset will aid the research and development of gender identification, controlled text generation, and post-editing rewrite systems that could be used to personalize NLP applications and provide users with the correct outputs based on their grammatical gender preferences. We make the Arabic Parallel Gender Corpus (APGC v2.0) publicly available.
Bashar Alhafni
The Nine Books Of Arabic Hadith
[]
https://huggingface.co/datasets/arbml/Hadith
https://github.com/abdelrahmaan/Hadith-Data-Sets
unknown
2020
ar
ar-CLS: (Arabic (Classic))
other
text
crawling and annotation(other)
There are two files of hadith: the first contains all 62,169 hadiths from the Nine Books, both with and without tashkeel (diacritics); the second contains a preprocessed version of the hadith data, with normalization, stop-word removal, and lemmatization applied.
62,169
documents
Low
nan
nan
nan
nan
Arab
No
GitHub
Free
nan
No
text classification, text similarity
nan
nan
nan
nan
nan
nan
nan
Abdulrahman Kamar
Shamela et al Arabic Corpus
[]
nan
https://github.com/tarekeldeeb/GloVe-Arabic/tree/master/arabic_corpus
CC BY 4.0
2018
ar
ar-CLS: (Arabic (Classic))
other
text
crawling
The Arabic corpus (1.9B words) combines the following resources: ShamelaLibrary348.7z (1.15B), the UN Arabic corpus (0.37B), AraCorpus.tar.gz (0.14B), the Arabic Wikipedia latest-articles dump (0.11B), Tashkeela-arabic-diacritized-text-utf8-0.3.zip (0.07B), Arabic tweets (0.03B), and watan-2004.7z (0.01B).
1,754,541,204
tokens
Low
nan
nan
nan
nan
Arab
Yes
GitHub
Free
nan
No
text generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Tarek Eldeeb
Quran Speech: Imam + Users
[]
https://huggingface.co/datasets/arbml/quran_uthmani
https://github.com/tarekeldeeb/DeepSpeech-Quran/tree/master/data/quran
CC BY 4.0
2019
ar
ar-CLS: (Arabic (Classic))
transcribed audio
spoken
crawling and annotation(other)
7 full Quran recitations plus 18K filtered user recitations
61,000
sentences
Low
quran.ksu.edu.sa + tarteel.io
nan
nan
nan
Arab
No
GitHub
Free
nan
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Tarek Eldeeb
OntoNotes Release 5
[]
nan
https://catalog.ldc.upenn.edu/LDC2013T19
LDC User Agreement
2013
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
other
The goal of the project was to annotate a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in three languages (English, Chinese, and Arabic) with structural information (syntax and predicate argument structure) and shallow semantics (word sense linked to an ontology and coreference).
300,000
tokens
Medium
LDC
nan
CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes
https://aclanthology.org/W12-4501.pdf
Arab
No
LDC
Free
nan
No
coreference resolution, word sense disambiguation, named entity recognition
SIGDAT
nan
workshop
Special Interest Group on Linguistic data and corpus-based approaches to NLP
nan
nan
nan
Amr Keleg
ArSarcasm-v2
[ { "Name": "Egyptian", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "2,981", "Unit": "sentences" }, { "Name": "Gulf", "Dialect": "ar-GLF: (Arabic (Gulf))", "Volume": "966", "Unit": "sentences" }, { "Name": "Levantine", "Dialect": "ar-LEV: (Arabic(Levant))", "Volume": "671", "Unit": "sentences" }, { "Name": "Maghrebi", "Dialect": "ar-MA: (Arabic (Morocco))", "Volume": "45", "Unit": "sentences" }, { "Name": "MSA", "Dialect": "ar-MSA: (Arabic (Modern Standard Arabic))", "Volume": "10,885", "Unit": "sentences" } ]
https://huggingface.co/datasets/arbml/ArSarcasm_v2
https://github.com/iabufarha/ArSarcasm-v2
MIT License
2021
ar
mixed
social media
text
crawling and annotation(other)
ArSarcasm-v2 is an extension of the original ArSarcasm dataset published along with the paper From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset. ArSarcasm-v2 consists of ArSarcasm along with portions of DAICT corpus and some new tweets. Each tweet was annotated for sarcasm, sentiment and dialect. The final dataset consists of 15,548 tweets divided into 12,548 training tweets and 3,000 testing tweets. ArSarcasm-v2 was used and released as a part of the shared task on sarcasm detection and sentiment analysis in Arabic
15,548
sentences
Medium
Multiple institutions
ArSarcasm: https://github.com/iabufarha/ArSarcasm
Overview of the WANLP 2021 Shared Task on Sarcasm and Sentiment Detection in Arabic
https://aclanthology.org/2021.wanlp-1.36/
Arab
No
GitHub
Free
nan
Yes
sarcasm detection, sentiment analysis, dialect identification
WANLP
20.0
workshop
Arabic Natural Language Processing Workshop
Ibrahim Abu Farha, Wajdi Zaghouani, Walid Magdy
The University of Edinburgh, Hamad Bin Khalifa University
This paper provides an overview of the WANLP 2021 shared task on sarcasm and sentiment detection in Arabic. The shared task has two subtasks: sarcasm detection (subtask 1) and sentiment analysis (subtask 2). This shared task aims to promote and bring attention to Arabic sarcasm detection, which is crucial to improve the performance in other tasks such as sentiment analysis. The dataset used in this shared task, namely ArSarcasm-v2, consists of 15,548 tweets labelled for sarcasm, sentiment and dialect. We received 27 and 22 submissions for subtasks 1 and 2 respectively. Most of the approaches relied on using and fine-tuning pre-trained language models such as AraBERT and MARBERT. The top achieved results for the sarcasm detection and sentiment analysis tasks were 0.6225 F1-score and 0.748 F1-PN respectively.
Ibrahim Abu Farha
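The Subsets field is machine-readable JSON, and the per-dialect volumes should sum to the record's total Volume. A quick consistency check against the ArSarcasm-v2 entry above (the literal is copied from its Subsets field; note that the comma thousands separators must be stripped before summing):

```python
import json

subsets = json.loads("""
[
  {"Name": "Egyptian",  "Dialect": "ar-EG: (Arabic (Egypt))",   "Volume": "2,981",  "Unit": "sentences"},
  {"Name": "Gulf",      "Dialect": "ar-GLF: (Arabic (Gulf))",   "Volume": "966",    "Unit": "sentences"},
  {"Name": "Levantine", "Dialect": "ar-LEV: (Arabic (Levant))", "Volume": "671",    "Unit": "sentences"},
  {"Name": "Maghrebi",  "Dialect": "ar-MA: (Arabic (Morocco))", "Volume": "45",     "Unit": "sentences"},
  {"Name": "MSA",       "Dialect": "ar-MSA: (Arabic (Modern Standard Arabic))", "Volume": "10,885", "Unit": "sentences"}
]
""")

total = sum(int(s["Volume"].replace(",", "")) for s in subsets)
assert total == 15_548  # matches the record's Volume field
```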
AraCovid19-SSD: Arabic COVID-19 Sentiment and Sarcasm Detection Dataset
[]
nan
https://github.com/MohamedHadjAmeur/AraCovid19-SSD
CC BY-NC-SA 4.0
2021
ar
mixed
social media
text
crawling and annotation(other)
AraCovid19-SSD is a manually annotated Arabic COVID-19 sarcasm and sentiment detection dataset containing 5,162 tweets.
5,162
sentences
High
Research Centre on Scientific and Technical Information (CERIST)
nan
AraCovid19-SSD: Arabic COVID-19 Sentiment and Sarcasm Detection Dataset
https://arxiv.org/pdf/2110.01948v1.pdf
Arab
No
other
Upon-Request
nan
No
sarcasm detection, sentiment detection
arXiv
nan
preprint
nan
Mohamed Seghir Hadj Ameur, Hassina Aliane
Research Centre on Scientific and Technical Information (CERIST)
Coronavirus disease (COVID-19) is an infectious respiratory disease that was first discovered in late December 2019, in Wuhan, China, and then spread worldwide causing a lot of panic and death. Users of social networking sites such as Facebook and Twitter have been focused on reading, publishing, and sharing novelties, tweets, and articles regarding the newly emerging pandemic. A lot of these users often employ sarcasm to convey their intended meaning in a humorous, funny, and indirect way making it hard for computer-based applications to automatically understand and identify their goal and the harm level that they can inflict. Motivated by the emerging need for annotated datasets that tackle these kinds of problems in the context of COVID-19, this paper builds and releases AraCOVID19-SSD, a manually annotated Arabic COVID-19 sarcasm and sentiment detection dataset containing 5,162 tweets. To confirm the practical utility of the built dataset, it has been carefully analyzed and tested using several classification models.
Abdelrahman Kaseb
DiaLex
[ { "Name": "Algerian", "Dialect": "ar-DZ: (Arabic (Algeria))", "Volume": "607", "Unit": "sentences" }, { "Name": "Egyptian", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "588", "Unit": "sentences" }, { "Name": "Lebanese", "Dialect": "ar-LB: (Arabic (Lebanon))", "Volume": "633", "Unit": "sentences" }, { "Name": "Syrian", "Dialect": "ar-SY: (Arabic (Syria))", "Volume": "593", "Unit": "sentences" }, { "Name": "Tunisian", "Dialect": "ar-TN: (Arabic (Tunisia))", "Volume": "649", "Unit": "sentences" } ]
https://huggingface.co/datasets/arbml/dialex
https://github.com/UBC-NLP/dialex
unknown
2021
ar
mixed
other
text
other
A Benchmark for Evaluating Multidialectal Arabic Word Embeddings
3,070
sentences
Low
Multiple institutions
nan
DiaLex: A Benchmark for Evaluating Multidialectal Arabic Word Embeddings
https://aclanthology.org/2021.wanlp-1.2
Arab
No
GitHub
Free
nan
No
benchmarking multidialectal word embeddings
WANLP
nan
workshop
Arabic Natural Language Processing Workshop
Muhammad Abdul-Mageed, Shady Elbassuoni, Jad Doughman, AbdelRahim Elmadany, El Moatez Billah Nagoudi, Yorgo Zoughby, Ahmad Shaher, Iskander Gaba, Ahmed Helal, and Mohammed El-Razzaz.
nan
Word embeddings are a core component of modern natural language processing systems, making the ability to thoroughly evaluate them a vital task. We describe DiaLex, a benchmark for intrinsic evaluation of dialectal Arabic word embeddings. DiaLex covers five important Arabic dialects: Algerian, Egyptian, Lebanese, Syrian, and Tunisian. Across these dialects, DiaLex provides a testbank for six syntactic and semantic relations, namely male to female, singular to dual, singular to plural, antonym, comparative, and genitive to past tense. DiaLex thus consists of a collection of word pairs representing each of the six relations in each of the five dialects. To demonstrate the utility of DiaLex, we use it to evaluate a set of existing and new Arabic word embeddings that we developed. Beyond evaluation of word embeddings, DiaLex supports efforts to integrate dialects into the Arabic language curriculum. It can be easily translated into Modern Standard Arabic and English, which can be useful for evaluating word translation. Our benchmark, evaluation code, and new word embedding models will be publicly available.
Iskander Gaba
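DiaLex packages each of its six relations (male to female, singular to dual, singular to plural, antonym, comparative, genitive to past tense) as word pairs per dialect, intended for the standard offset-based analogy test. A minimal sketch of that evaluation, assuming a gensim KeyedVectors model and a list of (a, b) pairs for one relation; the helper and input format are illustrative, not the benchmark's official evaluation code:

```python
from itertools import permutations
from gensim.models import KeyedVectors

def relation_accuracy(kv: KeyedVectors, pairs: list[tuple[str, str]]) -> float:
    """For every ordered combination of pairs (a, b) and (c, d), predict d
    from the offset b - a + c and count top-1 hits."""
    correct = total = 0
    for (a, b), (c, d) in permutations(pairs, 2):
        if not all(w in kv for w in (a, b, c, d)):
            continue  # skip pairs with out-of-vocabulary words
        pred = kv.most_similar(positive=[b, c], negative=[a], topn=1)[0][0]
        correct += pred == d
        total += 1
    return correct / total if total else 0.0
```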
COVID-19 Disinfo: COVID-19 Disinformation Twitter Dataset
[]
https://huggingface.co/datasets/arbml/COVID_19_Disinformation_ar
https://github.com/firojalam/COVID-19-disinformation
CC BY-NC-SA 4.0
2021
multilingual
mixed
social media
text
crawling and annotation(other)
A large dataset of 16K manually annotated tweets for fine-grained disinformation analysis focused on COVID-19; it combines the perspectives and interests of journalists, fact-checkers, social media platforms, policy makers, and society, and covers Arabic, Bulgarian, Dutch, and English.
5,000
sentences
High
Multiple institutions
nan
Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society
https://pure.rug.nl/ws/portalfiles/portal/203339411/2021.findings_emnlp.56.pdf
Arab
No
GitHub
Free
nan
No
fact checking
Findings of EMNLP
nan
conference
Findings of the Conference on Empirical Methods in Natural Language Processing
Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, Abdulaziz Al-Homaid, Wajdi Zaghouani, Tommaso Caselli, Gijs Danoe, Friso Stolk, Britt Bruntink, Preslav Nakov
nan
With the emergence of the COVID-19 pandemic, the political and the medical aspects of disinformation merged as the problem got elevated to a whole new level to become the first global infodemic. Fighting this infodemic has been declared one of the most important focus areas of the World Health Organization, with dangers ranging from promoting fake cures, rumors, and conspiracy theories to spreading xenophobia and panic. Addressing the issue requires solving a number of challenging problems such as identifying messages containing claims, determining their check-worthiness and factuality, and their potential to do harm as well as the nature of that harm, to mention just a few. To address this gap, we release a large dataset of 16K manually annotated tweets for fine-grained disinformation analysis that focuses on COVID-19, combines the perspectives and the interests of journalists, fact-checkers, social media platforms, policy makers, and society, and covers Arabic, Bulgarian, Dutch, and English. Finally, we show strong evaluation results using pretrained Transformers, thus confirming the practical utility of the dataset in monolingual vs. multilingual, and single task vs. multitask settings.
Abdelrahman Kaseb
Senti lex
[]
https://huggingface.co/datasets/senti_lex
https://www.kaggle.com/datasets/rtatman/sentiment-lexicons-for-81-languages
GPL-3.0
2014
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
This dataset provides sentiment lexicons for 81 languages, generated via graph propagation over a knowledge graph: a graphical representation of real-world entities and the links between them.
2,794
tokens
Low
Stony Brook University
nan
Building Sentiment Lexicons for All Major Languages
https://aclanthology.org/P14-2063.pdf
Arab
Yes
kaggle
Free
nan
No
sentiment analysis
ACL
186.0
conference
Association for Computational Linguistics
Yanqing Chen, S. Skiena
Stony Brook University
Sentiment analysis in a multilingual world remains a challenging problem, because developing language-specific sentiment lexicons is an extremely resource-intensive process. Such lexicons remain a scarce resource for most languages. In this paper, we address this lexicon gap by building high-quality sentiment lexicons for 136 major languages. We integrate a variety of linguistic resources to produce an immense knowledge graph. By appropriately propagating from seed words, we construct sentiment lexicons for each component language of our graph. Our lexicons have a polarity agreement of 95.7% with published lexicons, while achieving an overall coverage of 45.2%. We demonstrate the performance of our lexicons in an extrinsic analysis of 2,000 distinct historical figures’ Wikipedia articles on 30 languages. Despite cultural difference and the intended neutrality of Wikipedia articles, our lexicons show an average sentiment correlation of 0.28 across all language pairs.
Abdelrahman Kaseb
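The lexicons above were produced by propagating polarity from seed words across a multilingual knowledge graph. As a toy illustration of the general idea (not the authors' actual algorithm), here is a breadth-first propagation that attenuates a seed's polarity at each hop over a synonym graph given as adjacency lists:

```python
from collections import deque

def propagate(graph: dict[str, list[str]],
              seeds: dict[str, float],
              decay: float = 0.5,
              max_hops: int = 3) -> dict[str, float]:
    """Spread seed polarities (e.g. +1.0 / -1.0) to neighbors, attenuating
    by `decay` per hop and keeping the strongest score seen per word."""
    scores = dict(seeds)
    frontier = deque((word, score, 0) for word, score in seeds.items())
    while frontier:
        word, score, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(word, []):
            new_score = score * decay
            if abs(new_score) > abs(scores.get(neighbor, 0.0)):
                scores[neighbor] = new_score
                frontier.append((neighbor, new_score, hops + 1))
    return scores

graph = {"good": ["great"], "great": ["fine"], "bad": ["awful"]}
print(propagate(graph, {"good": 1.0, "bad": -1.0}))
# {'good': 1.0, 'bad': -1.0, 'great': 0.5, 'awful': -0.5, 'fine': 0.25}
```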
POLYGLOT-NER
[]
nan
https://www3.cs.stonybrook.edu/~polyglot/ner2/
unknown
2014
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
Polyglot-NER is a training dataset for named entity recognition, automatically generated from Wikipedia and Freebase. It contains the basic Wikipedia-based training data for 40 languages (with coreference resolution). The generation procedure is outlined in Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data for a different language; for example, "es" includes only Spanish examples.
10,000,144
tokens
Low
Stony Brook University
nan
POLYGLOT-NER: Massive Multilingual Named Entity Recognition
https://arxiv.org/pdf/1410.3791.pdf
Arab-Latn
Yes
other
Free
nan
No
named entity recognition
arXiv
161.0
preprint
nan
Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, Steven Skiena
Stony Brook University
The increasing diversity of languages used on the web introduces a new level of complexity to Information Retrieval (IR) systems. We can no longer assume that textual content is written in one language or even the same language family. In this paper, we demonstrate how to build massive multilingual annotators with minimal human expertise and intervention. We describe a system that builds Named Entity Recognition (NER) annotators for 40 major languages using Wikipedia and Freebase. Our approach does not require NER human annotated datasets or language specific resources like treebanks, parallel corpora, and orthographic rules. The novelty of approach lies therein - using only language agnostic techniques, while achieving competitive performance. Our method learns distributed word representations (word embeddings) which encode semantic and syntactic features of words in each language. Then, we automatically generate datasets from Wikipedia link structure and Freebase attributes. Finally, we apply two preprocessing stages (oversampling and exact surface form matching) which do not require any linguistic expertise. Our evaluation is two fold: First, we demonstrate the system performance on human annotated datasets. Second, for languages where no gold-standard benchmarks are available, we propose a new method, distant evaluation, based on statistical machine translation.
Abdelrahman Kaseb
iSarcasmEval: SemEval-2022 Task 6
[]
https://huggingface.co/datasets/arbml/iSarcasmEval_task_A
https://github.com/iabufarha/iSarcasmEval
unknown
2020
multilingual
mixed
social media
text
crawling and annotation(other)
A Dataset of Intended Sarcasm
4,447
sentences
High
University of Edinburgh
nan
iSarcasm: A Dataset of Intended Sarcasm
https://arxiv.org/pdf/1911.03123.pdf
Arab-Latn
No
GitHub
Free
nan
Yes
sarcasm detection
SEMEVAL
17.0
workshop
International Workshop on Semantic Evaluation
Silviu Oprea, Walid Magdy
University of Edinburgh
We consider the distinction between intended and perceived sarcasm in the context of textual sarcasm detection. The former occurs when an utterance is sarcastic from the perspective of its author, while the latter occurs when the utterance is interpreted as sarcastic by the audience. We show the limitations of previous labelling methods in capturing intended sarcasm and introduce the iSarcasm dataset of tweets labeled for sarcasm directly by their authors. Examining the state-of-the-art sarcasm detection models on our dataset showed low performance compared to previously studied datasets, which indicates that these datasets might be biased or obvious and sarcasm could be a phenomenon under-studied computationally thus far. By providing the iSarcasm dataset, we aim to encourage future NLP research to develop methods for detecting sarcasm in text as intended by the authors of the text, not as labeled under assumptions that we demonstrate to be sub-optimal.
Abdelrahman Kaseb
Arabic Hate Speech 2022
[]
nan
https://codalab.lisn.upsaclay.fr/competitions/2324
custom
2022
ar
mixed
social media
text
crawling and annotation(other)
Fine-Grained Hate Speech Detection on Arabic Twitter
10,157
sentences
High
QCRI
nan
Emojis as Anchors to Detect Arabic Offensive Language and Hate Speech
https://arxiv.org/pdf/2201.06723.pdf
Arab
No
CodaLab
Free
nan
Yes
hate speech detection
OSACT
nan
workshop
Workshop on Open-Source Arabic Corpora and Processing Tools
Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury
Qatar Computing Research Institute
We introduce a generic, language-independent method to collect a large percentage of offensive and hate tweets regardless of their topics or genres. We harness the extralinguistic information embedded in the emojis to collect a large number of offensive tweets. We apply the proposed method on Arabic tweets and compare it with English tweets -- analyzing some cultural differences. We observed a constant usage of these emojis to represent offensiveness throughout different timelines on Twitter. We manually annotate and publicly release the largest Arabic dataset for offensive, fine-grained hate speech, vulgar and violence content. Furthermore, we benchmark the dataset for detecting offense and hate speech using different transformer architectures and perform in-depth linguistic analysis. We evaluate our models on external datasets -- a Twitter dataset collected using a completely different method, and a multi-platform dataset containing comments from Twitter, YouTube and Facebook, for assessing generalization capability. Competitive results on these datasets suggest that the data collected using our method captures universal characteristics of offensive language. Our findings also highlight the common words used in offensive communications; common targets for hate speech; specific patterns in violence tweets; and pinpoint common classification errors due to the need to understand the context, consider culture and background, and the presence of sarcasm, among others.
Abdelrahman Kaseb
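The collection method behind this dataset anchors on emojis that frequently mark offensive content and keeps the tweets that contain them. A minimal sketch of that filtering step; the anchor set and sample tweets are illustrative placeholders, not the paper's curated emoji list:

```python
# Illustrative anchors only; the paper derives its own emoji list.
ANCHOR_EMOJIS = {"\U0001F595", "\U0001F4A9", "\U0001F621"}

def has_anchor(tweet: str) -> bool:
    """True if the tweet contains any anchor emoji."""
    return any(emoji in tweet for emoji in ANCHOR_EMOJIS)

# Placeholder stream; in practice this would come from the Twitter API.
tweets = ["an angry tweet \U0001F621", "a perfectly benign tweet"]
candidates = [t for t in tweets if has_anchor(t)]
print(len(candidates))  # 1
```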
ArCorona: Analyzing Arabic Tweets in the Early Days of Coronavirus (COVID-19) Pandemic
[]
https://huggingface.co/datasets/arbml/ArCorona
https://alt.qcri.org/resources/ArCorona.tsv
unknown
2020
ar
mixed
social media
text
crawling and annotation(other)
Collected to help prevent the spread of rumors and misinformation about the virus and bad cures
8,000
sentences
High
QCRI
nan
ArCorona: Analyzing Arabic Tweets in the Early Days of Coronavirus (COVID-19) Pandemic
https://arxiv.org/abs/2012.01462
Arab
No
other
Free
nan
No
COVID misinformation detection
arXiv
7.0
preprint
nan
Hamdy Mubarak, Sabit Hassan
Qatar Computing Research Institute
Over the past few months, there were huge numbers of circulating tweets and discussions about Coronavirus (COVID-19) in the Arab region. It is important for policy makers and many people to identify types of shared tweets to better understand public behavior, topics of interest, requests from governments, sources of tweets, etc. It is also crucial to prevent spreading of rumors and misinformation about the virus or bad cures. To this end, we present the largest manually annotated dataset of Arabic tweets related to COVID-19. We describe annotation guidelines, analyze our dataset and build effective machine learning and transformer based models for classification.
Abdelrahman Kaseb
Adult Content Detection on Arabic Twitter: Analysis and Experiments
[]
nan
https://alt.qcri.org/resources/AdultContentDetection.zip
unknown
2020
ar
mixed
social media
text
crawling and annotation(other)
Adult Content Detection on Arabic Twitter
50,000
sentences
High
QCRI
nan
Adult Content Detection on Arabic Twitter: Analysis and Experiments
https://aclanthology.org/2021.wanlp-1.14.pdf
Arab
No
other
Free
nan
Yes
adult language detection
arXiv
5.0
preprint
nan
Hamdy Mubarak, Sabit Hassan and Ahmed Abdelali
Qatar Computing Research Institute
With Twitter being one of the most popular social media platforms in the Arab region, it is not surprising to find accounts that post adult content in Arabic tweets; despite the fact that these platforms dissuade users from such content. In this paper, we present a dataset of Twitter accounts that post adult content. We perform an in-depth analysis of the nature of this data and contrast it with normal tweet content. Additionally, we present extensive experiments with traditional machine learning models, deep neural networks and contextual embeddings to identify such accounts. We show that from user information alone, we can identify such accounts with F1 score of 94.7% (macro average). With the addition of only one tweet as input, the F1 score rises to 96.8%.
Abdelrahman Kaseb
Understanding and Detecting Dangerous Speech in Social Media
[]
https://huggingface.co/datasets/arbml/Dangerous_Dataset
https://github.com/UBC-NLP/Arabic-Dangerous-Dataset
unknown
2020
ar
mixed
social media
text
crawling and annotation(other)
Dangerous speech detection
5,000
sentences
High
The University of British Columbia
nan
Understanding and Detecting Dangerous Speech in Social Media
https://arxiv.org/pdf/2005.06608.pdf
Arab
No
GitHub
Free
nan
No
dangerous speech detection
arXiv
8.0
preprint
nan
Ali Alshehri, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed
The University of British Columbia
Social media communication has become a significant part of daily activity in modern societies. For this reason, ensuring safety in social media platforms is a necessity. Use of dangerous language such as physical threats in online environments is somewhat rare, yet remains highly important. Although several works have been performed on the related issue of detecting offensive and hateful language, dangerous speech has not previously been treated in any significant way. Motivated by these observations, we report our efforts to build a labeled dataset for dangerous speech. We also exploit our dataset to develop highly effective models to detect dangerous content. Our best model performs at 59.60% macro F1, significantly outperforming a competitive baseline.
Abdelrahman Kaseb
APCD
[]
https://huggingface.co/datasets/arbml/APCD
https://hci-lab.github.io/LearningMetersPoems/
unknown
2019
ar
ar-CLS: (Arabic (Classic))
other
text
crawling
A dataset of Arabic poetry containing 1,831,770 verses along with their meters.
1,831,770
sentences
Low
Helwan University
nan
Learning meters of Arabic and English poems with Recurrent Neural Networks: a step forward for language understanding and synthesis
https://arxiv.org/pdf/1905.05700.pdf
Arab
No
GitHub
Free
nan
No
meter classification
arXiv
nan
preprint
nan
Waleed A. Yousef, Omar M. Ibrahim, Taha M. Madbouly, Moustafa A. Mahmoud
Nile University, Nile University, Nile University
Recognizing a piece of writing as a poem or prose is usually easy for the majority of people; however, only specialists can determine which meter a poem belongs to. In this paper, we build Recurrent Neural Network (RNN) models that can classify poems according to their meters from plain text. The input text is encoded at the character level and directly fed to the models without feature handcrafting. This is a step forward for machine understanding and synthesis of languages in general, and Arabic language in particular. Among the 16 poem meters of Arabic and the 4 meters of English the networks were able to correctly classify poem with an overall accuracy of 96.38% and 82.31% respectively. The poem datasets used to conduct this research were massive, over 1.5 million of verses, and were crawled from different nontechnical sources, almost Arabic and English literature sites, and in different heterogeneous and unstructured formats. These datasets are now made publicly available in clean, structured, and documented format for other future research. To the best of the authors’ knowledge, this research is the first to address classifying poem meters in a machine learning approach, in general, and in RNN featureless based approach, in particular. In addition, the dataset is the first publicly available dataset ready for the purpose of future computational research.
Zaid Alyafeai
IDRISI-R
[ { "Name": "Arabic gold", "Dialect": "mixed", "Volume": "4,593", "Unit": "sentences" }, { "Name": "Arabic silver", "Dialect": "mixed", "Volume": "1,187,123", "Unit": "sentences" }, { "Name": "English gold", "Dialect": "mixed", "Volume": "20,514", "Unit": "sentences" }, { "Name": "English silver", "Dialect": "mixed", "Volume": "56,682", "Unit": "sentences" } ]
nan
https://github.com/rsuwaileh/IDRISI
custom
2022
multilingual
mixed
social media
text
crawling and annotation(other)
IDRISI-R is the largest-scale publicly-available Twitter Location Mention Recognition (LMR) dataset, in both English and Arabic languages. It contains 41 disaster events of different types such as floods, fires, etc. In addition to tagging LMs in text, the LMs are labeled for location types such as countries, cities, streets, POIs, etc.
1,268,912
sentences
Medium
Qatar University, QCRI, Hamad Bin Khalifa University
Kawarith and humAID datasets
(Under review) IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter
nan
Arab-Latn
Yes
GitHub
Free
nan
Yes
location mention recognition
nan
nan
journal
nan
Reem Suwaileh, Tamer Elsayed, Muhammad Imran
Computer Science and Engineering Department, College of Engineering, Qatar University, Doha, Qatar. Qatar Computing Research Institute (QCRI), Hamad Bin Khalifa University (HBKU), Doha, Qatar
While utilizing Twitter data for crisis management, a critical challenge that hinders authorities' response is the scarcity of geotagged messages. Although studies show the presence of toponyms in tweets and their effectiveness as alternative information to geotagged messages, limited focus has been given to location mention recognition in tweets. In fact, the community lacks a standard dataset to thrive research towards building robust models and solutions. To bridge this gap, we present two human-labeled datasets for the location mention recognition task in text messages, particularly tweets. The human annotation task labels toponym spans and assigns a location type (e.g., country, state, city) to them. The datasets contain tweets from 41 large-scale disaster events (e.g., floods, earthquakes) covering a wide geographical area of English and Arabic-speaking countries. Moreover, we benchmark the datasets using standard and deep learning models and present rigorous quantitative and qualitative analysis to highlight their superiority over past efforts. Last but not least, the trained models are used to process raw data comprising millions of tweets and offered as a silver dataset.
Reem Suwaileh
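Note that Volume is stored as a string with thousands separators (e.g. "1,268,912" above) alongside a separate Unit field. A small helper, sketched here, to normalize it for aggregation; the IDRISI-R subsets make a handy check, since the gold and silver volumes sum to the record total:

```python
def parse_volume(volume: str, unit: str) -> tuple[int, str]:
    """Turn a catalog Volume string like "1,268,912" into an int, keeping
    the unit so sentences/tokens/documents/hours are never mixed."""
    return int(volume.replace(",", "")), unit

idrisi_subsets = ["4,593", "1,187,123", "20,514", "56,682"]  # gold + silver
total = sum(parse_volume(v, "sentences")[0] for v in idrisi_subsets)
assert total == parse_volume("1,268,912", "sentences")[0]
```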
AFND
[]
https://huggingface.co/datasets/arbml/AFND
https://data.mendeley.com/datasets/67mhx6hhzd/1
CC BY 4.0
2022
ar
mixed
news articles
text
crawling
AFND consists of 606912 public news articles that were scraped from 134 public news websites of 19 different Arab countries over a 6-month period using Python scripts.
606,912
documents
Low
Multiple Institutions
nan
AFND: Arabic fake news dataset for the detection and classification of articles credibility
nan
Arab
No
Mendeley Data
Free
nan
No
news credibility detection
Data in Brief
nan
journal
Data in Brief
Ashwaq Khalil, Moath Jarrah, Monther Aldwairi, Manar Jaradat
Department of Computer Engineering, Jordan University of Science and Technology; Department of Computer Engineering, Jordan University of Science and Technology; College of Technological Innovation, Zayed University; Department of Computer Engineering, The Hashemite University
The news credibility detection task has started to gain more attention recently due to the rapid increase of news on different social media platforms. This article provides a large, labeled, and diverse Arabic Fake News Dataset (AFND) that is collected from public Arabic news websites. This dataset enables the research community to use supervised and unsupervised machine learning algorithms to classify the credibility of Arabic news articles. AFND consists of 606912 public news articles that were scraped from 134 public news websites of 19 different Arab countries over a 6-month period using Python scripts. The Arabic fact-check platform, Misbar, is used manually to classify each public news source into credible, not credible, or undecided. Weak supervision is applied to label news articles with the same label as the public source. AFND is imbalanced in the number of articles in each class. Hence, it is useful for researchers who focus on finding solutions for imbalanced datasets. The dataset is available in JSON format and can be accessed from Mendeley Data repository.
Zaid Alyafeai
AT-ODTSA
[]
https://huggingface.co/datasets/arbml/AT_ODSTA
https://github.com/sabudalfa/AT-ODTSA
unknown
2022
ar
mixed
social media
text
crawling and annotation(other)
A dataset of Arabic Tweets for Open-Domain Targeted Sentiment Analysis, which includes Arabic tweets along with labels that specify targets (topics) and sentiments (opinions) expressed in the collected tweets.
3,000
sentences
Medium
Multiple Institutions
nan
AT-ODTSA: a Dataset of Arabic Tweets for Open Domain Targeted Sentiment Analysis
https://journal.uob.edu.bh/bitstream/handle/123456789/4607/IJCDS-1101105-1570749771.pdf
Arab
No
GitHub
Free
nan
No
open-domain targeted sentiment analysis
IJCDS
nan
journal
International Journal of Computing and Digital Systems
Shaaban Sahmoud, Shadi Abudalfa, Wisam Elmasry
Department of Computer Engineering, Fatih Sultan Mehmet Vakif University; Information Technology Department, University College of Applied Sciences; Department of Computer Engineering, Istanbul Kultur University
In the field of sentiment analysis, most research has conducted experiments on datasets collected from Twitter for manipulating a specific language. Only a small number of datasets have been collected for detecting sentiments expressed in Arabic tweets. Moreover, a very limited number of such datasets is suitable for conducting recent research directions such as target-dependent sentiment analysis and open-domain targeted sentiment analysis. Thereby, there is a dire need for reliable datasets that are specifically acquired for open-domain targeted sentiment analysis with Arabic language. Therefore, in this paper, we introduce AT-ODTSA, a dataset of Arabic Tweets for Open-Domain Targeted Sentiment Analysis, which includes Arabic tweets along with labels that specify targets (topics) and sentiments (opinions) expressed in the collected tweets. To the best of our knowledge, our work presents the first dataset that is manually annotated for applying Arabic open-domain targeted sentiment analysis. We also present a detailed statistical analysis of the dataset. The AT-ODTSA dataset is suitable for training numerous machine learning models such as deep learning-based models.
Zaid Alyafeai
ArCovidVac
[]
https://huggingface.co/datasets/arbml/ArCovidVac
https://alt.qcri.org/resources/ArCovidVac.zip
unknown
2022
ar
mixed
social media
text
crawling and annotation(other)
The largest manually annotated Arabic tweet dataset, ArCovidVac, for the COVID-19 vaccination campaign, covering many countries in the Arab region.
10,000
sentences
High
QCRI
nan
ArCovidVac: Analyzing Arabic Tweets About COVID-19 Vaccination
https://arxiv.org/pdf/2201.06496.pdf
Arab
No
QCRI Resources
Free
nan
Yes
informativeness, text classification, stance detection
LREC
nan
conference
Language Resources and Evaluation Conference
Hamdy Mubarak, Sabit Hassan, Shammur Absar Chowdhury, Firoj Alam
Qatar Computing Research Institute, HBKU; University of Pittsburgh; Qatar Computing Research Institute, HBKU
The emergence of the COVID-19 pandemic and the first global infodemic have changed our lives in many different ways. We relied on social media to get the latest information about the COVID-19 pandemic and at the same time to disseminate information. The content on social media consisted not only of health-related advice, plans, and informative news from policymakers, but also of conspiracies and rumors. It became important to identify such information as soon as it is posted to make an actionable decision (e.g., debunking rumors, or taking certain measures for traveling). To address this challenge, we develop and publicly release the first largest manually annotated Arabic tweet dataset, ArCovidVac, for the COVID-19 vaccination campaign, covering many countries in the Arab region. The dataset is enriched with different layers of annotation, including, (i) Informativeness (more vs. less importance of the tweets); (ii) fine-grained tweet content types (e.g., advice, rumors, restriction, authenticate news/information); and (iii) stance towards vaccination (pro-vaccination, neutral, anti-vaccination). Further, we performed in-depth analysis of the data, exploring the popularity of different vaccines, trending hashtags, topics and presence of offensiveness in the tweets. We studied the data for individual types of tweets and temporal changes in stance towards vaccine. We benchmarked the ArCovidVac dataset using transformer architectures for informativeness, content types, and stance detection.
Zaid Alyafeai
APCD2
[]
https://huggingface.co/datasets/arbml/APCDv2
https://github.com/Gheith-Abandah/classify-arabic-poetry
unknown
2020
ar
ar-CLS: (Arabic (Classic))
other
text
crawling
1,657K verses of poems and prose used to develop neural networks that classify and diacritize Arabic poetry.
1,831,770
sentences
Low
The University of Jordan
APCD
Classifying and diacritizing Arabic poems using deep recurrent neural networks
https://www.sciencedirect.com/science/article/pii/S1319157820305784/pdfft?md5=07be922e052bf43933bdb7bea5189718&pid=1-s2.0-S1319157820305784-main.pdf
Arab
No
GitHub
Free
nan
Yes
meter classification
nan
nan
journal
Journal of King Saud University - Computer and Information Sciences
Gheith A. Abandah, Mohammed Z. Khedher, Mohammad R. Abdel-Majeed, Hamdi M. Mansour, Salma F. Hulliel, Lara M. Bisharat
School of Engineering, The University of Jordan; School of Engineering, The University of Jordan; School of Engineering, The University of Jordan; School of Arts, The University of Jordan; School of Engineering, The University of Jordan; School of Engineering, The University of Jordan
Poetry has a prominent history in Arabic literature. The classical Arabic poetry has 16 meters that vary in rhythm and target purpose. Chanting a poem eloquently requires knowing the poem’s meter and obtaining a diacritized version of its verses (letters inscribed with their short vowels); diacritics are often not inscribed in Arabic texts. This work proposes solutions to classify input Arabic text into the 16 poetry meters and prose. It also investigates the automatic diacritization of Arabic poetry. We adopt a machine learning approach using a large dataset of 1,657k verses of poems and prose to develop neural networks to classify and diacritize Arabic poetry. We propose deep and narrow recurrent neural networks with bidirectional long short-term memory cells for solving these problems. The proposed model classifies the input text with an average accuracy of 97.27%, which is significantly higher than previous work. We also propose a solution that achieves an accuracy that approaches 100% when multiple verses of the same poem are available through predicting the class from the aggregate probabilities of the multiple verses. Diacritizing poetry is much harder than diacritizing prose due to the poet’s meticulous selection of phrases and relaxation of some diacritization rules.
Zaid Alyafeai
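The near-100% multi-verse accuracy reported above comes from predicting a poem's class from the aggregate probabilities of its verses. A minimal sketch of one standard way to aggregate (summing per-verse softmax outputs and taking the argmax; the paper's exact aggregation may differ):

```python
import numpy as np

def poem_class(verse_probs: np.ndarray) -> int:
    """verse_probs: (n_verses, n_classes) softmax outputs, one row per
    verse; n_classes would be 17 here (16 meters + prose)."""
    return int(verse_probs.sum(axis=0).argmax())

# Three verses, three classes shown for brevity.
probs = np.array([[0.6, 0.3, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.7, 0.2, 0.1]])
print(poem_class(probs))  # 0: per-class sums are [1.7, 1.0, 0.3]
```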
Author Attribution Tweets
[]
https://huggingface.co/datasets/arbml/Author_Attribution_Tweets
https://fada.birzeit.edu/handle/20.500.11889/6743
unknown
2021
ar
ar-MSA: (Arabic (Modern Standard Arabic))
social media
text
crawling and annotation(other)
Consists of 71,397 MSA tweets from 45 authors, collected from Twitter.
71,397
sentences
Medium
Birzeit University
nan
Authorship Attribution of Modern Standard Arabic Short Texts
https://fada.birzeit.edu/bitstream/20.500.11889/6787/1/AA_PAPER___ACM.pdf
Arab
No
other
Free
nan
Yes
authorship attribution
nan
nan
preprint
nan
Yara Abuhammad, Yara Addabe, Nataly Ayyad, Adnan Yahya
Department of Electrical and Computer Engineering, Birzeit University, Palestine, Department of Electrical and Computer Engineering, Birzeit University, Palestine, Department of Electrical and Computer Engineering, Birzeit University, Palestine, Department of Electrical and Computer Engineering, Birzeit University, Palestine
Text data, including short texts, constitute a major share of web content. The availability of this data to billions of users triggers frequent plagiarism attacks. Authorship Attribution (AA) seeks to identify the most probable author of a given text based on similarity to the writing style of potential authors. In this paper, we approach AA as a writing style profile generation process, where we group text instances for each author into a single profile. We use Twitter as the source for our short Modern Standard Arabic (MSA) texts. Numerous experiments with various training approaches, tools and features allowed us to settle on a text representation method that relies on text concatenation of Arabic tweets to form chunks, which are then duplicated to reach a precalculated length. These chunks are used to train machine learning models for our 45 author profiles. This allowed us to achieve accuracies up to 99%, which compares favorably with the best results reported in the literature
Zaid Alyafeai
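The chunking scheme described above concatenates an author's tweets and duplicates the text until a precalculated length is reached. A minimal sketch of that preprocessing step; the target length here is a placeholder, not the paper's precalculated value:

```python
def build_chunk(author_tweets: list[str], target_len: int = 2000) -> str:
    """Concatenate one author's tweets, then repeat the text until it
    reaches target_len characters (2000 is an assumed placeholder)."""
    text = " ".join(author_tweets)
    while len(text) < target_len:
        text = text + " " + text  # duplicate to grow toward target_len
    return text[:target_len]

chunk = build_chunk(["first tweet", "second tweet"])
print(len(chunk))  # 2000
```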
Sa`7r
[]
https://huggingface.co/datasets/arbml/SaudiIrony
https://github.com/iwan-rg/Saudi-Dialect-Irony-Dataset
CC0
2022
ar
ar-SA: (Arabic (Saudi Arabia))
social media
text
crawling and annotation(other)
The dataset was collected using Twitter API and it consists of 19,810 tweets, 8,089 of them are labeled as ironic tweets.
19,810
sentences
Medium
King Saud University
nan
Sa`7r: A Saudi Dialect Irony Dataset
nan
Arab
No
GitHub
Free
nan
No
irony detection
OSACT
nan
workshop
Open-Source Arabic Corpora and Processing Tools
Halah AlMazrua, Najla AlHazzani, Amaal AlDawod, Lama AlAwlaqi, Noura AlReshoudi, Hend Al-Khalifa and Luluh AlDhubayi
nan
In sentiment analysis, detecting irony is considered a major challenge. The key problem with detecting irony is the difficulty to recognize the implicit and indirect phrases which signifies the opposite meaning. In this paper, we present Sa`7r (ساخر), the Saudi irony dataset, and describe our efforts in constructing it. The dataset was collected using Twitter API and it consists of 19,810 tweets, 8,089 of them are labeled as ironic tweets. We trained several models for irony detection task using machine learning models and deep learning models. The machine learning models include: K-Nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Naïve Bayes (NB). While the deep learning models include BiLSTM and AraBERT. The detection results show that among the tested machine learning models, the SVM outperformed other classifiers with an accuracy of 0.68. On the other hand, the deep learning models achieved an accuracy of 0.66 in the BiLSTM model and 0.71 in the AraBERT model. Thus, the AraBERT model achieved the most accurate result in detecting irony phrases in Saudi Dialect.
Zaid Alyafeai
Arabic Hate Speech 2022 Shared Task
[]
https://huggingface.co/datasets/arbml/Arabic_Hate_Speech
https://sites.google.com/view/arabichate2022/home
custom
2022
ar
mixed
social media
text
crawling and annotation(other)
The largest Arabic dataset for offensive language, fine-grained hate speech, and vulgar and violent content.
12,698
sentences
High
QCRI
nan
Emojis as Anchors to Detect Arabic Offensive Language and Hate Speech
https://arxiv.org/pdf/2201.06723.pdf
Arab
No
QCRI Resources
Free
nan
Yes
offensive language detection, hate speech detection
arXiv
nan
preprint
nan
Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury
Qatar Computing Research Institute
We introduce a generic, language-independent method to collect a large percentage of offensive and hate tweets regardless of their topics or genres. We harness the extralinguistic information embedded in the emojis to collect a large number of offensive tweets. We apply the proposed method on Arabic tweets and compare it with English tweets – analysing key cultural differences. We observed a constant usage of these emojis to represent offensiveness throughout different timespans on Twitter. We manually annotate and publicly release the largest Arabic dataset for offensive, fine-grained hate speech, vulgar and violence content. Furthermore, we benchmark the dataset for detecting offensiveness and hate speech using different transformer architectures and perform in-depth linguistic analysis. We evaluate our models on external datasets – a Twitter dataset collected using a completely different method, and a multi-platform dataset containing comments from Twitter, YouTube and Facebook, for assessing generalization capability. Competitive results on these datasets suggest that the data collected using our method captures universal characteristics of offensive language. Our findings also highlight the common words used in offensive communications, common targets for hate speech, specific patterns in violence tweets; and pinpoint common classification errors that can be attributed to limitations of NLP models. We observe that even state-of-the-art transformer models may fail to take into account culture, background and context or understand nuances present in real-world data such as sarcasm.
Zaid Alyafeai
xquad
[]
https://huggingface.co/datasets/xquad
https://github.com/deepmind/xquad
CC BY-SA 4.0
2019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
human translation
A benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Consequently, the dataset is entirely parallel across 11 languages.
1,190
documents
Low
DeepMind
SQuAD
On the Cross-lingual Transferability of Monolingual Representations
https://aclanthology.org/2020.acl-main.421.pdf
Arab
No
GitHub
Free
nan
Yes
question answering
ACL
nan
conference
Association for Computational Linguistics
Mikel Artetxe, Sebastian Ruder, Dani Yogatama
HiTZ Center, University of the Basque Country; DeepMind, DeepMind
State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot crosslingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective—freezing parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.
Zaid Alyafeai
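Since XQuAD is hosted on the Hub, the Arabic portion can be pulled directly. A sketch of the loading call; the per-language config name (`xquad.ar`) and the single `validation` split match the dataset card as best I recall, so verify them before relying on this:

```python
from datasets import load_dataset

# Arabic config; other languages follow the same "xquad.<lang>" pattern.
xquad_ar = load_dataset("xquad", "xquad.ar")
print(xquad_ar["validation"][0]["question"])  # 1,190 QA pairs in total
```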
mC4
[]
https://huggingface.co/datasets/mc4
https://www.tensorflow.org/datasets/catalog/c4#c4multilingual_nights_stay
CC BY 4.0
2019
multilingual
mixed
other
text
crawling
A colossal, cleaned version of Common Crawl's web crawl corpus.
53,256,040
documents
Low
Google
C4
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
https://arxiv.org/pdf/1910.10683.pdf
Arab
No
other
Free
nan
Yes
text generation, language modeling
JMLR
nan
journal
Journal of Machine Learning Research
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
Google
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code
Zaid Alyafeai
opus100
[]
https://huggingface.co/datasets/opus100
https://data.statmt.org/opus-100-corpus/v1.0/
unknown
2020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.
1,040,000
sentences
Low
University of Edinburgh
nan
Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation
https://arxiv.org/pdf/2004.11867.pdf
Arab
No
other
Free
nan
Yes
machine translation
arXiv
nan
preprint
nan
Biao Zhang, Philip Williams, Ivan Titov, Rico Sennrich
School of Informatics, University of Edinburgh; School of Informatics, University of Edinburgh; School of Informatics, University of Edinburgh; Department of Computational Linguistics, University of Zurich
Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations. In this paper, we explore ways to improve them. We argue that multilingual NMT requires stronger modeling capacity to support language pairs with varying typological characteristics, and overcome this bottleneck via language-specific components and deepening NMT architectures. We identify the off-target translation issue (i.e. translating into a wrong target language) as the major source of the inferior zero-shot performance, and propose random online backtranslation to enforce the translation of unseen training language pairs. Experiments on OPUS-100 (a novel multilingual dataset with 100 languages) show that our approach substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by ∼10 BLEU, approaching conventional pivot-based methods.
Zaid Alyafeai
CoVoST 2
[]
https://huggingface.co/datasets/covost2
https://github.com/facebookresearch/covost
CC0
2020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
crawling and annotation(other)
A large-scale multilingual speech translation (ST) corpus based on Common Voice, built to foster ST research with the largest-ever open dataset. Its latest version covers translations from English into 15 languages: Arabic, Catalan, Welsh, German, Estonian, Persian, Indonesian, Japanese, Latvian, Mongolian, Slovenian, Swedish, Tamil, Turkish, and Chinese.
6
hours
Low
Facebook AI
Common Voice
CoVoST 2 and Massively Multilingual Speech-to-Text Translation
https://arxiv.org/pdf/2007.10310.pdf
Arab
No
GitHub
Free
nan
Yes
speech recognition
arXiv
nan
preprint
nan
Changhan Wang, Anne Wu, Juan Pino
Facebook AI;Facebook AI;Facebook AI
Speech-to-text translation (ST) has recently become an increasingly popular topic of research, partly due to the development of benchmark datasets. Nevertheless, current datasets cover a limited number of languages. With the aim to foster research in massive multilingual ST and ST for low resource language pairs, we release CoVoST 2, a large-scale multilingual ST corpus covering translations from 21 languages into English and from English into 15 languages. This represents the largest open dataset available to date in terms of total volume and language coverage. Data sanity checks provide evidence about the quality of the data, which is released under a CC0 license. We also provide extensive speech recognition, bilingual and multilingual machine translation and ST baselines with an open-source implementation.
Zaid Alyafeai
News Commentary
[]
https://huggingface.co/datasets/news_commentary
https://opus.nlpl.eu/News-Commentary.php
unknown
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
other
A parallel corpus of News Commentaries provided by WMT for training SMT
200,000
sentences
Low
OPUS
WMT 19
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
Yes
machine translation
LREC
nan
conference
Language Resources and Evaluation Conference
Jorg Tiedemann
Department of Linguistics and Philology Uppsala University, Uppsala/Sweden
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
XGLUE
[]
https://huggingface.co/datasets/xglue
https://github.com/microsoft/XGLUE
CC BY 4.0
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to cross-lingual natural language understanding and generation. The training data of each task is in English while the validation and test data is present in multiple different languages. The following table shows which languages are present as validation and test data for each config.
10,000
sentences
Low
Microsoft
Universal Dependencies, MLQA, XNLI
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
https://arxiv.org/pdf/2004.01401.pdf
Arab
No
GitHub
Free
nan
Yes
part of speech tagging, question answering, natural language inference
arXiv
nan
preprint
nan
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou
Microsoft
In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks. Compared to GLUE (Wang et al., 2019), which is labeled in English for natural language understanding tasks only, XGLUE has two main advantages: (1) it provides 11 diversified tasks that cover both natural language understanding and generation scenarios; (2) for each task, it provides labeled data in multiple languages. We extend a recent cross-lingual pre-trained model Unicoder (Huang et al., 2019) to cover both understanding and generation tasks, which is evaluated on XGLUE as a strong baseline. We also evaluate the base versions (12-layer) of Multilingual BERT, XLM and XLM-R for comparison.
Zaid Alyafeai
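A minimal loading sketch for the benchmark via the HuggingFace hub, assuming the XNLI sub-task is exposed under the "xnli" configuration; as the description notes, training data is English-only while validation and test splits cover multiple languages, Arabic included:

```python
from datasets import load_dataset

# XNLI sub-task of XGLUE; the "xnli" config name is assumed from the hub card.
ds = load_dataset("xglue", "xnli")

# Training data is English-only; validation/test splits are per-language
# (split naming such as "validation.ar" is assumed from the hub card).
print(ds)             # lists the available splits
print(ds["train"][0])
```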
TED TALKS IWSLT
[]
https://huggingface.co/datasets/ted_talks_iwslt
https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download
CC BY-NC 4.0
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
crawling
The Web Inventory of Transcribed and Translated Talks (WIT3) is a collection of the original TED talks and their translated versions. The translations are available in more than 109 languages, though the distribution is not uniform.
19,670,000
tokens
Low
Fondazione Bruno Kessler
TED
WIT3 : Web Inventory of Transcribed and Translated Talks
https://aclanthology.org/2012.eamt-1.60.pdf
Arab
No
Gdrive
Free
nan
Yes
speech recognition
EAMT
nan
conference
European Association for Machine Translation
Mauro Cettolo, Christian Girardi, Marcello Federico
FBK – Fondazione Bruno Kessler Trento, Italy
We describe here a Web inventory named WIT3 that offers access to a collection of transcribed and translated talks. The core of WIT3 is the TED Talks corpus, which basically redistributes the original content published by the TED Conference website (http://www.ted.com). Since 2007, the TED Conference, based in California, has been posting all video recordings of its talks together with subtitles in English and their translations in more than 80 languages. Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community, thanks to its size, variety of topics, and covered languages. This effort repurposes the original content in a way which is more convenient for machine translation researchers.
Zaid Alyafeai
Universal Dependencies
[ { "Name": "ar_nyuad", "Dialect": "nan", "Volume": "738,889", "Unit": "tokens" }, { "Name": "ar_padt", "Dialect": "nan", "Volume": "282,384", "Unit": "tokens" }, { "Name": "ar_pud", "Dialect": "nan", "Volume": "20,751", "Unit": "tokens" } ]
https://huggingface.co/datasets/universal_dependencies
https://github.com/UniversalDependencies
unknown
2,020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages.
1,042,000
sentences
Low
Universal Dependencies(UD)
UDP (UDP-NYUAD), PADT, PUD
nan
nan
Arab
Yes
other
Free
nan
Yes
part of speech tagging, morphological analysis, dependency parsing
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
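The Arabic treebanks listed under Subsets can be loaded individually; a minimal sketch for the PADT treebank, assuming the "ar_padt" configuration and the field names shown on the hub card:

```python
from datasets import load_dataset

# Load the Prague Arabic Dependency Treebank subset of Universal Dependencies.
ds = load_dataset("universal_dependencies", "ar_padt")

ex = ds["train"][0]
# Each example carries parallel per-token lists; "upos" holds integer
# class labels for the universal POS tags.
for token, upos in zip(ex["tokens"], ex["upos"]):
    print(token, upos)
```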
Wojood
[]
nan
https://ontology.birzeit.edu/Wojood/
custom
2,022
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
Wojood consists of about 550K tokens (MSA and dialect) that are manually annotated with 21 entity types (e.g., person, organization, location, event, date, etc). It covers multiple domains and was annotated with nested entities. The corpus contains about 75K entities and 22.5% of which are nested.
550,000
tokens
Low
Birzeit University
nan
Wojood: Nested Arabic Named Entity Corpus and Recognition using BERT
https://arxiv.org/pdf/2205.09651.pdf
Arab
No
other
Upon-Request
nan
Yes
named entity recognition
arXiv
nan
preprint
nan
Mustafa Jarrar, Mohammed Khalilia, Sana Ghanem
Birzeit University
This paper presents Wojood, a corpus for Arabic nested Named Entity Recognition (NER). Nested entities occur when one entity mention is embedded inside another entity mention. Wojood consists of about 550K Modern Standard Arabic (MSA) and dialect tokens that are manually annotated with 21 entity types including person, organization, location, event and date. More importantly, the corpus is annotated with nested entities instead of the more common flat annotations. The data contains about 75K entities and 22.5% of which are nested. The inter-annotator evaluation of the corpus demonstrated a strong agreement with Cohen’s Kappa of 0.979 and an F1-score of 0.976. To validate our data, we used the corpus to train a nested NER model based on multi-task learning using the pre-trained AraBERT (Arabic BERT). The model achieved an overall micro F1-score of 0.884. Our corpus, the annotation guidelines, the source code and the pre-trained model are publicly available.
Zaid Alyafeai
KDE4
[]
https://huggingface.co/datasets/kde4
https://opus.nlpl.eu/KDE4.php
custom
2,012
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
A parallel corpus of KDE4 localization files (v.2). 92 languages, 4,099 bitexts
700,000
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
nan
conference
Language Resources and Evaluation Conference
Jorg Tiedemann
Department of Linguistics and Philology Uppsala University, Uppsala/Sweden
nan
Zaid Alyafeai
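A minimal loading sketch through the HuggingFace hub; the kde4 loader takes the language pair as keyword arguments, and the availability of the Arabic-English pair is assumed here:

```python
from datasets import load_dataset

# The kde4 loader selects the language pair via keyword arguments.
ds = load_dataset("kde4", lang1="ar", lang2="en")

print(ds["train"][0]["translation"])  # expected: {"ar": ..., "en": ...}
```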
Wikipedia
[]
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/
CC BY-SA 3.0
2,022
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
Wikipedia dataset containing cleaned articles of all languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language.
1,151,628
documents
Low
Wikimedia
nan
nan
nan
Arab
No
other
Free
nan
No
text generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
WikiQAar
[]
https://huggingface.co/datasets/wiki_qa_ar
https://github.com/qcri/WikiQAar
unknown
2,018
ar
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
WIKIQAar is a bilingual English--Arabic Question Answering corpus built on top of WIKIQA
3,047
sentences
Low
QCRI
WikiQA
WIKIQA: A Challenge Dataset for Open-Domain Question Answering
https://aclanthology.org/D15-1237.pdf
Arab
No
GitHub
Free
nan
No
question answering
EMNLP
nan
conference
Empirical Methods in Natural Language Processing
Yi Yang, Wen-tau Yih Christopher Meek
Georgia Institute of Technology, Microsoft, Microsoft
We describe the WIKIQA dataset, a new publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. Most previous work on answer sentence selection focuses on a dataset created using the TREC-QA data, which includes editor-generated questions and candidate answer sentences selected by matching content words in the question. WIKIQA is constructed using a more natural process and is more than an order of magnitude larger than the previous dataset. In addition, the WIKIQA dataset also includes questions for which there are no correct sentences, enabling researchers to work on answer triggering, a critical component in any QA system. We compare several systems on the task of answer sentence selection on both datasets and also describe the performance of a system on the problem of answer triggering using the WIKIQA dataset.
Zaid Alyafeai
Khaleej-2004
[]
https://huggingface.co/datasets/arbml/khaleej_2004
https://sourceforge.net/projects/arabiccorpus/files/
unknown
2,004
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
Extracted from the daily Arabic newspaper Akhbar al Khaleej, it includes 5,120 news articles corresponding to 2,855,069 words covering four topics: sport, local news, international news and economy.
5,690
documents
Low
INRIA
nan
Comparison of Topic Identification methods for Arabic Language
https://hal.inria.fr/inria-00000448/document
Arab
No
sourceforge
Free
nan
No
topic classification
nan
nan
preprint
nan
M. Abbas and K. Smaili
INRIA-LORIA
In this paper we present two well-known methods for topic identification. The first one is a TFIDF classifier approach, and the second one is a machine learning-based approach called Support Vector Machines (SVM). To our knowledge, there are few works on Arabic topic identification, so we decided to investigate it in this article. The corpus we used is extracted from the daily Arabic newspaper Akhbar Al Khaleej; it includes 5,120 news articles corresponding to 2,855,069 words covering four topics: sport, local news, international news and economy. According to our experiments, the results are encouraging both for the SVM and TFIDF classifiers; however, we have noticed the superiority of the SVM classifier and its high capability to distinguish topics.
Zaid Alyafeai
CCAligned
[]
https://huggingface.co/datasets/ccaligned_multilingual
https://opus.nlpl.eu/CCAligned.php
unknown
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and pairing documents whose URLs contain corresponding language codes.
1,219,374
sentences
Low
Multiple Institutions
nan
CCAligned: A Massive Collection of Cross-lingual Web-Document Pairs
http://aclanthology.lst.uni-saarland.de/2020.emnlp-main.480.pdf
Arab
No
other
Free
nan
No
machine translation
EMNLP
nan
conference
Empirical Methods in Natural Language Processing
Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, Philipp Koehn
Facebook AI; Facebook AI; Facebook AI; Johns Hopkins University
Cross-lingual document alignment aims to identify pairs of documents in two distinct languages that are of comparable content or translations of each other. In this paper, we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5% across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. In addition to curating this massive dataset, we introduce baseline methods that leverage cross-lingual representations to identify aligned documents based on their textual content. Finally, we demonstrate the value of this parallel documents dataset through a downstream task of mining parallel sentences and measuring the quality of machine translations from models trained on this mined data. Our objective in releasing this dataset is to foster new research in cross-lingual NLP across a variety of low, medium, and high-resource languages.
Zaid Alyafeai
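The URL-based pairing heuristic described above can be illustrated with a small, self-contained sketch; the URLs and the helper below are hypothetical and only demonstrate the idea of treating documents whose URLs differ only in an embedded language code as candidate pairs:

```python
import re

# Hypothetical URLs: pages whose URLs differ only in the language code
# are treated as candidate translation pairs.
urls = [
    "https://example.com/en/about-us",
    "https://example.com/ar/about-us",
    "https://example.com/en/contact",
]

LANG_CODE = re.compile(r"/(en|ar)/")

def url_key(url):
    # Normalize away the language code so translated pages share one key.
    return LANG_CODE.sub("/__LANG__/", url)

pairs = {}
for url in urls:
    match = LANG_CODE.search(url)
    if match:
        pairs.setdefault(url_key(url), {})[match.group(1)] = url

aligned = [p for p in pairs.values() if {"en", "ar"} <= p.keys()]
print(aligned)  # one en-ar candidate pair for /about-us
```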
Watan-2004
[]
https://huggingface.co/datasets/arbml/watan_2004
https://sourceforge.net/projects/arabiccorpus/files/
unknown
2,010
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
The Watan-2004 corpus contains about 20,000 articles covering the following six topics (categories): culture, religion, economy, local news, international news and sports. In this corpus, punctuation has been omitted intentionally in order to make it useful for language modeling.
20,000
sentences
Low
Multiple institutions
nan
Comparing TR-Classifier and KNN by using Reduced Sizes of Vocabularies
https://hal.archives-ouvertes.fr/hal-01586533/document
Arab
No
other
Free
nan
No
topic classification
CITALA
nan
conference
International Conference on Arabic Language Processing
M. Abbas, K. Smaili, and D. Berkani
CRSTDLA /Speech Processing Laboratory;NRIA-LORIA/Parole team, Villers les Nancy;NPS/ Signal and Communication laboratory
The aim of this study is topic identification by using two methods, in this case, a new one that we have proposed: TR-classifier which is based on computing triggers, and the well-known k Nearest Neighbors. Performances are acceptable, particularly for TR-classifier, though we have used reduced sizes of vocabularies. For the TR-Classifier, each topic is represented by a vocabulary which has been built using the corresponding training corpus. Whereas, the kNN method uses a general vocabulary, obtained by the concatenation of those used by the TR-Classifier. For the evaluation task, six topics have been selected to be identified: Culture, religion, economy, local news, international news and sports. An Arabic corpus has been used to achieve experiments.
Zaid Alyafeai
CCMatrix
[]
https://huggingface.co/datasets/yhavinga/ccmatrix
https://github.com/facebookresearch/LASER/tree/main/tasks/CCMatrix
unknown
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
Using one unified approach for 80 languages, 10.8 billion parallel sentences were mined, out of which 2.9 billion are aligned with English.
196,000,000
sentences
Low
Facebook
nan
CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB
https://arxiv.org/pdf/1911.04944.pdf
Arab
No
other
Free
nan
No
machine translation
arXiv
nan
preprint
nan
Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin
Facebook AI
We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totalling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 4.5 billion parallel sentences, out of which 661 million are aligned with English. 20 language pairs have more than 30 million parallel sentences, 112 more than 10 million, and most more than one million, including direct alignments between many European or Asian languages. To evaluate the quality of the mined bitexts, we train NMT systems for most of the language pairs and evaluate them on TED, WMT and WAT test sets. Using our mined bitexts only and no human translated parallel data, we achieve a new state-of-the-art for a single system on the WMT’19 test set for translation between English and German, Russian and Chinese, as well as German/French. In particular, our English/German system outperforms the best single one by close to 4 BLEU points and is almost on par with the best WMT’19 evaluation system which uses system combination and back-translation. We also achieve excellent results for distant language pairs like Russian/Japanese, outperforming the best submission at the 2019 workshop on Asian Translation (WAT).
Zaid Alyafeai
CrossSum
[]
https://huggingface.co/datasets/csebuetnlp/CrossSum
https://github.com/csebuetnlp/CrossSum
CC BY-NC-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
a large-scale dataset comprising 1.65 million cross-lingual article-summary samples in 1500+ language-pairs
72,795
documents
Low
Multiple Institutions
nan
CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs
https://arxiv.org/pdf/2112.08804.pdf
Arab
No
GitHub
Free
nan
No
summarization
arXiv
nan
preprint
nan
Tahmid Hasan, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, Rifat Shahriyar
Bangladesh University of Engineering and Technology (BUET), University of California, Los Angeles, Monash University, Swinburne University of Technology
We present CrossSum, a large-scale dataset comprising 1.65 million cross-lingual article-summary samples in 1500+ language-pairs constituting 45 languages. We use the multilingual XL-Sum dataset and align identical articles written in different languages via cross-lingual retrieval using a language-agnostic representation model. We propose a multi-stage data sampling algorithm and fine-tune mT5, a multilingual pretrained model, with explicit cross-lingual supervision with CrossSum and introduce a new metric for evaluating cross-lingual summarization. Results on established and our proposed metrics indicate that models fine-tuned on CrossSum outperform summarization+translation baselines, even when the source and target language pairs are linguistically distant. To the best of our knowledge, CrossSum is the largest cross-lingual summarization dataset and also the first-ever that does not rely on English as the pivot language. We are releasing the dataset, alignment and training scripts, and the models to spur future research on cross-lingual abstractive summarization. The resources can be found at https://github.com/csebuetnlp/CrossSum.
Zaid Alyafeai
Opus Wikipedia
[]
https://huggingface.co/datasets/opus_wikipedia
https://opus.nlpl.eu/Wikipedia.php
unknown
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek
1,000,000
sentences
Low
OPUS
nan
Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs
https://www.sciencedirect.com/science/article/pii/S2212017314005453
Arab
No
other
Free
nan
No
machine translation
PT
nan
journal
Procedia Technology
Krzysztof Wołk, Krzysztof Marasek
Polish Japanese Institute of Information Technology, Warsaw, Poland
Parallel sentences are a relatively scarce but extremely useful resource for many applications including cross-lingual retrieval and statistical machine translation. This research explores our methodology for mining such data from previously obtained comparable corpora. The task is highly practical since non-parallel multilingual data exist in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Here we propose a web crawling method for building subject-aligned comparable corpora from Wikipedia articles. We also introduce a method for extracting truly parallel sentences that are filtered out from noisy or just comparable sentence pairs. We describe our implementation of a specialized tool for this task as well as training and adaption of a machine translation system that supplies our filter with additional information about the similarity of comparable sentence pairs.
Zaid Alyafeai
SaudiNewsNet
[]
https://huggingface.co/datasets/saudinewsnet
https://github.com/inparallel/SaudiNewsNet
CC BY-NC-SA 4.0
2,015
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers.
31,030
documents
Low
-
nan
nan
nan
Arab
No
GitHub
Free
nan
No
language modeling, text generation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
OffensEval 2020
[]
https://huggingface.co/datasets/strombergnlp/offenseval_2020
https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission
CC BY 4.0
2,019
multilingual
mixed
social media
text
crawling and annotation(other)
The Arabic dataset consists of 10,000 tweets collected in April–May 2019 using the Twitter API with the language filter set to Arabic: lang:ar.
10,000
sentences
High
Multiple Institutions
Arabic OSACT4
SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020)
https://aclanthology.org/2020.semeval-1.188.pdf
Arab
No
other
Free
nan
Yes
offensive language detection
SemEval
nan
conference
Semantic Evaluation
Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, Çağrı Çöltekin
nan
We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.
Zaid Alyafeai
arwiki
[]
https://huggingface.co/datasets/CALM/arwiki
https://huggingface.co/datasets/CALM/arwiki
unknown
2,022
ar
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
This dataset is extracted from Arabic Wikipedia pages using the wikiextractor tool.
1,136,455
documents
Low
CALM
nan
nan
nan
Arab
No
other
Free
nan
No
text generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
UniMorph
[]
https://huggingface.co/datasets/universal_morphologies
https://github.com/unimorph/ara
CC BY-SA 3.0
2,015
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
167 languages have been annotated according to the UniMorph schema.
140,003
tokens
Low
Johns Hopkins University
nan
The Composition and Use of the Universal Morphological Feature Schema (UniMorph Schema)
https://unimorph.github.io/doc/unimorph-schema.pdf
Arab
No
GitHub
Free
nan
No
morphological analysis
nan
nan
preprint
nan
John Sylak-Glassman
Center for Language and Speech Processing Johns Hopkins University
nan
Zaid Alyafeai
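UniMorph data is distributed as tab-separated triples of lemma, inflected form and a semicolon-joined feature bundle; a minimal parsing sketch over a made-up line in that format:

```python
# A made-up line in the UniMorph tab-separated format:
# lemma <TAB> inflected form <TAB> semicolon-joined feature bundle
line = "lemma\tinflected_form\tN;PL;DEF"

lemma, form, features = line.rstrip("\n").split("\t")
feature_bundle = features.split(";")
print(lemma, form, feature_bundle)  # -> lemma inflected_form ['N', 'PL', 'DEF']
```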
CC-100
[]
https://huggingface.co/datasets/cc100
https://data.statmt.org/cc-100/
unknown
2,020
ar
mixed
other
text
crawling
monolingual datasets from Common Crawl for a variety of languages
7,132,000
documents
Low
Facebook
Common Crawl
CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data
https://aclanthology.org/2020.lrec-1.494.pdf
Arab
No
other
Free
nan
No
text generation, language modeling
LREC
nan
conference
Language Resources and Evaluation Conference
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave
Facebook AI
Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora, as long as their quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), which deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.
Zaid Alyafeai
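The CCNet pipeline behind CC-100 identifies document language with fastText; a minimal sketch of that step, assuming the public lid.176.bin language-identification model has been downloaded locally from fasttext.cc:

```python
import fasttext

# Assumes the public fastText language-identification model (lid.176.bin)
# has been downloaded beforehand, e.g. from https://fasttext.cc.
model = fasttext.load_model("lid.176.bin")

labels, scores = model.predict("هذا نص عربي قصير")
print(labels[0], scores[0])  # expected: __label__ar with a confidence score
```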
WiLI-2018
[]
https://huggingface.co/datasets/wili_2018
https://zenodo.org/record/841984#.YpBRIahBxD8
ODbL-1.0
2,018
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
WiLI-2018, the Wikipedia language identification benchmark dataset, contains 235,000 paragraphs in 235 languages. The dataset is balanced and a train-test split is provided.
1,000
sentences
Low
-
nan
The WiLI benchmark dataset for written language identification
https://arxiv.org/pdf/1801.07779.pdf
Arab
No
zenodo
Free
nan
Yes
language identification
arXiv
nan
preprint
nan
Martin Thoma
nan
This paper describes the WiLI-2018 benchmark dataset for monolingual written natural language identification. WiLI-2018 is a publicly available, free-of-charge dataset of short text extracts from Wikipedia. It contains 1,000 paragraphs for each of 235 languages, totaling 235,000 paragraphs. WiLI is a classification dataset: given an unknown paragraph written in one dominant language, it has to be decided which language it is.
Zaid Alyafeai
MMAC
[]
nan
http://www.ashrafraouf.com/mmac
unknown
2,010
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
The multi-modal Arabic corpus contains 6 million Arabic words selected from various sources covering old Arabic, religious texts, traditional language, modern language, different specialisations and very modern material from online “chat rooms.”
6,000,000
tokens
Low
Multiple Institutions
nan
Building a multi-modal Arabic corpus (MMAC)
https://link.springer.com/content/pdf/10.1007/s10032-010-0128-2.pdf
Arab
No
other
Free
nan
No
dictionary
IJDAR
nan
journal
International Journal on Document Analysis and Recognition
Ashraf AbdelRaouf, Colin A. Higgins, Tony Pridmore, Mahmoud Khalil
Misr International University, The University of Nottingham, The University of Nottingham, Ain Shams University
Traditionally, a corpus is a large structured set of text, electronically stored and processed. Corpora have become very important in the study of languages. They have opened new areas of linguistic research, which were unknown until recently. Corpora are also key to the development of optical character recognition (OCR) applications. Access to a corpus of both language and images is essential during OCR development, particularly while training and testing a recognition application. Excellent corpora have been developed for Latin-based languages, but few relate to the Arabic language. This limits the penetration of both corpus linguistics and OCR in Arabic-speaking countries. This paper describes the construction and provides a comprehensive study and analysis of a multi-modal Arabic corpus (MMAC) that is suitable for use in both OCR development and linguistics. MMAC currently contains six million Arabic words and, unlike previous corpora, also includes connected segments or pieces of Arabic words (PAWs) as well as naked pieces of Arabic words (NPAWs) and naked words (NWords); PAWs and Words without diacritical marks. Multi-modal data is generated from both text, gathered from a wide variety of sources, and images of existing documents. Text-based data is complemented by a set of artificially generated images showing each of the Words, NWords, PAWs and NPAWs involved. Applications are provided to generate a natural-looking degradation to the generated images. A ground truth annotation is offered for each such image, while natural images showing small paragraphs and full pages are augmented with representations of the text they depict. A statistical analysis and verification of the dataset has been carried out and is presented. MMAC was also tested using commercial OCR software and is publicly and freely available.
Zaid Alyafeai
XOR-TyDi QA
[]
https://huggingface.co/datasets/xor_tydi_qa
https://nlp.cs.washington.edu/xorqa/index.html
CC BY-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections.
5,235
sentences
Low
Multiple Institutions
TYDIQA
XOR QA: Cross-lingual Open-Retrieval Question Answering
https://arxiv.org/pdf/2010.11856.pdf
Arab
No
other
Free
nan
Yes
open-retrieval question answering
arXiv
nan
preprint
nan
Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, Hannaneh Hajishirzi
University of Washington, University of Washington, Google Research, The University of Texas at Austin; Allen Institute for AI
Multilingual question answering tasks typically assume that answers exist in the same language as the question. Yet in practice, many languages face both information scarcity, where languages have few reference articles, and information asymmetry, where questions reference concepts from other cultures. This work extends open-retrieval question answering to a cross-lingual setting enabling questions from one language to be answered via answer content from another language. We construct a large-scale dataset built on 40K information-seeking questions across 7 diverse non-English languages that TYDI QA could not find same-language answers for. Based on this dataset, we introduce a task framework, called Cross-lingual Open-Retrieval Question Answering (XOR QA), that consists of three new tasks involving cross-lingual document retrieval from multilingual and English resources. We establish baselines with state-of-the-art machine translation systems and cross-lingual pretrained models. Experimental results suggest that XOR QA is a challenging task that will facilitate the development of novel techniques for multilingual question answering. Our data and code are available at https://nlp.cs.washington.edu/xorqa/.
Zaid Alyafeai
Multilingual LAMA
[]
https://huggingface.co/datasets/m_lama
https://github.com/norakassner/mlama
CC BY-NC 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
machine translation
A multilingual version of LAMA. The underlying idea of LAMA is to query knowledge from pretrained LMs using templates, without any fine-tuning.
19,354
sentences
Low
LMU Munich
nan
Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
https://arxiv.org/pdf/2102.00894.pdf
Arab
No
GitHub
Free
nan
Yes
lm probing
arXiv
nan
preprint
nan
Nora Kassner, Philipp Dufter, Hinrich Schutze
Center for Information and Language Processing (CIS), LMU Munich
Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structural knowledge base queries, masked sentences such as “Paris is the capital of [MASK]” are used as probes. We translate the established benchmarks T-REx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT’s performance as a knowledge base language-independent or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT leverage this for better performance? We find that using mBERT as a knowledge base yields varying performance across languages and pooling predictions across languages improves performance. Conversely, mBERT exhibits a language bias; e.g., when queried in Italian, it tends to predict Italy as the country of origin.
Zaid Alyafeai
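The template-based probing idea ("Paris is the capital of [MASK]") can be reproduced with a stock fill-mask pipeline; a minimal sketch using mBERT, the model studied in the paper:

```python
from transformers import pipeline

# mBERT fill-mask probe, following the LAMA-style template from the abstract.
probe = pipeline("fill-mask", model="bert-base-multilingual-cased")

for prediction in probe("Paris is the capital of [MASK].")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```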
infopankki v1
[]
https://huggingface.co/datasets/opus_infopankki
https://opus.nlpl.eu/infopankki-v1.php
unknown
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
A parallel corpus of 12 languages, 66 bitexts.
63,000
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
nan
conference
Language Resources and Evaluation Conference
Jorg Tiedemann
Department of Linguistics and Philology Uppsala University
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
United Nations General Assembly Resolutions
[]
https://huggingface.co/datasets/un_ga
https://opus.nlpl.eu/UN.php
unknown
2,009
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
This is a collection of translated documents from the United Nations, originally compiled into a translation memory by Alexandre Rafalovitch and Robert Dale.
73,000
sentences
Low
OPUS
nan
United Nations General Assembly Resolutions: A Six-Language Parallel Corpus
https://aclanthology.org/2009.mtsummit-posters.15.pdf
Arab
No
other
Free
nan
No
machine translation
mtsummit
nan
conference
Machine Translation Summit XII
Alexandre Rafalovitch, Robert Dale
United Nations; Centre for Language Technology Macquarie University
In this paper we describe a six-ways parallel public-domain corpus consisting of 2100 United Nations General Assembly Resolutions with translations in the six official languages of the United Nations, with an average of around 3 million tokens per language. The corpus is available in a preprocessed, formatting-normalized TMX format with paragraphs aligned across multiple languages. We describe the background to the corpus and its content, the process of its construction, and some of its interesting properties.
Zaid Alyafeai
X-CSR
[]
https://huggingface.co/datasets/xcsr
https://inklab.usc.edu//XCSR/xcsr_datasets
unknown
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
machine translation
The original CSQA and CODAH datasets, which only have English versions, are automatically translated into 15 other languages, forming development and test sets for studying X-CSR.
1,300
sentences
Low
University of Southern California
nan
Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning
https://arxiv.org/pdf/2106.06937.pdf
Arab
No
other
Upon-Request
nan
Yes
commonsense reasoning
arXiv
nan
preprint
nan
Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, Xiang Ren
Department of Computer Science and Information Sciences Institute, University of Southern California
Commonsense reasoning research has so far been mainly limited to English. We aim to evaluate and improve popular multilingual language models (ML-LMs) to help advance commonsense reasoning (CSR) beyond English. We collect the Mickey corpus, consisting of 561k sentences in 11 different languages, which can be used for analyzing and improving ML-LMs. We propose Mickey Probe, a language-agnostic probing task for fairly evaluating the common sense of popular ML-LMs across different languages. Also, we create two new datasets, X-CSQA and XCODAH, by translating their English versions to 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning. To improve the performance beyond English, we propose a simple yet effective method — multilingual contrastive pretraining (MCP). It significantly enhances sentence representations, yielding a large performance gain on both benchmarks (e.g., +2.7% accuracy for X-CSQA over XLM-RL).
Zaid Alyafeai
CommonLanguage
[]
https://huggingface.co/datasets/anton-l/common_language
https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
CC BY 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
other
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
1
hours
Low
SpeechBrain
CommonVoice
nan
nan
Arab
No
GitHub
Free
nan
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Tatoeba Translation Challenge
[]
https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt
https://github.com/Helsinki-NLP/Tatoeba-Challenge/
CC BY-NC-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The Tatoeba Translation Challenge is a multilingual data set of machine translation benchmarks derived from user-contributed translations collected by Tatoeba.org and provided as a parallel corpus from OPUS. This dataset includes test and development data sorted by language pair. It includes test sets for hundreds of language pairs and is continuously updated. Please check the version number tag to refer to the release that you are using.
1,064,096,596
sentences
Low
Language Technology at the University of Helsinki
Tatoeba
nan
nan
Arab
No
GitHub
Free
nan
Yes
machine translation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
MFQA
[]
https://huggingface.co/datasets/clips/mqa
https://huggingface.co/datasets/clips/mqa
CC0
2,021
multilingual
mixed
web pages
text
crawling
MQA is a multilingual corpus of questions and answers parsed from Common Crawl. Questions are divided into two types: Frequently Asked Questions (FAQ) and Community Question Answering (CQA).
3,017,456
sentences
Low
University of Antwerp
Common Crawl
MFAQ: a Multilingual FAQ Dataset
https://arxiv.org/pdf/2109.12870.pdf
Arab
No
HuggingFace
Free
nan
No
frequently asked questions, question answering
arXiv
nan
preprint
nan
Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, Walter Daelemans
CLiPS Research Center University of Antwerp
In this paper, we present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages. Although this is significantly larger than existing FAQ retrieval datasets, it comes with its own challenges: duplication of content and uneven distribution of topics. We adopt a similar setup as Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) and test various bi-encoders on this dataset. Our experiments reveal that a multilingual model based on XLM-RoBERTa (Conneau et al., 2019) achieves the best results, except for English. Lower-resource languages seem to learn from one another as a multilingual model achieves a higher MRR than language-specific ones. Our qualitative analysis reveals the brittleness of the model on simple word changes. We publicly release our dataset, model and training scripts.
Zaid Alyafeai
OpenSubtitles
[]
https://huggingface.co/datasets/open_subtitles
https://opus.nlpl.eu/OpenSubtitles.php
unknown
2,016
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/.
83,600,000
sentences
Low
OPUS
nan
OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles
http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
nan
conference
Language Resources Evaluation Conference
Pierre Lison, Jorg Tiedemann
University of Oslo
We present a new major release of the OpenSubtitles collection of parallel corpora. The release is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in the preprocessing and alignment of the subtitles, such as the automatic correction of OCR errors and the use of meta-data to estimate the quality of each subtitle and score subtitle pairs.
Zaid Alyafeai
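A minimal loading sketch via the HuggingFace hub; the open_subtitles loader takes the language pair as keyword arguments, and the availability of the Arabic-English pair is assumed here:

```python
from datasets import load_dataset

# The open_subtitles loader selects the language pair via keyword arguments.
ds = load_dataset("open_subtitles", lang1="ar", lang2="en")

print(ds["train"][0]["translation"])  # expected: {"ar": ..., "en": ...}
```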
OSCAR Small
[]
https://huggingface.co/datasets/nthngdy/oscar-small
https://huggingface.co/datasets/nthngdy/oscar-small
CC0
2,022
multilingual
mixed
web pages
text
other
A processed version and smaller subset of OSCAR.
408,438
documents
Low
-
OSCAR
nan
nan
Arab
No
HuggingFace
Free
nan
Yes
language modeling, text generation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
GNOME
[]
https://huggingface.co/datasets/opus_gnome
https://opus.nlpl.eu/GNOME.php
unknown
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
A parallel corpus of GNOME localization files
800,000
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
nan
conference
Language Resources Evaluation Conference
Jorg Tiedemann
Department of Linguistics and Philology Uppsala University
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
OPUS Wikipedia
[]
https://huggingface.co/datasets/opus_wikipedia
https://opus.nlpl.eu/Wikipedia.php
unknown
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
This is a corpus of parallel sentences extracted from Wikipedia
1,000,000
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
nan
conference
Language Resources Evaluation Conference
Jorg Tiedemann
Department of Linguistics and Philology Uppsala University
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
BnL Historical Newspapers
[]
https://huggingface.co/datasets/bnl_newspapers
https://data.bnl.lu/data/historical-newspapers/
CC0
2,022
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
other
The BnL has digitised over 800,000 pages of Luxembourg newspapers.
1
documents
Low
BnL
nan
nan
nan
Arab
No
other
Free
nan
No
text generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Microsoft Terminology Collection
[]
https://huggingface.co/datasets/ms_terms
https://www.microsoft.com/en-us/language/terminology
custom
2,022
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange.
20,000
sentences
Low
Microsoft
nan
nan
nan
Arab
No
other
Free
nan
No
machine translation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
ArQAT-AQI: Answerable Question Identification in Arabic Tweets
[]
nan
https://www.dropbox.com/sh/coba3b1nqkyloa8/AAC4Sk5WQvtXZRgH5liBkMiGa?dl=0
unknown
2,017
ar
mixed
social media
text
other
Answerable Question Identification in Arabic Tweets
13,252
sentences
Medium
-
nan
nan
nan
Arab
No
Dropbox
Free
nan
No
answerable questions
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Troll Detection
[]
nan
https://www.dropbox.com/s/hqab7kp2zyex01h/Trolls%20Dataset.zip?dl=0
unknown
2,020
ar
mixed
social media
text
crawling and annotation(other)
Troll detection in tweets
128
sentences
Medium
-
nan
nan
nan
Arab
No
Dropbox
Free
nan
No
trolls detection
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
ArTest
[]
nan
https://www.dropbox.com/s/openq7fgt3kd6jg/Artest-Test-Collection.zip?dl=0
unknown
2,020
ar
mixed
web pages
text
crawling and annotation(other)
ArTest was built on top of the ArabicWeb16 web collection. If you are interested in obtaining the collection, please check the ArabicWeb16 website.
10,529
sentences
Low
-
ArabicWeb16
ArTest: The First Test Collection for Arabic Web Search with Relevance Rationales
nan
Arab
No
Dropbox
Free
nan
No
relevance judgments, judgments rationale
SIGIR
nan
conference
Special Interest Group on Information Retrieval
Maram Hasanain, Yassmine Barkallah, Reem Suwaileh, Mucahid Kutlu, Tamer Elsayed
Multiple Institutions
The scarcity of Arabic test collections has long hindered information retrieval (IR) research over the Arabic Web. In this work, we present ArTest, the first large-scale test collection designed for the evaluation of ad-hoc search over the Arabic Web. ArTest uses ArabicWeb16, a collection of around 150M Arabic Web pages as the document collection, and includes 50 topics, 10,529 relevance judgments, and (more importantly) a rationale behind each judgment. To our knowledge, this is also the first IR test collection that includes rationales of primary assessors (i.e., topic developers) for their relevance judgments, exhibiting a useful resource for understanding the relevance phenomena. Finally, ArTest is made publicly-available for the research community.
Zaid Alyafeai
ArMATH
[]
https://huggingface.co/datasets/arbml/ArMATH
https://github.com/reem-codes/ArMATH
unknown
2,022
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
There are 6,000 samples and 883 templates. A template is an equation once the variables have been replaced with ordered placeholders.
6,000
sentences
Low
-
nan
ArMATH: a Dataset for Solving Arabic Math Word Problems
nan
Arab
No
GitHub
Free
nan
Yes
math solving
LREC
nan
conference
Language Resources and Evaluation Conference
Reem Ali Alghamdi, Zhenwen Liang and Xiangliang Zhang
nan
nan
Zaid Alyafeai
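The notion of a template (an equation whose numbers are replaced with ordered placeholders) can be illustrated with a small sketch; the equations below are made up and the placeholder naming is an assumption for illustration:

```python
import re

def to_template(equation):
    # Replace each number with an ordered placeholder n0, n1, ...
    counter = iter(range(1000))
    return re.sub(r"\d+(?:\.\d+)?", lambda m: f"n{next(counter)}", equation)

# Made-up examples: different problems can share a single template.
print(to_template("x = 12 + 7"))   # -> x = n0 + n1
print(to_template("x = 3 + 250"))  # -> x = n0 + n1
```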
ArabScribe
[]
nan
https://camel.abudhabi.nyu.edu/arabscribe/
custom
2,017
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
other
The ArabScribe dataset contains 10,000 transcriptions of Arabic words with both Roman and Arabic keyboards based on audio impressions of native and non-native speakers of Arabic.
3,234
tokens
Low
NYU Abu Dhabi
nan
Robust Dictionary Lookup in Multiple Noisy Orthographies
https://aclanthology.org/W17-1315.pdf
Arab
No
CAMeL Resources
Free
nan
No
dictionary
WANLP
nan
workshop
Arabic Natural Language Processing Workshop
Lingliang Zhang, Nizar Habash and Godfried Toussaint
NYU
We present the MultiScript Phonetic Search algorithm to address the problem of language learners looking up unfamiliar words that they heard. We apply it to Arabic dictionary lookup with noisy queries done using both the Arabic and Roman scripts. Our algorithm is based on a computational phonetic distance metric that can be optionally machine learned. To benchmark our performance, we created the ArabScribe dataset, containing 10,000 noisy transcriptions of random Arabic dictionary words. Our algorithm outperforms Google Translate’s “did you mean" feature, as well as the Yamli smart Arabic keyboard.
Zaid Alyafeai
Arabic-News
[]
https://huggingface.co/datasets/arbml/Arabic_News
https://github.com/motazsaad/Arabic-News
unknown
2,019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
Arabic news for language modeling, collected from BBC Arabic, EuroNews, Aljazeera, CNN Arabic and RT Arabic.
713,134
documents
Low
-
nan
nan
nan
Arab
No
GitHub
Free
nan
No
text generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Arabic-Stories-Corpus
[]
https://huggingface.co/datasets/arbml/Arabic_Stories_Corpus
https://github.com/motazsaad/Arabic-Stories-Corpus
unknown
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
web pages
text
crawling
An Arabic stories corpus collected from mawdoo3.
146
documents
Low
-
nan
nan
nan
Arab
No
GitHub
Free
nan
No
story generation, language modeling
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Aghlat
[]
nan
https://github.com/linuxscout/aghlat
unknown
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
Arabic misspelling corpus
331
tokens
Low
-
nan
nan
nan
Arab
No
GitHub
Free
nan
No
misspelling detection, misspelling correction
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Annotated tweet corpus in Arabizi, French and English
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-W0323/
Non Commercial Use - ELRA END USER
2,022
multilingual
mixed
social media
text
crawling and annotation(other)
In total, 17,103 sequences were annotated from 585,163 tweets (196,374 in English, 254,748 in French and 134,041 in Arabizi), including the themes “Others” and “Incomprehensible”. Among these sequences, 4,578 sequences having at least 20 tweets annotated with the 3 predefined themes (Hooliganism, Racism and Terrorism) were obtained, including 1,866 sequences with an opinion change. They are distributed as follows: 2,141 sequences in English (57,655 tweets), 1,942 sequences in French (48,854 tweets) and 495 sequences in Arabizi (21,216 tweets). A sub-corpus of 8,733 tweets (1,209 in English, 3,938 in French and 3,585 in Arabizi) annotated as “hateful”, according to topic/opinion annotations and by selecting tweets that contained insults, is also provided.
134,041
sentences
High
ELDA
nan
nan
nan
Latn
No
ELRA
Upon-Request
nan
No
topic classification, theme classification, sentiment analysis
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Arabic dictionary of inflected words
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0098/
Non Commercial Use - ELRA END USER
2,017
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The Arabic dictionary of inflected words consists of a list of 6 million inflected forms, fully vowelized, generated in compliance with the grammatical rules of Arabic and tagged with grammatical information which includes POS and grammatical features, including number, gender, case, definiteness, tense, mood and compatibility with clitic agglutination.
6,000,000
tokens
Low
-
nan
nan
nan
Arab
No
ELRA
With-Fee
4,500.00€
No
lexicon analysis, part of speech tagging
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Arabic Morphological Dictionary
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0088/
Non Commercial Use - ELRA END USER
2,012
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The Arabic Morphological Dictionary contains 4,912,749 entries, including 3,374,852 nouns, 1,537,699 verbs and 198 grammatical words.
4,912,749
tokens
Low
-
nan
nan
nan
Arab
No
ELRA
With-Fee
450.00€
No
morphological analysis
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
ArabLEX: Database of Arabic General Vocabulary (DAG)
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0131/
Non Commercial Use - ELRA END USER
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
A comprehensive full-form lexicon of Arabic general vocabulary including all inflected, conjugated and cliticized forms. Each entry is accompanied by a rich set of morphological, grammatical, and phonological attributes. Ideally suited for NLP applications, DAG provides precise phonemic transcriptions and full vowel diacritics designed to enhance Arabic speech technology.
87,930,738
sentences
Low
ELRA
nan
nan
nan
Arab
No
ELRA
With-Fee
42,000.00€
No
morphological analysis, phonological analysis, grammatical analysis
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
A-SpeechDB
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0315/
Non Commercial Use - ELRA END USER
2,011
ar
ar-EG: (Arabic (Egypt))
transcribed audio
spoken
other
A-SpeechDB© is an Arabic speech database suited for training acoustic models for Arabic phoneme-based speaker-independent automatic speech recognition systems. The database contains about 20 hours of continuous speech recorded through one desktop omni microphone by 205 native speakers from Egypt (about 30% female and 70% male), aged between 20 and 45.
20
hours
Low
-
nan
nan
nan
Arab
No
ELRA
With-Fee
1,000.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
PhraseBank: Collins Multilingual database (MLD)
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-T0377/
Non Commercial Use - ELRA END USER
2,016
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
other
The PhraseBank consists of 2,000 phrases in 28 languages (Arabic, Chinese, Croatian, Czech, Danish, Dutch, American English, British English, Farsi, Finnish, French, German, Greek, Hindi, Italian, Japanese, Korean, Norwegian, Polish, Portuguese (Iberian), Portuguese (Brazilian), Russian, Spanish (Iberian), Spanish (Latin American), Swedish, Thai, Turkish, Vietnamese). Phrases are organised under 12 main topics and 67 subtopics. Covered topics are: talking to people, getting around, accommodation, shopping, leisure, communications, practicalities, health and beauty, eating and drinking, time.
2,000
sentences
Low
-
nan
nan
nan
Arab
No
ELRA
With-Fee
2,240.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Comprehensive Word Lists for Chinese, Japanese, Korean and Arabic
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-M0071/
Non Commercial Use - ELRA END USER
2,019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
Comprehensive monolingual word lists for both Simplified and Traditional Chinese, Japanese, Korean and Arabic, including a full-form Arabic word list. For Simplified and Traditional Chinese, Japanese and Korean, we provide readings as well, making them ideal for speech-related applications such as speech synthesis. The two Arabic databases include both vocalized and romanized Arabic.
nan
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
37,500.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
An-Nahar Newspaper Text Corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-W0027/
Non Commercial Use - ELRA END USER
2,001
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
The An-Nahar Lebanon Newspaper Text Corpus comprises articles in Standard Arabic from 1995 to 2000 (six years), stored as HTML files on CD-ROM media. Each year contains 45,000 articles and 24 million words.
45,000
documents
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
3,024.00€
No
language modeling, text generation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Database of Arabic Plurals
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0121/
Non Commercial Use - ELRA END USER
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
This database covers both regular and irregular Arabic plurals, and was developed by experts over a period of several years. The data includes various grammatical attributes such as part-of-speech, collectivity codes, gender codes, and full vocalization.
nan
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
1,875.00€
No
grammatical analysis, gender identification, speech recognition, part of speech tagging
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Database of Arab Names
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0122/
Non Commercial Use - ELRA END USER
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
A comprehensive database of Arabic personal names and name variants mapped to the original Arabic script, with a large variety of supplementary information. The database consists of 6,500,000 terms.
6,500,000
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
11,250.00€
No
part of speech tagging
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
MSDA dataset
[]
nan
https://msda.um6p.ma/msda_datasets
CC BY 2.0
2,021
ar
mixed
social media
text
crawling and annotation(other)
Tweets annotated for sentiment analysis and topic detection.
50,000
sentences
Medium
MSDA-UM6P
nan
An open access NLP dataset for Arabic dialects: Data collection, labeling, and model construction
https://arxiv.org/abs/2102.11000
Arab
No
OneDrive
Free
nan
No
sentiment analysis, topic classification
nan
nan
nan
nan
nan
nan
nan
Saad Benjelloun
Dialectal Arabic Code-Switching Dataset
[]
https://huggingface.co/datasets/arbml/Dialectal_Speech_Code_Switching
https://github.com/qcri/Arabic_speech_code_switching
MIT License
2,020
ar
ar-EG: (Arabic (Egypt))
transcribed audio
spoken
manual curation
The dataset studies code-switching between Egyptian Arabic and Modern Standard Arabic in the broadcast domain (see the loading sketch after this record).
2
hours
Medium
Qatar Computing Research Institute
ADI-5
Effects of Dialectal Code-Switching on Speech Modules: A Study using Egyptian Arabic Broadcast Speech
http://www.interspeech2020.org/uploadfile/pdf/Wed-1-10-5.pdf
Arab
No
GitHub
Free
nan
No
word-level code switching, code switching
nan
5.0
nan
nan
Shammur Absar Chowdhury, Younes Samih, Mohamed Eldesouki, Ahmed Ali
nan
The intra-utterance code-switching (CS) is defined as the alternation between two or more languages within the same utterance. Despite the fact that spoken dialectal code-switching (DCS) is more challenging than CS, it remains largely unexplored. In this study, we describe a method to build the first spoken DCS corpus. The corpus is annotated at the token-level minding both linguistic and acoustic cues for dialectal Arabic. For detailed analysis, we study Arabic automatic speech recognition (ASR), Arabic dialect identification (ADI), and natural language processing (NLP) modules for the DCS corpus. Our results highlight the importance of lexical information for discriminating the DCS labels. We observe that the performance of different models is highly dependent on the degree of code-mixing at the token-level as well as its complexity at the utterance-level.
Nouamane Tazi
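Since this record lists a Hugging Face mirror, the corpus can presumably be inspected programmatically. The sketch below is illustrative only: it assumes the `datasets` library is installed and that the arbml mirror loads with its default configuration; split and column names are not confirmed by the record above.

```python
from datasets import load_dataset

# Minimal sketch: load the assumed arbml mirror of the corpus.
ds = load_dataset("arbml/Dialectal_Speech_Code_Switching")
split = next(iter(ds))   # first available split, whatever it is named
print(ds)                # splits and row counts
print(ds[split][0])      # one annotated example
```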
Database of Foreign Names in Arabic
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-L0124/
Non Commercial Use - ELRA END USER
2,019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
This database covers non-Arabic names, their Arabic equivalents, and Arabic script variants for each name (with the most important variant given first).
nan
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
3,750.00€
No
machine translation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
DixAF: Bilingual Dictionary French Arabic, Arabic French
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-M0040/
Non Commercial Use - ELRA END USER
2,004
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
DixAF (Dictionnaire bilingue français arabe, arabe français - Bilingual Dictionary French Arabic, Arabic French) is jointly owned by CNRS and ENS Lettres et Sciences Humaines. It was developed by Fathi Debili, a CNRS researcher, and consists of around 125,000 binary links between ca. 43,800 French entries and ca. 35,000 Arabic entries.
35,000
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
18,000.00€
No
machine translation
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
LC-STAR: Standard Arabic Phonetic lexicon
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0247/
Non Commercial Use - ELRA END USER
2,007
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The lexicon comprises 110,271 entries distributed over three categories: (1) 52,981 common word entries, extracted from a corpus of more than 13 million words spanning six domains (sports/games, news, finance, culture/entertainment, consumer information, personal communications) so as to reach at least 95% self-coverage per domain, plus a list of closed-set (function) word classes; (2) 50,135 proper names (person names, family names, cities, streets, companies and brand names) divided into three domains: first and last names (9,738 entries), place names (22,998 entries) and organisations (17,309 entries), with multi-word names such as New_York counted as single entries; (3) 7,155 special application words translated from English terms defined by the LC-STAR consortium, covering numbers, letters, abbreviations and specific vocabulary for voice-controlled applications (information retrieval, control of consumer devices, etc.). A sketch of the self-coverage computation follows this record.
110,271
tokens
Low
European Commission
nan
nan
nan
Arab
No
ELRA
With-Fee
27,625.00€
No
machine translation, speech recognition, lexicon analysis
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
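The LC-STAR record above targets at least 95% self-coverage per domain. The sketch below shows one plausible reading of that metric (the fraction of corpus tokens covered by a word list); it is not the consortium's documented procedure, and the helper name is hypothetical.

```python
# Illustrative sketch of lexical self-coverage: the fraction of corpus
# tokens that appear in a given word list. This is a plausible reading of
# the "95% self coverage" target, not LC-STAR's exact method.
def self_coverage(word_list, corpus_tokens):
    vocab = set(word_list)
    covered = sum(1 for tok in corpus_tokens if tok in vocab)
    return covered / len(corpus_tokens) if corpus_tokens else 0.0

# Hypothetical usage: a word list built from 4 of 5 corpus tokens
tokens = "الخبر نشر في الصحيفة اليوم".split()
print(self_coverage(tokens[:4], tokens))  # -> 0.8
```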
MArSUM: Moroccan Articles Summarisation
[]
https://huggingface.co/datasets/arbml/MArSum
https://github.com/KamelGaanoun/MoroccanSummarization
CC BY 4.0
2,022
ar
ar-MA: (Arabic (Morocco))
news articles
text
crawling
MArSUM is the first open corpus intended for Moroccan-dialect text summarization. The articles were retrieved from the GOUD.ma website and filtered to retain only Moroccan dialect, yielding almost 20k articles with their titles (see the loading sketch after this record).
20,000
documents
Low
INSEA-Morocco (Institut National de Statistique et d'Économie Appliquée)
nan
Automatic Text Summarization for Moroccan Arabic Dialect Using an Artificial Intelligence Approach
https://link.springer.com/chapter/10.1007/978-3-031-06458-6_13
Arab
No
GitHub
Free
nan
Yes
summarization
CBI'22
nan
conference
International Conference of Business Intelligence
Kamel Gaanoun, Abdou Mohamed Naira, Anass Allak, Imade Benelallam
INSEA, AIOX Labs
A major advantage of artificial intelligence is its ability to automatically perform tasks at a human-like level quickly; this is needed in many fields, and more particularly in Automatic Text Summarization (ATS). Several advances related to this technique were made in recent years for both extractive and abstractive approaches, notably with the advent of sequence-to-sequence (seq2seq) and Transformers-based models. In spite of this, the Arabic language is largely less represented in this field, due to its complexity and a lack of datasets for ATS. Although some ATS works exist for Modern Standard Arabic (MSA), there is a lack of ATS works for the Arabic dialects that are more prevalent on social networking platforms and the Internet in general. Intending to take an initial step toward meeting this need, we present the first work of ATS concerning the Moroccan dialect known as Darija. This paper introduces the first dataset intended for the summarization of articles written in Darija. In addition, we present state-of-the-art results based on the ROUGE metric for extractive methods based on BERT embeddings and K-MEANS clustering, as well as abstractive methods based on Transformers models.
Kamel GAANOUN
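MArSUM also lists a Hugging Face mirror, so a similar hedged loading sketch applies; the default configuration and column names below are assumptions, not confirmed by this record.

```python
from datasets import load_dataset

# Minimal sketch: enumerate splits of the assumed arbml mirror.
marsum = load_dataset("arbml/MArSum")
for name, split in marsum.items():
    print(name, split.num_rows, split.column_names)
```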
Le Monde Diplomatique: Arabic tagged corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-W0049/
Non Commercial Use - ELRA END USER
2,009
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
This corpus contains 102,960 vowelized, lemmatized and tagged words.
102,960
tokens
Low
nan
nan
nan
Arab
Yes
ELRA
With-Fee
400.00€
No
grammatical analysis, morphological analysis
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
Multilingual Dictionary of Sports: English-French-Arabic trilingual database
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-T0372_04/
Non Commercial Use - ELRA END USER
2,009
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
This dictionary was produced within the French national project EuRADic (European and Arabic Dictionaries and Corpora), as part of the Technolangue programme funded by the French Ministry of Industry.
40,000
tokens
Low
French Ministry of Industry
nan
nan
nan
Arab
No
ELRA
With-Fee
200.00€
No
dictionary
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NE3L: named entities Arabic corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-W0078/
Non Commercial Use - ELRA END USER
2,014
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
The NE3L project (Named Entities 3 Languages) consisted of annotating corpora in three languages (Arabic, Chinese and Russian) with named entities. The text data were extracted from newspapers and cover various topics.
103,363
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
5,000.00€
No
named entity recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NEMLAR: Broadcast News Speech Corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0219/
Non Commercial Use - ELRA END USER
2,006
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
other
The NEMLAR Broadcast News Speech Corpus consists of about 40 hours of Standard Arabic news broadcasts. The broadcasts were recorded from four radio stations: Medi1, Radio Orient, RMC (Radio Monte Carlo) and RTM (Radio Television Maroc). Each broadcast contains between 25 and 30 minutes of news and interviews (259 distinct speakers identified). The recordings were carried out in three periods between 30 June 2002 and 18 July 2005. All files were recorded in linear PCM format, 16 kHz, 16 bit.
40
hours
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
300.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NEMLAR: Speech Synthesis Corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0220/
Non Commercial Use - ELRA END USER
2,006
ar
ar-EG: (Arabic (Egypt))
transcribed audio
spoken
other
The NEMLAR Speech Synthesis Corpus contains the recordings of 2 native Egyptian Arabic speakers (male and female, 35 and 27 years old respectively) recorded in a studio over 2 channels (voice + laryngograph). The recordings comprise more than 10 hours of data with transcriptions.
10
hours
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
1,000.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NEMLAR: Written Corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-W0042/
Non Commercial Use - ELRA END USER
2,006
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The NEMLAR Written Corpus consists of about 500,000 words of Arabic text from 13 different categories, aiming to achieve a well-balanced corpus that represents the variety of syntactic, semantic and pragmatic features of the modern Arabic language.
500,000
tokens
Low
nan
nan
nan
Arab
No
ELRA
With-Fee
300.00€
No
lexical analysis, part of speech tagging
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NetDC Arabic BNSC: Broadcast News Speech Corpus
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0157/
Non Commercial Use - ELRA END USER
2,007
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
other
The database contains ca. 22.5 hours of broadcast news speech recorded from Radio Orient (France) over a 3-month period between November 2001 and January 2002 (37 news broadcasts, 32 from the 5.55 pm news and 5 from the 10.55 pm news, with about 90 distinct speakers identified).
22.5
hours
Low
ELDA
nan
nan
nan
Arab
No
ELRA
With-Fee
200.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
NAFIS: Normalized Arabic Fragments for Inestimable Stemming
[]
nan
https://catalog.elra.info/en-us/repository/browse/ELRA-W0127/
Non Commercial Use - ELRA END USER
2,018
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The corpus has the following characteristics: 37 sentences; an average sentence length of 5.05 words, with the longest being 10 words; declarative, interrogative, imperative and exclamatory sentences accounting for 37.84%, 32.43%, 16.22% and 13.51% respectively; and 154 tokens with an average of 5.95 stemming solutions per token.
154
tokens
Low
nan
nan
nan
Arab
Yes
ELRA
Free
nan
No
stemming
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai
OrienTel Egypt MCA: Modern Colloquial Arabic database
[]
nan
https://catalogue.elra.info/en-us/repository/browse/ELRA-S0221/
Non Commercial Use - ELRA END USER
2,006
ar
ar-EG: (Arabic (Egypt))
transcribed audio
spoken
manual curation
The OrienTel Egypt MCA (Modern Colloquial Arabic) database comprises 750 Egyptian speakers (398 males, 352 females) recorded over the Egyptian fixed and mobile telephone network.
18,571
sentences
Low
OrienTel
nan
nan
nan
Arab
No
ELRA
With-Fee
22,500.00€
No
speech recognition
nan
nan
nan
nan
nan
nan
nan
Zaid Alyafeai